Purpose: Teamwork is fundamental to medical practice and relies on seamless collaboration among professionals with different tasks. Integrating robotic systems into this environment demands smooth interactions. Human action recognition, which infers a person's state without explicit input, can support this. We focus on handovers between medical staff, using the observed actions as implicit cues that allow a robotic assistant to take over the role of the giving party in such scenarios.
Methods: Processing skeletal information with different machine learning algorithms makes it possible to derive actions from sequential image data. Transferred to the medical context, we infer actions defined for each situation in two datasets, a surgery in the operating room (OR) and a care intervention in the patient ward, each depicting a handover between staff. By abstracting movement patterns across individuals through the skeletal representation, we leverage the spatiotemporal information of medical handovers to enable future robotic systems to interact based on implicit cues.
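To illustrate the general idea of skeleton-based action recognition, the following is a minimal sketch, not the authors' implementation: a single spatio-temporal graph convolution block in the spirit of ST-GCN, applied to a sequence of joint coordinates. The joint count, adjacency matrix, channel sizes, and class count are placeholder assumptions.

```python
# Minimal sketch (assumed, simplified): classify a skeleton sequence with one
# spatio-temporal graph convolution block in PyTorch.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    """Spatial graph convolution over joints followed by a temporal convolution."""
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Normalized adjacency matrix (joints x joints) encodes the skeleton graph.
        self.register_buffer("A", adjacency)
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                             # x: (batch, C, frames, joints)
        x = self.spatial(x)                           # mix channels per joint
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate along skeleton edges
        return self.relu(self.temporal(x))            # aggregate along time

class SkeletonActionClassifier(nn.Module):
    def __init__(self, num_joints, num_classes, in_channels=3):
        super().__init__()
        adjacency = torch.eye(num_joints)   # placeholder; use the real skeleton graph
        self.block = STGCNBlock(in_channels, 64, adjacency)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                   # x: (batch, 3, frames, joints)
        x = self.block(x)
        x = x.mean(dim=(2, 3))              # global average over time and joints
        return self.head(x)                 # per-action logits

# Example: 3D joint coordinates for 100 frames of a 25-joint skeleton.
model = SkeletonActionClassifier(num_joints=25, num_classes=6)
logits = model(torch.randn(8, 3, 100, 25))  # -> (8, 6) action scores
```

A full ST-GCN or SkateFormer stacks many such spatio-temporal layers with a learned or partitioned skeleton graph; the sketch only shows how spatial (joint) and temporal (frame) aggregation are combined on skeletal input.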
Results: We report an F1 score of for the OR dataset with ST-GCN and an F1 score of for the Ward dataset with the SkateFormer model for human action recognition. The defined actions were well separated in the confusion matrix, with limitations for actions involving rapid transitions, such as approach and reach, and for the handover actions in the OR.
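The following is a minimal sketch, assumed rather than taken from the authors' evaluation code, of how such metrics can be computed from predicted and ground-truth action labels with scikit-learn; the label names stand in for the defined handover phases.

```python
# Minimal sketch (assumed): F1 score and confusion matrix for action labels.
from sklearn.metrics import f1_score, confusion_matrix

y_true = ["approach", "reach", "handover", "retract", "idle", "reach"]
y_pred = ["approach", "approach", "handover", "retract", "idle", "reach"]

# Macro-averaged F1 weights every action class equally, regardless of frequency.
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))

# The confusion matrix shows which phases are mixed up, e.g. the rapid
# transition between approach and reach.
labels = ["approach", "reach", "handover", "retract", "idle"]
print(confusion_matrix(y_true, y_pred, labels=labels))
```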
Conclusion: The handover phases in two medical contexts, a minimally invasive surgery and a wound dressing change in the patient ward, are recognized with the proposed framework. This is a first step toward integrating robotic assistance into the handover of medical material or instruments.