Adapting goals and motivational messages on smartphones for motivation to walk (Smart Health, vol. 32, Article 100482)
Pub Date: 2024-04-17 | DOI: 10.1016/j.smhl.2024.100482
David Rei, Céline Clavel, Jean-Claude Martin, Brian Ravenet
Physical activity is one of the most recognised means of disease prevention. While several studies have investigated different techniques for motivating people to walk (e.g. adaptive goal setting), only a few have considered user profiles to personalise interactions. In this article, we propose a new interaction model which adapts walking goals and motivational messages both statically (using the initial physical activity level) and dynamically (using previous days’ performance). We explain how we implemented this model in a mobile application that counts and displays in real time the number of steps, an adapted daily goal and personalised motivational messages throughout the day. We describe two field studies conducted with 32 and 50 users over four weeks, comparing the impacts of adapted daily goals and personalised motivational messages on users’ step counts and motivation to walk. Participants using the adapted version of the mobile application displayed an increase in their motivation to walk after the intervention and were more physically active than participants using non-adapted versions.
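The abstract does not spell out the adaptation rule, but the combination it describes (a static baseline from the initial activity level plus a dynamic component from previous days' performance) can be sketched roughly as below. The rolling-average rule, the ratios and all names are illustrative assumptions, not the authors' model.

```python
from statistics import mean

def adapted_daily_goal(baseline_steps, recent_steps, boost=1.1,
                       floor_ratio=0.8, cap_ratio=1.5):
    """Hypothetical adaptive goal rule: nudge the goal slightly above a
    rolling average of recent performance, bounded around the user's
    static baseline (initial physical activity level)."""
    if not recent_steps:                      # no history yet: static goal
        return round(baseline_steps * boost)
    dynamic = mean(recent_steps) * boost      # dynamic part: previous days
    lower = baseline_steps * floor_ratio      # never demotivating...
    upper = baseline_steps * cap_ratio        # ...never unreachable
    return round(min(max(dynamic, lower), upper))

print(adapted_daily_goal(6000, [5200, 7400, 6800]))  # -> 7113
```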
{"title":"Adapting goals and motivational messages on smartphones for motivation to walk","authors":"David Rei, Céline Clavel, Jean-Claude Martin, Brian Ravenet","doi":"10.1016/j.smhl.2024.100482","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100482","url":null,"abstract":"<div><p>Physical activity is one of the most recognised means of disease prevention. While several studies investigated different techniques for motivating people to walk (e.g. adaptive <em>goal setting</em>), only few studies considered user profiles to personalise interactions. In this article, we propose a new interaction model which adapts walking goals and motivational messages in a static way (using the initial physical activity level) and in a dynamic way (using previous days’ performance). We explain how we implemented this model in a mobile application counting and displaying in real time the number of steps, an adapted daily goal and personalised motivational messages throughout the day. We describe two field studies conducted with 32 and 50 users over four weeks. We compare the impacts of adapted daily goals and personalised motivational messages on users’ step counts and motivation to walk. Participants using the adapted version of the mobile application displayed an increase in their motivation to walk after the intervention and were more physically active than participants using non adapted versions of the mobile application.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100482"},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140638111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A bio-inspired model for robust navigation assistive devices (Smart Health, vol. 33, Article 100484)
Pub Date: 2024-04-17 | DOI: 10.1016/j.smhl.2024.100484
Simon L. Gay, Edwige Pissaloux, Jean-Paul Jamont
This paper proposes a new implementation and evaluation, in a real-world environment, of a bio-inspired predictive navigation model for mobility control, suitable especially for assisting visually impaired people and autonomous mobile systems. The model relies on the interactions between formal models of three types of neurons identified in the mammalian brain and involved in navigation tasks, namely place cells, grid cells and head-direction cells, to construct a topological model of the environment in the form of a decentralized navigation graph. Previously tested in virtual environments, this model demonstrated a high tolerance to motion drift, making it possible to map large environments without drift correction, as well as robustness to environment changes. The presented implementation is based on a stereoscopic camera and is evaluated on its ability to map an unknown real environment and to guide a person or an autonomous mobile robot through it. The evaluation results confirm the effectiveness of the proposed bio-inspired navigation model for building a path map and for localizing and guiding a person along this path. The model predictions remain robust to environment changes and allow traveled distances to be estimated with an error rate below 3% over test paths of up to 100 m. Tests performed on a robotic platform also demonstrated the pertinence of the navigation data produced by this model for guiding an autonomous system. These results open the way toward efficient wearable assistive devices for the independent navigation of visually impaired people.
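As a rough illustration of the data structure the abstract describes, a decentralized topological graph whose nodes behave like place cells, each carrying a head-direction estimate and metric links to neighbouring places, might be sketched as follows. The field names and structure are assumptions, not the authors' implementation; drift stays local to each edge instead of accumulating globally.

```python
from dataclasses import dataclass, field

@dataclass
class PlaceNode:
    """Hypothetical place-cell-like node in a decentralized navigation
    graph: a local anchor with a heading estimate and metric edges."""
    node_id: int
    heading: float                                 # head-direction estimate (rad)
    neighbors: dict = field(default_factory=dict)  # node_id -> edge length (m)

    def link(self, other: "PlaceNode", distance: float):
        # Undirected edge: traveled distance between the two places,
        # e.g. from stereo visual odometry.
        self.neighbors[other.node_id] = distance
        other.neighbors[self.node_id] = distance

a, b = PlaceNode(0, 0.0), PlaceNode(1, 1.57)
a.link(b, 4.2)
print(a.neighbors)   # {1: 4.2}
```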
{"title":"A bio-inspired model for robust navigation assistive devices","authors":"Simon L. Gay , Edwige Pissaloux , Jean-Paul Jamont","doi":"10.1016/j.smhl.2024.100484","DOIUrl":"10.1016/j.smhl.2024.100484","url":null,"abstract":"<div><p>This paper proposes a new implementation and evaluation in a real-world environment of a bio-inspired predictive navigation model for mobility control, suitable especially for assistance of visually impaired people and autonomous mobile systems. This bio-inspired model relies on the interactions between formal models of three types of neurons identified in the mammals’ brain implied in navigation tasks, namely place cells, grid cells, and head direction cells, to construct a topological model of the environment under the form of a decentralized navigation graph. Previously tested in virtual environments, this model demonstrated a high tolerance to motion drift, making possible to map large environments without the need to correct it to handle such drifts, and robustness to environment changes. The presented implementation is based on a stereoscopic camera, and is evaluated on its possibilities to map and guide a person or an autonomous mobile robot in an unknown real environment. The evaluation results confirm the effectiveness of the proposed bio-inspired navigation model to build a path map, localize and guide a person through this path. The model predictions remain robust to environment changes, and allow to estimate traveled distances with an error rate below 3% over test paths, up to 100m. The tests performed on a robotic platform also demonstrated the pertinence of navigation data produced by this navigation model to guide an autonomous system. These results open the way toward efficient wearable assistive devices for visually impaired people independent navigation.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"33 ","pages":"Article 100484"},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352648324000400/pdfft?md5=da26293fcfb66b6f6d83277cab8ac5b3&pid=1-s2.0-S2352648324000400-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140789352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A smart demonstration unit for upper-limb myoelectric prostheses (Smart Health, vol. 32, Article 100481)
Pub Date: 2024-04-15 | DOI: 10.1016/j.smhl.2024.100481
Damiano Cosma Potenza, Andrea Grazioso, Federico Gaetani, Giacomo Mantriota, Giulio Reina
Demonstration unit devices are designed and built for the CPO (Certified Prosthetist Orthotist) to illustrate the functionalities of robotic hands to patients without the need for a prosthetic socket. Demonstration units are also invaluable during the calibration and learning stages that prepare for the final adoption of a prosthesis. This paper presents a smart demo unit (SDU) developed for Adam’s Hand, an underactuated robotic prosthesis proposed by BionIT Labs. It serves as a prosthetic sleeve that powers the robotic hand and gathers electromyographic commands from the user. The SDU is detailed in terms of hardware and firmware, with special attention to the electromechanical interface with the robotic hand and with the patient or CPO. The advantages of the system over existing alternatives are discussed, showing that the proposed SDU can be a valuable tool in the field of upper-limb prosthetics.
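The abstract says the sleeve gathers electromyographic commands from the user; one common (and here purely illustrative) mapping from calibrated EMG envelopes to hand commands is simple thresholding, sketched below. The thresholds and command names are assumptions, not BionIT Labs' firmware logic.

```python
def emg_to_command(envelope_open, envelope_close, threshold=0.3):
    """Hypothetical mapping from two rectified/smoothed EMG envelopes
    (normalized to 0..1 after a calibration session) to a hand command."""
    if envelope_open > threshold and envelope_open > envelope_close:
        return "OPEN"
    if envelope_close > threshold:
        return "CLOSE"
    return "HOLD"          # below threshold: keep current grip

print(emg_to_command(0.6, 0.2))   # -> OPEN
```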
{"title":"A smart demonstration unit for upper-limb myoelectric prostheses","authors":"Damiano Cosma Potenza , Andrea Grazioso , Federico Gaetani , Giacomo Mantriota , Giulio Reina","doi":"10.1016/j.smhl.2024.100481","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100481","url":null,"abstract":"<div><p>Demonstration unit devices are designed and built for the CPO (Certified Prosthetist Orthotist) to illustrate the functionalities of robotic hands to patients without the need for a prosthetic socket. Demonstration units are also of invaluable importance during the calibration and learning stages, which are preparatory for the final adoption of a prosthesis. This paper presents a smart demo unit (SDU) developed for Adam’s Hand, a robotic underactuated prosthesis proposed by BionIT Labs. It serves as a prosthetic sleeve that powers the robotic hand and gathers the electromyographic commands from the user. The details of the SDU are presented in terms of hardware and firmware, devoting special attention to the electromechanical interface with the robotic hand and the patient or CPO. The advantages of the system over existing alternatives are discussed, showing that the proposed SDU can be a valuable tool in the field of extremity upper limb prosthetics.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100481"},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140638275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secondary care for subjects with stroke: Compliance, usability and technological acceptance of the vCare platform solution (Smart Health, vol. 32, Article 100483)
Pub Date: 2024-04-13 | DOI: 10.1016/j.smhl.2024.100483
Agnese Seregni, Peppino Tropea, Riccardo Re, Verena Biscaro, Elda Judica, Massimo Caprino, Kai Gand, Hannes Schlieter, Massimo Corbo
The continuity of care for subjects with chronic Non-Communicable Diseases (NCDs) is a well-known public health problem. To address this issue, various home-based technological solutions have been proposed to provide personalized home rehabilitation plans; however, enhancing compliance remains a challenge. In the framework of the vCare project, an innovative home-based technological platform was developed to provide care and rehabilitation services (motor and cognitive training, an e-learning service, and recommendations for additional activities) within a coaching environment in a real-life scenario.
The aim of this work was to evaluate the compliance of post-stroke subjects with the solution, as well as the platform's usability and technological acceptance.
Patients with stroke followed the personalized home rehabilitation plan for up to 9 weeks. Clinical status and quality of life were assessed before and after the experimental period; compliance, usability and technological acceptance were assessed at the end.
Patients used the vCare solution according to their clinical plan without adverse events. Adherence was good: motor and cognitive training reached 66% and 95% adherence, respectively. Usability and technological acceptance were above the acceptability thresholds.
The vCare coaching system might motivate and empower patients with functional disabilities to engage actively and autonomously in personalized rehabilitation activities at home.
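The abstract reports adherence as percentages (66% motor, 95% cognitive) without defining the formula; the usual definition, completed sessions over prescribed sessions, is sketched below as an assumption rather than the paper's exact metric.

```python
def adherence(completed_sessions, prescribed_sessions):
    """Adherence as completed/prescribed sessions, one common definition;
    the paper's exact formula is not given in the abstract."""
    return 100.0 * completed_sessions / prescribed_sessions

print(f"{adherence(33, 50):.0f}%")   # e.g. 33 of 50 sessions -> 66%
```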
{"title":"Secondary care for subjects with stroke: Compliance, usability and technological acceptance of the vCare platform solution","authors":"Agnese Seregni , Peppino Tropea , Riccardo Re , Verena Biscaro , Elda Judica , Massimo Caprino , Kai Gand , Hannes Schlieter , Massimo Corbo","doi":"10.1016/j.smhl.2024.100483","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100483","url":null,"abstract":"<div><p>The continuity of care of subjects with chronic Non-Communicable Diseases (NCDs) is a well-known public health problem. To address this issue, various home-based-technological solutions have been proposed to provide personalized home rehabilitation plans: however, enhancing the compliance is still a challenge. In the framework of the vCare project, an innovative technological home-based platform was developed to provide care and rehabilitation services (motor and cognitive training, e-learning service, and recommendations for additional activities) within a coaching environment in a real-life scenario.</p><p>The aim of this work was to evaluate the compliance of post stroke subjects with the solution, and the platform's usability and technological acceptance.</p><p>Patients with stroke underwent the personalized home rehabilitation plan for up to 9 weeks. Clinical status and quality of life were assessed before and after the experimental period; compliance, usability and technological acceptance at the end.</p><p>Patients experienced the vCare solution without adverse events following their clinical plan. Results were suitable: motor and cognitive training reached 66% and 95% of adherence, respectively. Usability and technological acceptance were above the limits of acceptability.</p><p>The vCare coaching system might potentially motivate and empower patients with functional disabilities to actively engage themselves in carrying out, autonomously, personalized rehabilitation activities at home.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100483"},"PeriodicalIF":0.0,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140643625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Illuminating precise stencils on surgical sites using projection-based augmented reality (Smart Health, vol. 32, Article 100476)
Pub Date: 2024-04-05 | DOI: 10.1016/j.smhl.2024.100476
Muhammad Twaha Ibrahim, Aditi Majumder, M. Gopi, Lohrasb R. Sayadi, Raj M. Vyas
In this paper we propose a system that connects surgeons to remote or local experts who provide real-time surgical guidance by illuminating salient markings or stencils (e.g. points, lines and curves) on the physical surgical site using a projector. The projection can be modified in real time by the expert through a GUI and can be seen by everyone in the operating room (OR) without any wearables. The system thus overcomes the limitations of AR/VR headsets, which can overlay information but are obtrusive, lose accuracy under movement, and are visible only to the surgeon wearing them. Overlaying information at high precision directly on the physical surgical site, visible to everyone in the OR, can become a useful tool for skill transfer, expert consultation and training, especially in telemedicine.
In addition to the projector, the system comprises an RGB-D camera (e.g. Kinect) for feedback; together they form the PDC (Projector Depth Camera) unit, driven by a PC. The RGB-D camera provides depth information in addition to images at video frame rates. A high-resolution mesh of the surgical site is first captured using the PDC unit. During a surgical planning, training or execution session, this digital model can be annotated with incision markings on a tablet or monitor using a touch- or mouse-based interface, on the same local machine or after transmission to a remote machine. These markings are then communicated back to the PDC unit and illuminated at high precision on the surgical site by the projector in real time. If the surgical site moves during the process, the movement is tracked and the projection is quickly updated. Our method specifically overcomes the obtrusive, exclusive and indirect nature of headsets and displays while maintaining high registration accuracy under movement.
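At the core of keeping projected stencils registered to a (possibly moving) surgical site is mapping 3D marking points into projector pixels. A minimal pinhole-model sketch of that step is below; the intrinsics, pose and values are illustrative assumptions, and the authors' actual calibration and tracking pipeline is not detailed in the abstract.

```python
import numpy as np

def project_marking(points_3d, K, R, t):
    """Project 3D marking points (world frame, meters) into projector
    pixel coordinates with a pinhole model: the registration step that
    keeps stencils aligned when the tracked site pose (R, t) updates."""
    pts = np.asarray(points_3d).T        # 3 x N points
    cam = R @ pts + t.reshape(3, 1)      # world frame -> projector frame
    uv = K @ cam                         # pinhole projection
    return (uv[:2] / uv[2]).T            # N x 2 pixel coordinates

K = np.array([[1400., 0., 960.],         # assumed projector intrinsics
              [0., 1400., 540.],
              [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 0.5])
print(project_marking([[0.0, 0.0, 0.3]], K, R, t))   # -> [[960. 540.]]
```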
{"title":"Illuminating precise stencils on surgical sites using projection-based augmented reality","authors":"Muhammad Twaha Ibrahim , Aditi Majumder , M. Gopi , Lohrasb R. Sayadi , Raj M. Vyas","doi":"10.1016/j.smhl.2024.100476","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100476","url":null,"abstract":"<div><p>In this paper we propose a system that connects surgeons to remote or local experts who provide real-time surgical guidance by illuminating salient markings or stencils (e.g. points, lines and curves) on the physical surgical site using a projector. The projection can be modified in real time by the expert using a GUI and can be seen by all in the operating room (OR) without the use of any wearables. This system overcomes the limitations of AR/VR headsets which can overlay information through a headset, but are obtrusive, not very accurate with movements, and visible only to the surgeon excluding others in the room. Overlaying information, at high precision, directly on the physical surgical site that can be seen by everyone in the OR can become an useful tool for skill transfer, expert consultation and training, especially in telemedicine.</p><p>In addition to the projector, the system comprises of a RGB-D camera (e.g. Kinect) for feedback, together designated as the PDC (<u>P</u>rojector <u>D</u>epth <u>C</u>amera) unit. The PDC is driven by a PC. The RGB-D camera provides depth information in addition to an image at video frame rates. A high resolution mesh of the surgical site is captured using the PDC unit initially. During the surgical planning, training or execution session, this digital model can be marked by appropriate incision markings on a tablet or monitor using touch based or mouse based interface, on the same local machine or after being transmitted to a remote machine. These markings are then communicated back to the PDC unit and illuminated at high precision via the projector on the surgical site in real time. If the surgical site moves during the process, the movement is tracked and updated quickly on the surgical site. Our method specifically overcomes the obtrusive, exclusive, and indirect attributes of headsets and displays while maintaining high accuracy of registration with movements.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100476"},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352648324000321/pdfft?md5=a5b29d483e4f78a016c0c96ecde0304f&pid=1-s2.0-S2352648324000321-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140539512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding reciprocity in human–robot interactions through completion of a pregiving favor (Smart Health, vol. 32, Article 100466)
Pub Date: 2024-04-03 | DOI: 10.1016/j.smhl.2024.100466
Reilly Moberg, Edward Downs, Abby Shelby, Arshia Khan
In today’s world, understanding how people interact with humanoid social robots is important both for day-to-day interactions and for design purposes. A phasic, between-subjects, psychophysiological experiment (N = 72) examined how the norm of reciprocity influenced interactions with the humanoid social robot “Pepper”. Facial electromyography (zygomatic and corrugator) was measured to determine participants’ emotional valence during the interaction. The level of reciprocity in response to a pregiving favor was measured by the number of raffle tickets purchased by participants at the robot’s request. Results suggest that the social rule of reciprocation extends to human–robot interaction: when Pepper offered a pregiving favor to a participant, that person was more likely to comply with the robot’s later ticket-purchase request. Contributions to the theory and design of humanoid social robots are discussed, as well as avenues for future research.
{"title":"Understanding reciprocity in human–robot interactions through completion of a pregiving favor","authors":"Reilly Moberg, Edward Downs, Abby Shelby, Arshia Khan","doi":"10.1016/j.smhl.2024.100466","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100466","url":null,"abstract":"<div><p>In today’s world, understanding how people interact with humanoid social robots is important for day-to-day interactions and design purposes. A phasic, between-subjects, psychophysiological experiment (N <span><math><mo>=</mo></math></span> 72) examined how the norm of reciprocity influenced interactions with the humanoid social robot, “Pepper”. Facial electromyography (zygomatic and corrugator) was measured to determine participant’s emotional valence during interaction. The level of reciprocity in response to a pregiving favor was measured by the number of raffle tickets purchased by participants at the robot’s request. Results suggest that the social rule of reciprocation exists within human–robot interaction. When Pepper offered a pregiving favor to a participant, that person was more likely to reciprocate via the robot’s later ticket purchase request. Contributions to theory and design of humanoid social robots are discussed, as well as avenues for future research.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100466"},"PeriodicalIF":0.0,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140535706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal speech recognition using EEG and audio signals: A novel approach for enhancing ASR systems (Smart Health, vol. 32, Article 100477)
Pub Date: 2024-04-03 | DOI: 10.1016/j.smhl.2024.100477
Anarghya Das, Puru Soni, Ming-Chun Huang, Feng Lin, Wenyao Xu
Speech recognition using EEG signals captured during covert (imagined) speech has garnered substantial interest in Brain–Computer Interface (BCI) research. While the concept holds promise, current implementations still fall short of established audio-based Automatic Speech Recognition (ASR) methods. An area often underestimated in previous studies is the potential of EEG recorded during overt speech. Integrating overt EEG signals with speech data through advances in deep learning offers significant potential to enhance the efficacy of these systems, particularly in noisy environments and for individuals with speech impairments, challenges that even conventional ASR techniques struggle to address effectively. Our investigation explores this relationship by introducing a novel multimodal model that merges EEG and speech inputs. The model achieves a multiclass classification accuracy of 95.39%. When artificial white noise is added to the input audio, it exhibits notable resilience, surpassing models that rely on a single EEG or audio modality. Validation using t-SNE and the silhouette coefficient corroborates these advancements.
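The abstract does not describe the fusion architecture; one common multimodal design, concatenating the outputs of per-modality encoders before a shared classification head, is sketched below. The dimensions, layer choices and class count are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Minimal concatenation-based fusion of an EEG encoder and an
    audio encoder; purely illustrative of the multimodal idea."""
    def __init__(self, eeg_dim=64, audio_dim=128, hidden=256, n_classes=10):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg, audio):
        # Fuse the two modality embeddings, then classify.
        z = torch.cat([self.eeg_enc(eeg), self.audio_enc(audio)], dim=-1)
        return self.head(z)          # logits over word/command classes

model = FusionClassifier()
logits = model(torch.randn(8, 64), torch.randn(8, 128))  # batch of 8
print(logits.shape)                  # torch.Size([8, 10])
```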
{"title":"Multimodal speech recognition using EEG and audio signals: A novel approach for enhancing ASR systems","authors":"Anarghya Das , Puru Soni , Ming-Chun Huang , Feng Lin , Wenyao Xu","doi":"10.1016/j.smhl.2024.100477","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100477","url":null,"abstract":"<div><p>Speech recognition using EEG signals captured during covert (imagined) speech has garnered substantial interest in Brain–Computer Interface (BCI) research. While the concept holds promise, current implementations must improve performance compared to established Automatic Speech Recognition (ASR) methods using audio. An area often underestimated in previous studies is the potential of EEG utilization during overt speech. Integrating overt EEG signals with speech data by leveraging advancements in deep learning presents significant potential to enhance the efficacy of these systems. This integration proves particularly advantageous in noisy environments and for individuals with speech impairments—challenges even conventional ASR techniques struggle to address effectively. Our investigation delves into this relationship by introducing a novel multimodal model that merges EEG and speech inputs. Our model achieves a multiclass classification accuracy of 95.39%. When subjected to artificial white noise added to the input audio, our model exhibits a notable level of resilience, surpassing the capabilities of models reliant solely on single EEG or audio modalities. The validation process, leveraging the robust techniques of t-SNE and silhouette coefficient, corroborates and solidifies these advancements.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100477"},"PeriodicalIF":0.0,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140549072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ReActHE: A homomorphic encryption friendly deep neural network for privacy-preserving biomedical prediction (Smart Health, vol. 32, Article 100469)
Pub Date: 2024-04-02 | DOI: 10.1016/j.smhl.2024.100469
Chen Song, Xinghua Shi
The growing deployment of deep learning models over individuals’ sensitive healthcare data introduces challenging privacy and security problems when computation is performed on an untrusted server. Homomorphic encryption (HE) is a cryptographic technique well suited to secure machine learning computation: it computes directly over encrypted data, allowing the data owner and model owner to outsource the processing of sensitive information to an untrusted server without leaking any information about the data. However, most current HE schemes support only limited arithmetic operations, which significantly hinders their use in secure deep learning, especially for the nonlinear activation functions of deep neural networks. In this paper, we develop a novel HE-friendly deep neural network, named REsidue ACTivation HE (ReActHE), which implements a precise and privacy-preserving algorithm with a non-approximating HE scheme for the activation function. We adopt a residue activation strategy with a scaled power activation function in a deep neural network for HE-friendly nonlinear activation. Moreover, we propose a residue activation network structure that constrains the latent space during training to alleviate the optimization difficulty. We comprehensively evaluate the proposed ReActHE method on various biomedical datasets and widely used image datasets. Our results demonstrate that ReActHE outperforms alternative solutions for secure machine learning with HE and achieves low approximation errors in classification and regression tasks.
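The reason a scaled power activation is HE-friendly: HE schemes evaluate additions and multiplications exactly, so a polynomial like a scaled square needs no approximation, unlike ReLU. The sketch below shows the general idea; the scale factor and the exact residue-activation form used by ReActHE are assumptions, since the abstract does not give them.

```python
import numpy as np

def scaled_square(x, a=0.125):
    """HE-friendly polynomial activation: only multiplications and
    additions, so it evaluates exactly under homomorphic encryption.
    The scale factor keeps activations from growing layer after layer;
    ReActHE's precise formulation is not given in the abstract."""
    return a * x * x

x = np.linspace(-2, 2, 5)
print(scaled_square(x))   # [0.5 0.125 0. 0.125 0.5]
```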
{"title":"ReActHE: A homomorphic encryption friendly deep neural network for privacy-preserving biomedical prediction","authors":"Chen Song, Xinghua Shi","doi":"10.1016/j.smhl.2024.100469","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100469","url":null,"abstract":"<div><p>The growing distribution of deep learning models to individuals’ devices on sensitive healthcare data introduces challenging privacy and security problems when computation is being operated on an untrusted server. Homomorphic encryption (HE) is one of the appropriate cryptographic techniques to provide secure machine learning computation by directly computing over encrypted data, so that allows the data owner and model owner to outsource processing of sensitive information to an untrusted server without leaking any information about the data. However, most current HE schemes only support limited arithmetic operations, which significantly hinder their applications to implement a secure deep learning algorithm, especially on the nonlinear activation function of a deep neural network. In this paper, we develop a novel HE-friendly deep neural network, named REsidue ACTivation HE (ReActHE), to implement a precise and privacy-preserving algorithm with a non-approximating HE scheme on the activation function. We consider a residue activation strategy with a scaled power activation function in a deep neural network for HE-friendly nonlinear activation. Moreover, we propose a residue activation network structure to constrain the latent space in the training process to alleviate the optimization difficulty. We comprehensively evaluate the proposed ReActHE method using various biomedical datasets and widely-used image datasets. Our results demonstrate that ReActHE outperforms other alternative solutions to secure machine learning with HE and achieves low approximation errors in classification and regression tasks.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100469"},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140643624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing perceived stress, sleep disturbance, and fatigue among pilot and non-pilot trainees (Smart Health, vol. 32, Article 100472)
Pub Date: 2024-03-27 | DOI: 10.1016/j.smhl.2024.100472
Samuel Andres Gomez, Sudip Vhaduri, Mark D. Wilson, Julius C. Keller
Pilots operate in a distinctive professional realm and face substantial stress and fatigue from the crucial, inherently risky responsibility of flying airplanes at high altitudes. Similarly, college students encounter heightened stress and fatigue while pursuing academic goals and engaging in different activities. Stress, a non-specific response to various demands, and fatigue, characterized by extreme tiredness, are prevalent health conditions experienced across a spectrum of intensity in daily life. In this study, we conduct an extensive analysis to address a fundamental question: how do stress, sleep disturbance, and fatigue experiences differ between pilot and non-pilot college students? Examining stress and fatigue levels within these populations contributes to understanding these phenomena and their potential implications for overall well-being and performance. Building on a comprehensive analysis of Perceived Stress Scale (PSS), Jenkins Sleep Scale (JSS), and Multidimensional Fatigue Inventory (MFI) scores, we explore variations in stress, sleep disturbance, and fatigue across multiple dimensions. Our findings indicate intriguing disparities between the pilot and non-pilot cohorts. Through graphical representations and statistical tests, we show that non-pilot college students exhibit higher perceived stress and sleep disturbance levels, whereas pilots, as expected, report higher perceived fatigue levels. A detailed analysis of subcategories, including General Fatigue, Physical Fatigue, Reduced Activity, Reduced Motivation, and Mental Fatigue, sheds light on the complexity of these differences. Notably, pilot students experience heightened fatigue, potentially linked to the demanding nature of their tasks. In conclusion, our extended analysis contributes valuable insights into the intricate dynamics of stress, sleep disturbance, and fatigue among pilot and non-pilot college students. These findings hold implications for future research and interventions aimed at enhancing the well-being and performance of individuals in these distinct educational and professional domains.
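The abstract mentions statistical tests without naming them; as a generic illustration of comparing questionnaire scores between two independent cohorts, a nonparametric Mann-Whitney U test could look like this. The scores below are made up for the example and are not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative PSS totals per participant (not the study's data).
pilots = np.array([14, 17, 15, 19, 16, 13, 18])
non_pilots = np.array([21, 24, 19, 25, 22, 20, 23])

# One-sided test of whether non-pilots report higher perceived stress.
stat, p = mannwhitneyu(non_pilots, pilots, alternative="greater")
print(f"U={stat:.0f}, p={p:.4f}")
```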
{"title":"Assessing perceived stress, sleep disturbance, and fatigue among pilot and non-pilot trainees","authors":"Samuel Andres Gomez , Sudip Vhaduri , Mark D. Wilson , Julius C. Keller","doi":"10.1016/j.smhl.2024.100472","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100472","url":null,"abstract":"<div><p>Pilots operating in a distinctive professional realm face substantial stress and fatigue from the crucial responsibility of navigating airplanes at high altitudes with inherent risks. Similarly, college-level students encounter heightened stress and fatigue while pursuing academic goals and engaging in different activities. Stress, a non-specific response to various demands, and fatigue, characterized by extreme tiredness, are prevalent health conditions experienced across a spectrum of intensity in daily life. In this study, we conduct an extensive analysis to address a fundamental question: how do stress, sleep disturbance, and fatigue experiences differ between pilots and non-pilot college students? Delving into stress and fatigue levels within these populations contributes to understanding these phenomena and their potential implications for overall well-being and performance. Building on a comprehensive analysis of the Perceived Stress Scale (PSS), Jenkins Sleep Scale (JSS), and Multidimensional Fatigue Inventory (MFI) scores, we explore variations in stress, sleep disturbance, and fatigue across multiple dimensions. Our findings indicate intriguing disparities among pilot and non-pilot cohorts. Through graphical representations and statistical tests, we reveal that non-pilot college students exhibit higher perceived stress and sleep disturbance levels. In contrast, pilots demonstrate expected higher perceived fatigue levels. Our detailed analysis of subcategories, including General Fatigue, Physical Fatigue, Reduced Activity, Reduced Motivation, and Mental Fatigue, sheds light on the complexity of these differences. Notably, pilot students experience heightened fatigue, potentially linked to the demanding nature of their tasks. In conclusion, our extended analysis contributes valuable insights into the intricate dynamics of stress, sleep disturbance, and fatigue among pilot and non-pilot college students. These findings hold implications for future research and interventions aimed at enhancing the well-being and performance of individuals in these distinct educational and professional domains.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100472"},"PeriodicalIF":0.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140332939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-Path: An interactive semi-supervised learning framework for gigapixel pathology image analysis (Smart Health, vol. 32, Article 100474)
Pub Date: 2024-03-26 | DOI: 10.1016/j.smhl.2024.100474
Zhengfeng Lai, Joohi Chauhan, Dongjie Chen, Brittany N. Dugger, Sen-Ching Cheung, Chen-Nee Chuah
The efficacy of supervised deep learning in medical image analysis, particularly in pathology, is hindered by the necessity for extensive manual annotations; annotating gigapixel images by hand is highly labor-intensive and time-consuming. Semi-supervised learning (SSL) has emerged as a promising approach that leverages unlabeled data to reduce labeling effort. In this work, we introduce Semi-Path, a practical SSL framework enhanced with active learning (AL) for gigapixel pathology tasks. Unlike existing methods that treat SSL and AL as independent components, where AL adds significant computational complexity to SSL, we propose a deep fusion of SSL and AL into a unified framework. Our framework introduces Informative Active Annotation (IAA), which employs an SSL-AL iterative structure to effectively extract knowledge from unlabeled pathology data while minimizing labeling effort and computational complexity. We then propose Adaptive Pseudo-Labeling (APL) to address the heterogeneity in class distribution and prediction difficulty that is often observed in real-world pathology tasks. We evaluate Semi-Path on pathology image classification and segmentation tasks over three datasets comprising whole-slide images (WSIs) from breast, colorectal, and brain tissue. The experimental results demonstrate the consistent superiority of Semi-Path over state-of-the-art methods.
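The abstract does not define APL's exact rule; one plausible reading of adaptive pseudo-labeling, borrowed from per-class adaptive-threshold schemes in the SSL literature, is sketched below: classes the model currently finds harder (lower mean confidence) get a proportionally lower acceptance threshold so they are not starved of pseudo-labels. The rule, base threshold and difficulty proxy are assumptions, not the paper's algorithm.

```python
import numpy as np

def adaptive_pseudo_labels(probs, base_tau=0.9):
    """Per-class adaptive thresholding over a batch of unlabeled
    predictions: probs has shape (n_samples, n_classes)."""
    class_conf = probs.mean(axis=0)                  # difficulty proxy per class
    tau = base_tau * class_conf / class_conf.max()   # easier class -> higher bar
    preds = probs.argmax(axis=1)
    keep = probs.max(axis=1) >= tau[preds]           # keep confident samples only
    return preds[keep], np.flatnonzero(keep)

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.30, 0.70]])
labels, idx = adaptive_pseudo_labels(probs)
print(labels, idx)   # [0 1] [0 2]: the hard class keeps its 0.70 sample
```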
{"title":"Semi-Path: An interactive semi-supervised learning framework for gigapixel pathology image analysis","authors":"Zhengfeng Lai , Joohi Chauhan , Dongjie Chen , Brittany N. Dugger , Sen-Ching Cheung , Chen-Nee Chuah","doi":"10.1016/j.smhl.2024.100474","DOIUrl":"https://doi.org/10.1016/j.smhl.2024.100474","url":null,"abstract":"<div><p>The efficacy of supervised deep learning in medical image analyses, particularly in pathology, is hindered by the necessity for extensive manual annotations. Annotating images at the gigapixel level manually proves to be a highly labor-intensive and time-consuming task. Semi-supervised learning (SSL) has emerged as a promising approach that leverages unlabeled data to reduce labeling efforts. In this work, we introduce Semi-Path, a practical SSL framework enhanced with active learning (AL) for gigapixel pathology tasks. Unlike existing methods that treat SSL and AL as independent components where AL incurs significant computational complexity to SSL, we propose a deep fusion of SSL and AL into a unified framework. Our framework introduces Informative Active Annotation (IAA) that employs a SSL-AL iterative structure to effectively extract knowledge from unlabeled pathology data. This structure significantly minimizes labeling efforts and computational complexity. Then, we propose Adaptive Pseudo-Labeling (APL) to address heterogeneity in class distribution, and prediction difficulty that are often observed in real-world pathology tasks. We evaluate Semi-Path on pathology image classification and segmentation tasks over three datasets that include WSIs from breast, colorectal, and brain tissues. The experimental results demonstrate the consistent superiority of Semi-Path over state-of-the-art methods.</p></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"32 ","pages":"Article 100474"},"PeriodicalIF":0.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352648324000308/pdfft?md5=f4f8f22379c8912b3ec2ba8e1545c8c7&pid=1-s2.0-S2352648324000308-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140308585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}