Background: Stroke therapy is essential to reduce impairments and improve motor movements by engaging autogenous neuroplasticity. Traditionally, stroke rehabilitation occurs in inpatient and outpatient rehabilitation facilities. However, recent literature increasingly explores moving the recovery process into the home and integrating technology-based interventions. This study advances this goal by promoting in-home, autonomous recovery for patients who experienced a stroke through robotics-assisted rehabilitation and classifying stroke residual severity using machine learning methods.
Objective: Our main objective is to use kinematics data collected during in-home, self-guided therapy sessions to develop supervised machine learning methods that autonomously classify stroke residual severity using clinician-informed labels, toward improving in-home, robotics-assisted stroke rehabilitation.
Methods: In total, 33 patients who experienced a stroke participated in in-home therapy sessions using Motus Nova robotics rehabilitation technology to capture upper and lower body motion. During each therapy session, the Motus Hand and Motus Foot devices collected movement data, assistance data, and activity-specific data. We then synthesized, processed, and summarized these data. Next, the therapy session data were paired with clinician-informed, discrete stroke residual severity labels: "no range of motion (ROM)," "low ROM," and "high ROM." Afterward, an 80%:20% split was performed to divide the dataset into a training set and a holdout test set. We used 4 machine learning algorithms to classify stroke residual severity: light gradient boosting (LGB), extra trees classifier, deep feed-forward neural network, and classical logistic regression. We selected models based on 10-fold cross-validation and measured their performance on a holdout test dataset using F1-score to identify which model maximizes stroke residual severity classification accuracy.
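The model selection criterion above, the F1-score, can be computed per class and macro-averaged over the three ROM labels. The following is a minimal, dependency-free sketch of that metric; the example labels and predictions are hypothetical illustrations, not study data.

```python
def f1_per_class(y_true, y_pred, label):
    """Precision/recall/F1 for one class, treating `label` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores across all severity classes."""
    return sum(f1_per_class(y_true, y_pred, c) for c in labels) / len(labels)

# Hypothetical session-level labels (not from the study)
labels = ["no ROM", "low ROM", "high ROM"]
y_true = ["no ROM", "low ROM", "high ROM", "high ROM", "low ROM", "no ROM"]
y_pred = ["no ROM", "low ROM", "high ROM", "low ROM",  "low ROM", "no ROM"]
score = macro_f1(y_true, y_pred, labels)
```

In practice, a library implementation (eg, scikit-learn's `f1_score` with `average="macro"`) would be used; the point here is only to make the selection metric concrete.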
Results: We demonstrated that the LGB method provides the most reliable autonomous detection of stroke severity. The trained model is a consensus model consisting of 139 decision trees with up to 115 leaves each. This LGB model achieves a 96.70% F1-score, compared with logistic regression (55.82%), the extra trees classifier (94.81%), and the deep feed-forward neural network (70.11%).
Conclusions: We showed how objectively measured rehabilitation training paired with machine learning methods can be used to identify residual stroke severity class, in an effort to enhance in-home, self-guided, individualized stroke rehabilitation. The model we trained relies only on session summary statistics, meaning it could potentially be integrated into similar settings for real-time classification, such as outpatient rehabilitation facilities.
Background: Determining maximum oxygen uptake (VO2max) is essential for evaluating cardiorespiratory fitness. While laboratory-based testing is considered the gold standard, sports watches or fitness trackers offer a convenient alternative. However, despite the high number of wrist-worn devices, there is a lack of scientific validation for VO2max estimation outside the laboratory setting.
Objective: This study aims to compare the Apple Watch Series 7's performance against the gold standard in VO2max estimation and Apple's validation findings.
Methods: A total of 19 participants (7 female and 12 male), aged 18 to 63 (mean 28.42, SD 11.43) years, were included in the validation study. VO2max for all participants was determined in a controlled laboratory environment using a metabolic gas analyzer. To do so, participants completed a graded exercise test on a cycle ergometer until reaching subjective exhaustion. This value was then compared with the VO2max estimated by the Apple Watch, which was calculated after the watch had been worn for at least 2 consecutive days and was read directly after an outdoor running test.
Results: The measured VO2max (mean 45.88, SD 9.42 mL/kg/minute) in the laboratory setting was significantly higher than the predicted VO2max (mean 41.37, SD 6.5 mL/kg/minute) from the Apple Watch (t18=2.51; P=.01), with a medium effect size (Hedges g=0.53). The Bland-Altman analysis revealed good overall agreement between both measurements. However, the intraclass correlation coefficient ICC(2,1)=0.47 (95% CI 0.06-0.75) indicated poor reliability. The mean absolute percentage error between the predicted and the actual VO2max was 15.79%, while the root mean square error was 8.85 mL/kg/minute. The analysis further revealed higher accuracy when focusing on participants with good fitness levels (mean absolute percentage error=14.59%; root mean square error=7.22 mL/kg/minute; ICC(2,1)=0.60, 95% CI 0.09-0.87).
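The agreement statistics reported above (mean absolute percentage error, root mean square error, and the Bland-Altman bias with 95% limits of agreement) can each be computed directly from paired measurements. The sketch below uses hypothetical values, not the study data, purely to make the definitions concrete.

```python
import math

# Hypothetical paired VO2max values, mL/kg/minute (not from the study)
lab   = [52.1, 44.3, 39.8, 61.0, 47.5]   # laboratory gas-analyzer measurements
watch = [46.0, 43.1, 42.5, 50.2, 45.0]   # smartwatch estimates

n = len(lab)
diffs = [l - w for l, w in zip(lab, watch)]

# Mean absolute percentage error relative to the laboratory reference
mape = 100 * sum(abs(d) / l for d, l in zip(diffs, lab)) / n

# Root mean square error of the watch estimates
rmse = math.sqrt(sum(d * d for d in diffs) / n)

# Bland-Altman: bias (mean difference) and 95% limits of agreement
bias = sum(diffs) / n
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```

A positive bias here means the watch underestimates relative to the laboratory value, matching the direction of the difference reported in the study.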
Conclusions: Similar to other smartwatches, the Apple Watch also overestimates or underestimates the VO2max in individuals with poor or excellent fitness levels, respectively. Assessing the accuracy and reliability of the Apple Watch's VO2max estimation is crucial for determining its suitability as an alternative to laboratory testing. The findings of this study will apprise researchers, physical training professionals, and end users of wearable technology, thereby enhancing the knowledge base and practical application of such devices in assessing cardiorespiratory fitness parameters.
Background: Step counting is comparable among many research-grade and consumer-grade accelerometers in laboratory settings.
Objective: The purpose of this study was to compare the agreement between Actical and Apple Watch step-counting in a community setting.
Methods: Among Third Generation Framingham Heart Study participants (N=3486), we examined the agreement of step-counting between those who wore a consumer-grade accelerometer (Apple Watch Series 0) and a research-grade accelerometer (Actical) on the same days. Secondarily, we examined the agreement during each hour when both devices were worn to account for differences in wear time between devices.
Results: We studied 523 participants (n=3223 person-days, mean age 51.7, SD 8.9 years; women: n=298, 57.0%). Between devices, we observed modest correlation (intraclass correlation [ICC] 0.56, 95% CI 0.54-0.59), poor continuous agreement (29.7%, n=957 of days having steps counts with ≤15% difference), a mean difference of 499 steps per day higher count by Actical, and wide limits of agreement, roughly ±9000 steps per day. However, devices showed stronger agreement in identifying who meets various steps per day thresholds (eg, at 8000 steps per day, kappa coefficient=0.49), for which devices were concordant for 74.8% (n=391) of participants. In secondary analyses, in the hours during which both devices were worn (n=456 participants, n=18,760 person-hours), the correlation was much stronger (ICC 0.86, 95% CI 0.85-0.86), but continuous agreement remained poor (27.3%, n=5115 of hours having step counts with ≤15% difference) between devices and was slightly worse for those with mobility limitations or obesity.
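The threshold-based agreement reported above (eg, the kappa coefficient at 8000 steps per day) reduces to Cohen's kappa on two binary classifications. The sketch below illustrates that calculation on hypothetical daily step counts; the values are not from the study.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters given as lists of 0/1."""
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n       # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                     # positive rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)                # chance agreement
    return (po - pe) / (1 - pe)

THRESHOLD = 8000  # steps per day, as in the study's example threshold

# Hypothetical paired daily counts from the two devices (not study data)
actical = [9100, 7600, 12050, 4300, 8800, 6900]
watch   = [8700, 8100, 11500, 3900, 8200, 7100]

meets_actical = [int(s >= THRESHOLD) for s in actical]
meets_watch   = [int(s >= THRESHOLD) for s in watch]
kappa = cohens_kappa(meets_actical, meets_watch)
```

This is why devices can disagree substantially on continuous counts yet still agree on who meets a guideline threshold: only disagreements that cross the threshold affect kappa.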
Conclusions: Our investigation suggests poor overall agreement between steps counted by the Actical device and those counted by the Apple Watch device, with stronger agreement in discriminating who meets certain step thresholds. The impact of these challenges may be minimized if accelerometers are used by individuals to determine whether they are meeting physical activity guidelines or tracking step counts. It is also possible that some of the limitations of these older accelerometers may be improved in newer devices.
Background: The hand is crucial for carrying out activities of daily living as well as social interaction. Functional use of the upper limb is affected in up to 55% to 75% of stroke survivors 3 to 6 months after stroke. Rehabilitation can help restore function, and several rehabilitation devices have been designed to improve hand function. However, access to these devices is compromised in people with more severe loss of function.
Objective: In this study, we aimed to observe stroke survivors with poor hand function interacting with a range of commonly used hand rehabilitation devices.
Methods: Participants were engaged in an 8-week rehabilitation intervention at a technology-enriched rehabilitation gym. The participants spent 50-60 minutes of the 2-hour session in the upper limb section at least twice a week. Each participant communicated their rehabilitation goals, and an Action Research Arm Test (ARAT) was used to measure and categorize hand function as poor (scores of 0-9), moderate (scores of 10-56), or good (score of 57). Participants were observed during their interactions with 3 devices focused on hand rehabilitation: the GripAble, NeuroBall, and Semi-Circular Peg Board. Observations of device interactions were recorded for each session.
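The ARAT-based categorization described above is a simple score-to-category mapping, sketched here for clarity using the cutoffs stated in the study.

```python
def arat_category(score):
    """Map an ARAT score (0-57) to the hand-function category used in the study:
    poor (0-9), moderate (10-56), good (57)."""
    if not 0 <= score <= 57:
        raise ValueError("ARAT scores range from 0 to 57")
    if score <= 9:
        return "poor"
    if score <= 56:
        return "moderate"
    return "good"
```

For example, a participant scoring 0 falls in the "poor" category, the group the study found could not actively interact with the devices.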
Results: A total of 29 participants were included in this study, of whom 10 (34%) had poor hand function, 17 (59%) had moderate hand function, and 2 (7%) had good hand function. There were no differences in age or years after stroke between participants with poor hand function and those with moderate (P=.06 and P=.09, respectively) or good (P=.37 and P=.99, respectively) hand function. Regarding the ability of the 10 participants with poor hand function to interact with the 3 hand-based rehabilitation devices, 2 (20%) participants with an ARAT score greater than 0 were able to interact with the devices, whereas the other 8 (80%), who had an ARAT score of 0, could not. Their inability to interact with these devices was clinically examined, and the reason was determined to be either (1) increased muscle tone or stiffness or (2) muscle weakness.
Conclusions: Not all stroke survivors with impairments in their hands can make use of currently available rehabilitation technologies. Those with an ARAT score of 0 cannot actively interact with hand rehabilitation devices, as they cannot carry out the hand movement necessary for such interaction. The design of devices for hand rehabilitation should consider the accessibility needs of those with poor hand function.
Background: Now and in the future, airborne diseases such as COVID-19 could become uncontrollable and lead the world into lockdowns. Finding alternatives to lockdowns, which limit individual freedoms and cause enormous economic losses, is critical.
Objective: The purpose of this study was to assess the feasibility of achieving a society or a nation that does not require lockdown during a pandemic due to airborne infectious diseases through the mass production and distribution of high-performance, low-cost, and comfortable powered air purifying respirators (PAPRs).
Methods: The feasibility of a social system using PAPR as an alternative to lockdown was examined from the following perspectives: first, what PAPRs can do as an alternative to lockdown; second, how to operate a social system utilizing PAPR; third, directions of improvement of PAPR as an alternative to lockdown; and finally, balancing between efficiency of infection control and personal freedom through the use of Internet of Things (IoT).
Results: PAPR was shown to be a possible alternative to lockdown through the reduction of airborne and droplet transmission and through a temporary reduction of infection probability per contact. A social system in which the individual constraints imposed by lockdown are replaced by PAPR use was proposed, and an example of its operation is presented in this paper. For example, the government determines the type and intensity of the lockdown and activates it; at the same time, it indicates how PAPR use can substitute for the different activity and movement restrictions imposed during a lockdown (eg, a curfew order may be replaced with permission to go outside while wearing a PAPR). The following 7 points were raised as directions for improving PAPR as an alternative to lockdown: flow optimization, precise differential pressure control, design improvement, maintenance methods, variation development (eg, a booth type), information terminal functions, and performance evaluation methods. To keep both infection control and individual freedom at a high level in a social system that uses PAPRs as an alternative to lockdown, developing a PAPR wearing rate network management system utilizing the Internet of Things (IoT) was considered effective.
Conclusions: This study shows that using PAPR with infection control ability and with less economic and social damage as an alternative to nationwide lockdown is possible during a pandemic due to airborne infectious diseases. Further, the efficiency of the government's infection control and each citizen's freedom can be balanced by using the PAPR wearing rate network management system utilizing an IoT system.
Background: Obstructive sleep apnea/hypopnea syndrome (OSAHS) is a prevalent condition affecting a substantial portion of the global population, with its prevalence increasing over the past 2 decades. OSAHS is characterized by recurrent upper airway (UA) closure during sleep, leading to significant impacts on quality of life and heightened cardiovascular and metabolic morbidity. Despite continuous positive airway pressure (CPAP) being the gold standard treatment, patient adherence remains suboptimal due to various factors, such as discomfort, side effects, and treatment unacceptability.
Objective: Considering the challenges associated with CPAP adherence, an alternative approach targeting the UA muscles through myofunctional therapy was explored. This noninvasive intervention involves exercises of the lips, tongue, or both to improve oropharyngeal functions and mitigate the severity of OSAHS. With the goal of developing a portable device for home-based myofunctional therapy with continuous monitoring of exercise performance and adherence, the primary outcome of this study was the degree of completion and adherence to a 4-week training session.
Methods: This proof-of-concept study focused on a portable device that was designed to facilitate tongue and lip myofunctional therapy and enable precise monitoring of exercise performance and adherence. A clinical study was conducted to assess the effectiveness of this program in improving sleep-disordered breathing. Participants were instructed to perform tongue protrusion, lip pressure, and controlled breathing as part of various tasks 6 times a week for 4 weeks, with each session lasting approximately 35 minutes.
Results: Ten participants were enrolled in the study (n=8 male; mean age 48, SD 22 years; mean BMI 29.3, SD 3.5 kg/m2; mean apnea-hypopnea index [AHI] 20.7, SD 17.8/hour). Among the 8 participants who completed the 4-week program, the overall compliance rate was 91% (175/192 sessions). For the tongue exercise, the success rate increased from 66% (211/320 exercises; SD 18%) on the first day to 85% (272/320 exercises; SD 17%) on the last day (P=.05). AHI did not change significantly after completion of training, but a noteworthy correlation between successful lip exercise improvement and AHI reduction in the supine position was observed (Rs=-0.76; P=.03). These findings demonstrate the potential of the device for accurately monitoring participants' performance in lip and tongue pressure exercises during myofunctional therapy. The diversity of the training program (it mixed exercises with training games), its ability to provide direct feedback on each exercise to the participants, and the easy measurement of treatment adherence are major strengths of our training program.
Conclusions: The study's portable device for home-based myofunctional therapy shows promise as a noninvasive tool for delivering tongue and lip exercises while continuously monitoring exercise performance and treatment adherence.
Background: Degenerative cervical myelopathy (DCM) is a slow-motion spinal cord injury caused by chronic mechanical loading from spinal degenerative changes. A range of different degenerative changes can occur. Finite element analysis (FEA) can predict the distribution of mechanical stress and strain on the spinal cord to help understand the implications of any mechanical loading. One of the critical assumptions for FEA is the behavior of each anatomical element under loading (ie, its material properties).
Objective: This scoping review aims to undertake a structured process to select the most appropriate material properties for use in DCM FEA. In doing so, it also provides an overview of existing modeling approaches in spinal cord disease and clinical insights into DCM.
Methods: We conducted a scoping review using qualitative synthesis. Observational studies that discussed the use of FEA models involving the spinal cord in either health or disease (including DCM) were eligible for inclusion. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The MEDLINE and Embase databases were searched to September 1, 2021, supplemented with citation searching to retrieve the literature used to define material properties. Title and abstract screening and data extraction were performed in duplicate. The quality of evidence was appraised using a quality assessment tool we developed, adapted from the Newcastle-Ottawa Scale; source articles were shortlisted with respect to DCM material properties, and a final recommendation is provided. A qualitative synthesis of the literature is presented according to the Synthesis Without Meta-Analysis reporting guidelines.
Results: A total of 60 papers were included: 41 (68%) "FEA articles" and 19 (32%) "source articles." Most FEA articles (33/41, 80%) modeled the gray matter and white matter separately, with models typically based on tabulated data or, less frequently, a hyperelastic Ogden variant or linear elastic function. Of the 19 source articles, 14 (74%) were identified as describing the material properties of the spinal cord, of which 3 (21%) were considered most relevant to DCM. Of the 41 FEA articles, 15 (37%) focused on DCM, of which 9 (60%) focused on ossification of the posterior longitudinal ligament. Our aggregated results of DCM FEA indicate that spinal cord loading is influenced by the pattern of degenerative changes, which may determine whether decompression alone (eg, laminectomy) is sufficient to address it, as opposed to decompression combined with other procedures (eg, laminectomy and fusion).
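For reference, the one-term incompressible Ogden model mentioned above admits a simple closed form for Cauchy stress in uniaxial loading, sketched below. The parameter values shown in the comments are arbitrary placeholders for illustration, not recommended spinal cord properties.

```python
def ogden_uniaxial_stress(stretch, mu, alpha):
    """Cauchy stress for an incompressible one-term Ogden solid in uniaxial
    loading, with strain energy W = (mu/alpha)(l1^a + l2^a + l3^a - 3) and
    lateral stretches lambda^(-1/2) from incompressibility.

    stretch -- principal stretch lambda (1.0 = undeformed)
    mu, alpha -- Ogden material parameters (placeholder values in tests)
    """
    return mu * (stretch ** alpha - stretch ** (-alpha / 2))
```

As a sanity check, the stress vanishes in the undeformed state (stretch = 1), is tensile for stretch > 1, and compressive for stretch < 1.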
Conclusions: FEA is a promising technique for exploring the pathobiology of DCM and informing clinical care. This review describes a structured approach to help future investigators deploy FEA for DCM. However, there are limitations to these recommendations.
Background: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency.
Objective: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems.
Methods: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency.
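The core energy trade-off described above, processing a fraction of telehealth readings at local fog nodes instead of transmitting everything to the cloud, can be captured in a toy model. Everything below (parameter names, energy-per-bit values, reading counts) is an illustrative assumption, not taken from the paper's simulations.

```python
def transmission_energy(bits, joules_per_bit):
    """Energy (J) to transmit a payload at a fixed per-bit cost."""
    return bits * joules_per_bit

def scenario_energy(n_readings, bits_per_reading, local_fraction,
                    fog_j_per_bit=5e-8, cloud_j_per_bit=5e-7,
                    processing_j_per_reading=1e-4):
    """Total energy (J) when `local_fraction` of readings is handled at fog
    nodes (short-range links) and the rest goes to the cloud (long-range links).
    All per-bit and per-reading costs are illustrative placeholders."""
    bits = n_readings * bits_per_reading
    fog_bits = bits * local_fraction
    cloud_bits = bits - fog_bits
    return (transmission_energy(fog_bits, fog_j_per_bit)
            + transmission_energy(cloud_bits, cloud_j_per_bit)
            + n_readings * processing_j_per_reading)

baseline = scenario_energy(10_000, 2_000, local_fraction=0.0)   # all-cloud
hybrid   = scenario_energy(10_000, 2_000, local_fraction=0.8)   # fog offload
savings_pct = 100 * (baseline - hybrid) / baseline
```

The savings in this toy model depend entirely on the assumed per-bit costs; it illustrates why fog offloading can reduce energy use, not the magnitude reported in the study.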
Results: Simulation results demonstrated energy savings, with a 2% reduction in energy consumption achieved through the adaptive energy-saving strategies. The simulation sample size ranged from 10 to 40, supporting the robustness of the findings.
Conclusions: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.