Objectives
Electrocardiogram (ECG) signals are valuable for diagnosing cardiac diseases. Continuous or long-term recording and monitoring of ECG signals is likely to improve cardiac patients' quality of life by enabling earlier and better diagnosis of disease and heart attacks. However, continuous ECG recording requires high data rates and storage capacity, and therefore high costs. ECG compression is thus a useful tool for making continuous monitoring of ECG signals practical. Deep neural networks open up new horizons for compression in general, and for ECG compression in particular, by providing high compression rates and quality. Although they offer constant compression ratios with better average quality, the compression quality of individual samples is not guaranteed, which may lead to misdiagnoses. This study aims to investigate the effect of compression quality on diagnoses and to develop a deep neural network-based compression strategy that guarantees a quality bound in return for varying compression ratios.
Materials and methods
The effect of compression quality on arrhythmia diagnoses is tested by comparing the performance of a deep learning-based ECG classifier on the original ECG recordings and on recordings distorted by a lossy compression algorithm at different compression error levels. A compression error upper limit is then calculated in terms of the normalized percent root mean square difference (PRDN) error, which also agrees with the findings of previous studies in the literature. Lastly, to enable deep learning in ECG compression, a single-encoder, multi-decoder convolutional autoencoder architecture with multiple quantization levels is proposed to guarantee a desired upper limit on the error rate.
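For reference, the PRDN metric used as the quality bound is commonly defined as the square root of the ratio between the reconstruction error energy and the mean-removed signal energy. Below is a minimal sketch of this standard definition; the function name and array handling are illustrative, not taken from the paper:

```python
import numpy as np

def prdn(x, x_hat):
    """Normalized percent root-mean-square difference (PRDN).

    Measures the reconstruction error between an original ECG segment `x`
    and its reconstruction `x_hat`, normalized by the signal's energy
    around its mean so the score is independent of any baseline offset.
    """
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum((x - np.mean(x)) ** 2))
```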
Results
The efficiency of the proposed method is demonstrated on a popular benchmark data set for ECG compression methods using a transfer learning approach. The PRDN error is fixed to various values, and the average compression rates are reported; in particular, the average compression ratio is reported for a 10% PRDN error rate, which is assessed as a fair quality threshold for reconstruction error. It has also been shown that the compression model runs in real time on wearable devices such as commercial smartwatches.
Conclusion
This study proposes a deep learning-based ECG compression algorithm that guarantees a desired upper limit on the compression error. This model may facilitate an eHealth solution for continuous monitoring of ECG signals of individuals, especially cardiac patients.
Objectives
The nociceptive flexion reflex (NFR) is used as a pseudo-objective measure of pain that is recorded using electromyography (EMG). EMG signals can be analyzed using nonlinear methods to identify complex changes in physiological systems. Physiological complexity has been shown to allow a wider range of adaptable states with which the system can deal with stressors. The purpose of this study was to examine changes in complexity and entropy of EMG signals from the biceps femoris during non-noxious stimuli and noxious stimuli that evoked the NFR, before and after acute inflammation.
Methods and Materials
Twelve healthy participants (25.17 ± 3.43 years) underwent the NFR protocol. EMG signal complexity was calculated using the Hurst exponent (H), determinism (DET), and recurrence rate (RR), and regularity was quantified using sample entropy (SampEn).
Results
RR (∼200%), DET (∼70%), and H (∼35%) were higher, and SampEn was reduced (∼50%), during the noxious stimulus that evoked the NFR compared to non-noxious stimuli. No significant differences were found for any of the complexity and entropy measures before and after exercise-induced inflammation. Reduced complexity (increased H, DET, and RR) and increased regularity (reduced SampEn) reflect reduced adaptability to stressors.
Conclusions
Nonlinear methods such as complexity and entropy measures could be useful in understanding how a healthy neuromuscular system responds to disturbances. The reductions in complexity following a noxious stimulus could reflect the neuromuscular system adapting to environmental conditions to prevent damage or injury to the body.
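Of these measures, sample entropy has a compact standard definition: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m template vectors within tolerance r and A counts the pairs that still match when extended to length m + 1. The sketch below implements this common definition; the template length m, the tolerance factor, and the function name are illustrative assumptions, since the abstract does not specify the parameters used:

```python
import numpy as np

def sample_entropy(signal, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal; lower values mean a more regular signal.

    Uses the Chebyshev distance between template vectors and excludes
    self-matches, following the standard SampEn definition.
    """
    x = np.asarray(signal, float)
    r = r_factor * np.std(x)  # tolerance, commonly a fraction of the SD
    n = len(x)

    def count_matches(length):
        # Same number of templates (n - m) for both lengths, as is standard.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)  # pairs with j > i only (no self-matches)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```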
In ophthalmology, there is a need to explore the relationships between clinical parameters of the cornea and the corneal shape. This study explores the paradigm of machine learning with nonlinear regression methods to verify whether corneal shapes can effectively be predicted from clinical data only, in an attempt to better assess and visualize their effects on the corneal shape.
The dimensionality of a database of normal anterior corneal surfaces was first reduced by Zernike modeling into short vectors of 12 to 20 coefficients used as targets. The associated structural, refractive, and demographic corneal parameters were used as predictors. The nonlinear regression methods were taken from the scikit-learn library. All possible regression models (method + predictors + targets) were pre-tested in an exploratory step, and those that performed better than linear regression were fully tested with 10-fold validation. The best model was selected based on mean RMSE scores measuring the distance between the predicted corneal surfaces of a model and the raw (non-modeled) true surfaces. The quality of the best model's predictions was visually assessed using atlases of average elevation maps that displayed the centroids of the predicted and true surfaces over a number of clinical variables.
The best model identified was gradient boosting regression using all available clinical parameters to predict 16 Zernike coefficients. The predicted and true corneal surfaces represented in average elevation maps were remarkably similar. The most explanatory predictor was the radius of the best-fit sphere, and departures from that sphere were mostly explained by the eye side and by refractive parameters (axis and cylinder).
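A minimal sketch of how such a model could be assembled with scikit-learn follows. The data shapes are placeholders, and the paper scored RMSE against the raw reconstructed surfaces rather than the coefficient vectors, so the scoring here is only an approximation of the selection criterion:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical shapes: X holds the clinical predictors, Y the Zernike
# coefficient vectors used as targets (16 coefficients in the best model).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))   # placeholder clinical parameters
Y = rng.normal(size=(500, 16))   # placeholder Zernike coefficients

# GradientBoostingRegressor is single-output, so one booster is fitted
# per Zernike coefficient via MultiOutputRegressor.
model = MultiOutputRegressor(GradientBoostingRegressor())

# 10-fold validation with RMSE scoring, mirroring the paper's protocol.
scores = cross_val_score(
    model, X, Y,
    cv=KFold(n_splits=10, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
print("mean RMSE:", -scores.mean())
```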
It is possible to make a reasonably good prediction of the normal corneal shape solely from a set of clinical parameters. In so doing, one can visualize their effects on the corneal shape and identify its most important contributors.
A clinical decision support system (CDSS) utilizing artificial intelligence (AI) can help physicians anticipate possible complications in cirrhosis patients before prescribing more precise treatments. This study aimed to establish a prototype of AI-CDSS modeling using electronic health records to predict five complications for cirrhosis patients who were managed with the oral antiviral drugs lamivudine (LAM) or entecavir (ETV).
Our modeling attained a web-based AI-CDSS through four steps: data extraction, sample normalization, AI-enabled machine learning (ML), and system integration. We designed an extract-transform-load (ETL) procedure to filter the analytic features from a clinical database. The training process applied 10-fold cross-validation to verify diverse ML models over the possible feature patterns associated with the medications for predicting the complications. In addition, we used the statistical means and standard deviations of the realistic datasets to create simulative datasets, which contained sufficient and balanced data to train the most efficient models for evaluation. The modeling combined multiple ML methods, such as support vector machine (SVM), random forest (RF), extreme gradient boosting, naive Bayes, and logistic regression, training on fourteen features to generate the AI-CDSS's prediction functionality.
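One simple way to realize such a mean/standard-deviation-based simulative dataset is independent Gaussian sampling per feature and class. The paper does not state its exact generator, so the sketch below is an assumption-laden illustration; the column and label names are hypothetical:

```python
import numpy as np
import pandas as pd

def make_simulative_dataset(real_df, n_per_class, label_col="complication"):
    """Draw a balanced synthetic dataset whose features follow the same
    per-class means and standard deviations as the realistic data.

    Assumes independent, normally distributed features, which is one
    simple reading of "same distribution" resampling.
    """
    rng = np.random.default_rng(0)
    frames = []
    for label, group in real_df.groupby(label_col):
        feats = group.drop(columns=label_col)
        sim = pd.DataFrame(
            rng.normal(feats.mean().values, feats.std().values,
                       size=(n_per_class, feats.shape[1])),
            columns=feats.columns,
        )
        sim[label_col] = label  # balanced: n_per_class rows per class
        frames.append(sim)
    return pd.concat(frames, ignore_index=True)
```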
Models achieving an accuracy of at least 0.8 after cross-validation were qualified for the AI-CDSS. SVM and RF models using realistic data predicted jaundice with an accuracy of over 0.82. Furthermore, the SVM models using simulative data reached an accuracy of over 0.85 when predicting patients with jaundice. These results imply that simulative datasets drawn from the same distributions as the features in the realistic dataset are adequate for training the ML models. The RF model reached an AUC of up to 0.82 for multiple complications when tested on unseen data. Finally, we successfully installed twenty models of the qualified ML methods in the AI-CDSS to predict five complications for cirrhosis patients prescribed LAM or ETV.
Our modeling integrated a self-developed AI-CDSS with the qualified ML models to predict cirrhosis complications, thereby aiding clinical decision making.
Accurate and timely injection of insulin doses in accordance with the treatment protocol is very important in the follow-up of insulin-dependent diabetes patients. In this study, a new smart mobile apparatus (SMA) has been developed. The SMA can be attached to insulin pens and records and transfers data by detecting the insulin dose the patient has set and the time at which it was delivered. The SMA detects the dose dialed on the insulin pen through linear capacitive sensors. The electronic parts and sensor mechanism are located on the designed SMA body. The two-part mechanical construction of the body senses movement during dose adjustment while ensuring that the dose information is recorded in the control unit. The dose and time information recorded in the SMA's internal memory is transmitted to the patient's smartphone via the developed mobile application. The developed SMA prototypes were evaluated by a team of doctors in a hospital setting for three months. As a result of the three-month study, it was observed that the insulin dose and administration times could be accurately sent to the smartphone application via the SMA. The SMA was built in the laboratory environment and prepared for pilot research with insulin-dependent diabetes patients in a hospital setting. The prototype successfully identified and recorded the dose and timing of the patient's self-administered insulin.
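Purely as an illustration of the data flow described above, the sketch below shows how a linear capacitance-to-units calibration and a timestamped dose record might look in software. All constants, field names, and the payload format are hypothetical and not taken from the SMA design:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical calibration: the linear capacitive sensor reading (raw
# counts) is assumed to map linearly to the dialed insulin dose in units.
COUNTS_PER_UNIT = 12.5   # illustrative value, not from the paper
BASELINE_COUNTS = 100.0  # sensor reading at a zero-unit dial

@dataclass
class DoseRecord:
    units: float
    timestamp: str

def counts_to_units(raw_counts: float) -> float:
    """Linear calibration from raw sensor counts to insulin units."""
    return max(0.0, (raw_counts - BASELINE_COUNTS) / COUNTS_PER_UNIT)

def log_dose(raw_counts: float) -> str:
    """Build the kind of payload a device like the SMA might send to the
    companion smartphone app after an injection is detected."""
    record = DoseRecord(units=round(counts_to_units(raw_counts), 1),
                        timestamp=datetime.now(timezone.utc).isoformat())
    return json.dumps(asdict(record))

print(log_dose(raw_counts=350.0))  # -> {"units": 20.0, "timestamp": "..."}
```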
Aging is associated with muscle decline, which alters both functional and anatomical properties of the neuromuscular system. These modifications can be reflected in high-density surface electromyography (HD-sEMG) signals. This study examines how age and sex impact the shape of the amplitude Probability Density Function (PDF) of HD-sEMG signals.
Monopolar HD-sEMG signals were collected from the biceps brachii in two groups of 17 individuals each: 10 women (mean age: 22.9 ± 3.6 years) and 7 men (mean age: 24.4 ± 2.5 years) in the younger group, and 10 women (mean age: 69.8 ± 4.8 years) and 7 men (mean age: 72.8 ± 2.7 years) in the elderly group. The recordings were conducted during elbow flexion at both 20% and 40% of maximum voluntary contraction. The signal amplitude was evaluated using the root mean square amplitude (RMSA), and the PDF shape of each HD-sEMG signal was assessed through skewness, excess kurtosis, and robust functional statistics. These shape distance metrics evaluate the departure from Gaussianity related to muscle aging. We then a) conducted a comparison study of the HD-sEMG PDF shapes between younger and elderly individuals, b) evaluated differences between men and women, and c) considered monopolar and Laplacian electrode configurations, which are sensitive to different muscle regions.
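The amplitude and PDF-shape descriptors named above have standard definitions; a zero-mean Gaussian signal has skewness 0 and excess kurtosis 0, so nonzero values quantify the departure from Gaussianity. Below is a minimal sketch of these three descriptors (the robust functional statistics are omitted, and the Fisher convention is assumed for excess kurtosis):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def pdf_shape_descriptors(emg):
    """Amplitude and PDF-shape descriptors of a single HD-sEMG channel."""
    x = np.asarray(emg, float)
    return {
        "RMSA": np.sqrt(np.mean(x ** 2)),  # root mean square amplitude
        "skewness": skew(x),               # 0 for a symmetric (e.g. Gaussian) PDF
        "excess_kurtosis": kurtosis(x),    # Fisher definition: Gaussian -> 0
    }
```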
A) The HD-sEMG PDFs of elderly subjects demonstrated a lower departure from Gaussianity than those of their younger counterparts. B) Women exhibited lower RMSA values than men and, on average, a lower departure from Gaussianity regardless of age and contraction level. C) The trend of the departure from Gaussianity with contraction level seems to be influenced by the electrode configuration: a decrease in the Gaussianity departure is observed with monopolar recordings, whereas an increase is observed with the Laplacian configuration, clearly indicating that different muscle regions are assessed.
The findings highlight the influence of factors such as aging, sex, contraction level, and electrode montage on the shape of the HD-sEMG PDF, emphasizing the value of this descriptor for monitoring and better assessment of muscle aging.
Deep learning algorithms have been widely used for cardiac image segmentation. However, most of these architectures rely on convolutions, which struggle to model long-range dependencies, limiting their ability to extract contextual information. Moreover, the traditional U-net architecture suffers from the difference in semantic information between the feature maps of the encoder and decoder (also known as the semantic gap).
To address this issue, a new network architecture relying on an attention mechanism was introduced. Swin Filtering Blocks (SFB), which use Swin Transformer blocks in a cross-attention manner, were added between the encoder and the decoder to filter the information coming from the encoder based on the feature map from the decoder, as sketched below. Attention was also employed at the lowest resolution in the form of a transformer layer to increase the receptive field of the network.
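A minimal sketch of the skip-filtering idea follows, with plain multi-head attention standing in for the windowed Swin attention of the actual SFB; the channel count, head count, and the residual/normalization details are assumptions:

```python
import torch
import torch.nn as nn

class CrossAttentionFilter(nn.Module):
    """Decoder features query the encoder skip connection, so only encoder
    content relevant to the decoder's semantic level passes through."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat, dec_feat: (B, C, H, W) feature maps at the same scale.
        b, c, h, w = enc_feat.shape
        enc = enc_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values
        dec = dec_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) queries
        filtered, _ = self.attn(query=dec, key=enc, value=enc)
        out = self.norm(dec + filtered)            # residual + layer norm
        return out.transpose(1, 2).reshape(b, c, h, w)

# Example: filter a 64-channel skip connection at 32x32 resolution.
sfb = CrossAttentionFilter(channels=64)
enc = torch.randn(2, 64, 32, 32)
dec = torch.randn(2, 64, 32, 32)
print(sfb(enc, dec).shape)  # torch.Size([2, 64, 32, 32])
```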
We conducted experiments both to assess generalization capability and to evaluate how training on all frames of the cardiac cycle, rather than only on end-diastole and end-systole, affects strain and segmentation performance.
Visual inspection of feature maps suggested that Swin Filtering Blocks contribute to the reduction of the semantic gap. Performing attention between all patches using a transformer layer brought higher performance than convolutions. Training the model with all phases of the cardiac cycle resulted in slightly more accurate segmentations and a more noticeable improvement in strain estimation. A limited decrease in performance was observed when testing on out-of-distribution data, but the gap widens for the most apical slices.

