Cheng Ding, Tianliang Yao, Chenwei Wu, Jianyuan Ni
The electrocardiogram (ECG) remains a fundamental tool in cardiac diagnostics, yet its interpretation has traditionally relied on the expertise of cardiologists. The emergence of deep learning has heralded a revolutionary era in medical data analysis, particularly in the domain of ECG diagnostics. However, inter-patient variability limits the generalizability of ECG-AI models trained on population datasets, degrading their performance on specific patients or patient groups. Many studies have addressed this challenge using different deep learning techniques. This comprehensive review systematically synthesizes research from a wide range of studies to provide an in-depth examination of cutting-edge deep-learning techniques in personalized ECG diagnosis. The review outlines a rigorous methodology for the selection of pertinent scholarly articles and offers a comprehensive overview of deep learning approaches applied to personalized ECG diagnostics. Moreover, the challenges these methods encounter are investigated, along with future research directions, culminating in insights into how the integration of deep learning can transform personalized ECG diagnosis and enhance cardiac care. By emphasizing both the strengths and limitations of current methodologies, this review underscores the immense potential of deep learning to refine and redefine ECG analysis in clinical practice, paving the way for more accurate, efficient, and personalized cardiac diagnostics.
{"title":"Deep Learning for Personalized Electrocardiogram Diagnosis: A Review","authors":"Cheng Ding, Tianliang Yao, Chenwei Wu, Jianyuan Ni","doi":"arxiv-2409.07975","DOIUrl":"https://doi.org/arxiv-2409.07975","url":null,"abstract":"The electrocardiogram (ECG) remains a fundamental tool in cardiac\u0000diagnostics, yet its interpretation traditionally reliant on the expertise of\u0000cardiologists. The emergence of deep learning has heralded a revolutionary era\u0000in medical data analysis, particularly in the domain of ECG diagnostics.\u0000However, inter-patient variability prohibit the generalibility of ECG-AI model\u0000trained on a population dataset, hence degrade the performance of ECG-AI on\u0000specific patient or patient group. Many studies have address this challenge\u0000using different deep learning technologies. This comprehensive review\u0000systematically synthesizes research from a wide range of studies to provide an\u0000in-depth examination of cutting-edge deep-learning techniques in personalized\u0000ECG diagnosis. The review outlines a rigorous methodology for the selection of\u0000pertinent scholarly articles and offers a comprehensive overview of deep\u0000learning approaches applied to personalized ECG diagnostics. Moreover, the\u0000challenges these methods encounter are investigated, along with future research\u0000directions, culminating in insights into how the integration of deep learning\u0000can transform personalized ECG diagnosis and enhance cardiac care. By\u0000emphasizing both the strengths and limitations of current methodologies, this\u0000review underscores the immense potential of deep learning to refine and\u0000redefine ECG analysis in clinical practice, paving the way for more accurate,\u0000efficient, and personalized cardiac diagnostics.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoor humidity is a crucial factor affecting people's health and well-being. Wireless humidity sensing techniques are scalable and low-cost, making them a promising solution for measuring humidity in indoor environments without requiring additional devices. As such, machine learning (ML)-assisted WiFi sensing is envisioned as a key enabler for integrated sensing and communication (ISAC). However, current WiFi-based sensing systems, such as WiHumidity, suffer from low accuracy. To address this issue, we propose an enhanced WiFi-based humidity detection framework that utilizes innovative filtering and data processing techniques to exploit humidity-specific channel state information (CSI) signatures during RF sensing. These signals are then fed into ML algorithms to detect different humidity levels. Specifically, our improved de-noising solution for CSI captured by commodity WiFi hardware, combined with a k-nearest-neighbour ML algorithm and a resolution tuning technique, helps improve humidity sensing accuracy. Our experiments on commercially available hardware provide insights into the achievable sensing resolution. Our empirical investigation shows that the enhanced framework improves the accuracy of humidity sensing to 97%.
{"title":"Smart CSI Processing for Accruate Commodity WiFi-based Humidity Sensing","authors":"Yirui Deng, Deepak Mishra, Shaghik Atakaramians, Aruna Seneviratne","doi":"arxiv-2409.07857","DOIUrl":"https://doi.org/arxiv-2409.07857","url":null,"abstract":"Indoor humidity is a crucial factor affecting people's health and well-being.\u0000Wireless humidity sensing techniques are scalable and low-cost, making them a\u0000promising solution for measuring humidity in indoor environments without\u0000requiring additional devices. Such, machine learning (ML) assisted WiFi sensing\u0000is being envisioned as the key enabler for integrated sensing and communication\u0000(ISAC). However, the current WiFi-based sensing systems, such as WiHumidity,\u0000suffer from low accuracy. We propose an enhanced WiFi-based humidity detection\u0000framework to address this issue that utilizes innovative filtering and data\u0000processing techniques to exploit humidity-specific channel state information\u0000(CSI) signatures during RF sensing. These signals are then fed into ML\u0000algorithms for detecting different humidity levels. Specifically, our improved\u0000de-noising solution for the CSI captured by commodity hardware for WiFi\u0000sensing, combined with the k-th nearest neighbour ML algorithm and resolution\u0000tuning technique, helps improve humidity sensing accuracy. Our commercially\u0000available hardware-based experiments provide insights into achievable sensing\u0000resolution. Our empirical investigation shows that our enhanced framework can\u0000improve the accuracy of humidity sensing to 97%.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We address the problem of learning the topology of directed acyclic graphs (DAGs) from nodal observations, which adhere to a linear structural equation model. Recent advances framed the combinatorial DAG structure learning task as a continuous optimization problem, yet existing methods must contend with the complexities of non-convex optimization. To overcome this limitation, we assume that the latent DAG contains only non-negative edge weights. Leveraging this additional structure, we argue that cycles can be effectively characterized (and prevented) using a convex acyclicity function based on the log-determinant of the adjacency matrix. This convexity allows us to relax the task of learning the non-negative weighted DAG as an abstract convex optimization problem. We propose a DAG recovery algorithm based on the method of multipliers that is guaranteed to return a global minimizer. Furthermore, we prove that in the infinite sample size regime, the convexity of our approach ensures the recovery of the true DAG structure. We empirically validate the performance of our algorithm in several reproducible synthetic-data test cases, showing that it outperforms state-of-the-art alternatives.
{"title":"Non-negative Weighted DAG Structure Learning","authors":"Samuel Rey, Seyed Saman Saboksayr, Gonzalo Mateos","doi":"arxiv-2409.07880","DOIUrl":"https://doi.org/arxiv-2409.07880","url":null,"abstract":"We address the problem of learning the topology of directed acyclic graphs\u0000(DAGs) from nodal observations, which adhere to a linear structural equation\u0000model. Recent advances framed the combinatorial DAG structure learning task as\u0000a continuous optimization problem, yet existing methods must contend with the\u0000complexities of non-convex optimization. To overcome this limitation, we assume\u0000that the latent DAG contains only non-negative edge weights. Leveraging this\u0000additional structure, we argue that cycles can be effectively characterized\u0000(and prevented) using a convex acyclicity function based on the log-determinant\u0000of the adjacency matrix. This convexity allows us to relax the task of learning\u0000the non-negative weighted DAG as an abstract convex optimization problem. We\u0000propose a DAG recovery algorithm based on the method of multipliers, that is\u0000guaranteed to return a global minimizer. Furthermore, we prove that in the\u0000infinite sample size regime, the convexity of our approach ensures the recovery\u0000of the true DAG structure. We empirically validate the performance of our\u0000algorithm in several reproducible synthetic-data test cases, showing that it\u0000outperforms state-of-the-art alternatives.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the realm of reconfigurable intelligent surface (RIS)-assisted communication systems, the connection between a base station (BS) and user equipment (UE) is formed by a cascaded channel, merging the BS-RIS and RIS-UE channels. Due to the fixed positioning of the BS and RIS and the mobility of the UE, these two channels generally exhibit different time-varying characteristics, which are challenging to identify and exploit for feedback overhead reduction given the difficulty of estimating them separately. To address this challenge, this letter introduces an innovative deep learning-based framework tailored for cascaded channel feedback, ingeniously capturing the intrinsic time variation of the cascaded channel. Once an entire cascaded channel has been sent to the BS, the framework feeds back only an efficient representation of this variation over the subsequent period through an extraction-compression scheme. This scheme involves RIS unit-grained channel variation extraction, followed by autoencoder-based deep compression to enhance compactness. Numerical simulations confirm that this feedback framework significantly reduces both the feedback and computational burdens.
{"title":"Efficient Deep Learning-based Cascaded Channel Feedback in RIS-Assisted Communications","authors":"Yiming Cui, Jiajia Guo, Chao-Kai Wen, Shi Jin","doi":"arxiv-2409.08149","DOIUrl":"https://doi.org/arxiv-2409.08149","url":null,"abstract":"In the realm of reconfigurable intelligent surface (RIS)-assisted\u0000communication systems, the connection between a base station (BS) and user\u0000equipment (UE) is formed by a cascaded channel, merging the BS-RIS and RIS-UE\u0000channels. Due to the fixed positioning of the BS and RIS and the mobility of\u0000UE, these two channels generally exhibit different time-varying\u0000characteristics, which are challenging to identify and exploit for feedback\u0000overhead reduction, given the separate channel estimation difficulty. To\u0000address this challenge, this letter introduces an innovative deep\u0000learning-based framework tailored for cascaded channel feedback, ingeniously\u0000capturing the intrinsic time variation in the cascaded channel. When an entire\u0000cascaded channel has been sent to the BS, this framework advocates the feedback\u0000of an efficient representation of this variation within a subsequent period\u0000through an extraction-compression scheme. This scheme involves RIS unit-grained\u0000channel variation extraction, followed by autoencoder-based deep compression to\u0000enhance compactness. Numerical simulations confirm that this feedback framework\u0000significantly reduces both the feedback and computational burdens.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This letter studies the AltGDmin algorithm for solving the noisy low rank column-wise sensing (LRCS) problem. Our sample complexity guarantee improves upon the best existing one by a factor $\max(r, \log(1/\epsilon))/r$, where $r$ is the rank of the unknown matrix and $\epsilon$ is the final desired accuracy. A second contribution of this work is a detailed comparison of guarantees from all work that studies the exact same mathematical problem as LRCS, but refers to it by different names.
{"title":"Noisy Low Rank Column-wise Sensing","authors":"Ankit Pratap Singh, Namrata Vaswani","doi":"arxiv-2409.08384","DOIUrl":"https://doi.org/arxiv-2409.08384","url":null,"abstract":"This letter studies the AltGDmin algorithm for solving the noisy low rank\u0000column-wise sensing (LRCS) problem. Our sample complexity guarantee improves\u0000upon the best existing one by a factor $max(r, log(1/epsilon))/r$ where $r$\u0000is the rank of the unknown matrix and $epsilon$ is the final desired accuracy.\u0000A second contribution of this work is a detailed comparison of guarantees from\u0000all work that studies the exact same mathematical problem as LRCS, but refers\u0000to it by different names.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents, for the first time, the concept of polarforming for wireless communications. Polarforming refers to a novel technique that enables dynamic adjustment of antenna polarization using reconfigurable polarized antennas (RPAs). It can fully leverage polarization diversity to improve the performance of wireless communication systems by aligning the effective polarization state of the incoming electromagnetic (EM) wave with the antenna polarization. To better demonstrate the benefits of polarforming, we propose a general RPA-aided system that allows for tunable antenna polarization. A wavefront-based channel model is developed to properly capture depolarization behaviors in both line-of-sight (LoS) and non-line-of-sight (NLoS) channels. Based on this model, we provide a detailed description of transmit and receive polarforming on planes of polarization (PoPs). We also evaluate the performance gains provided by polarforming under stochastic channel conditions. Specifically, we derive a closed-form expression for the relative signal-to-noise ratio (SNR) gain compared to conventional fixed-polarization antenna (FPA) systems and approximate the cumulative distribution function (CDF) for the RPA system. Our analysis reveals that polarforming offers a diversity gain of two, indicating full utilization of polarization diversity for dual-polarized antennas. Furthermore, extensive simulation results validate the effectiveness of polarforming and exhibit substantial improvements over conventional FPA systems. The results also indicate that polarforming not only can combat depolarization effects caused by wireless channels but also can overcome channel correlation when scattering is insufficient.
{"title":"Polarforming for Wireless Communications: Modeling and Performance Analysis","authors":"Zijian Zhou, Jingze Ding, Chenbo Wang, Bingli Jiao, Rui Zhang","doi":"arxiv-2409.07771","DOIUrl":"https://doi.org/arxiv-2409.07771","url":null,"abstract":"This paper presents, for the first time, the concept of textit{polarforming}\u0000for wireless communications. Polarforming refers to a novel technique that\u0000enables dynamic adjustment of antenna polarization using reconfigurable\u0000polarized antennas (RPAs). It can fully leverage polarization diversity to\u0000improve the performance of wireless communication systems by aligning the\u0000effective polarization state of the incoming electromagnetic (EM) wave with the\u0000antenna polarization. To better demonstrate the benefits of polarforming, we\u0000propose a general RPA-aided system that allows for tunable antenna\u0000polarization. A wavefront-based channel model is developed to properly capture\u0000depolarization behaviors in both line-of-sight (LoS) and non-line-of-sight\u0000(NLoS) channels. Based on this model, we provide a detailed description of\u0000transmit and receive polarforming on planes of polarization (PoPs). We also\u0000evaluate the performance gains provided by polarforming under stochastic\u0000channel conditions. Specifically, we derive a closed-form expression for the\u0000relative signal-to-noise ratio (SNR) gain compared to conventional\u0000fixed-polarization antenna (FPA) systems and approximate the cumulative\u0000distribution function (CDF) for the RPA system. Our analysis reveals that\u0000polarforming offers a diversity gain of two, indicating full utilization of\u0000polarization diversity for dual-polarized antennas. Furthermore, extensive\u0000simulation results validate the effectiveness of polarforming and exhibit\u0000substantial improvements over conventional FPA systems. The results also\u0000indicate that polarforming not only can combat depolarization effects caused by\u0000wireless channels but also can overcome channel correlation when scattering is\u0000insufficient.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joao Pereira, Michael Alummoottil, Dimitrios Halatsis, Dario Farina
Biosignal acquisition is key for healthcare applications and wearable devices, with machine learning offering promising methods for processing signals like surface electromyography (sEMG) and electroencephalography (EEG). Despite high within-session performance, inter-session performance is hindered by electrode shift, a known issue across modalities. Existing solutions often require large and expensive datasets and/or lack robustness and interpretability. Thus, we propose the Spatial Adaptation Layer (SAL), which can be prepended to any biosignal array model and learns a parametrized affine transformation at the input between two recording sessions. We also introduce learnable baseline normalization (LBN) to reduce baseline fluctuations. Tested on two HD-sEMG gesture recognition datasets, SAL and LBN outperform standard fine-tuning on regular arrays, achieving competitive performance even with a logistic regressor and orders of magnitude fewer, physically interpretable parameters. Our ablation study shows that forearm circumferential translations account for the majority of the performance improvements, in line with sEMG physiological expectations.
{"title":"Spatial Adaptation Layer: Interpretable Domain Adaptation For Biosignal Sensor Array Applications","authors":"Joao Pereira, Michael Alummoottil, Dimitrios Halatsis, Dario Farina","doi":"arxiv-2409.08058","DOIUrl":"https://doi.org/arxiv-2409.08058","url":null,"abstract":"Biosignal acquisition is key for healthcare applications and wearable\u0000devices, with machine learning offering promising methods for processing\u0000signals like surface electromyography (sEMG) and electroencephalography (EEG).\u0000Despite high within-session performance, intersession performance is hindered\u0000by electrode shift, a known issue across modalities. Existing solutions often\u0000require large and expensive datasets and/or lack robustness and\u0000interpretability. Thus, we propose the Spatial Adaptation Layer (SAL), which\u0000can be prepended to any biosignal array model and learns a parametrized affine\u0000transformation at the input between two recording sessions. We also introduce\u0000learnable baseline normalization (LBN) to reduce baseline fluctuations. Tested\u0000on two HD-sEMG gesture recognition datasets, SAL and LBN outperform standard\u0000fine-tuning on regular arrays, achieving competitive performance even with a\u0000logistic regressor, with orders of magnitude less, physically interpretable\u0000parameters. Our ablation study shows that forearm circumferential translations\u0000account for the majority of performance improvements, in line with sEMG\u0000physiological expectations.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Victor M. Tenorio, Elvin Isufi, Geert Leus, Antonio G. Marques
This paper introduces a probabilistic approach for tracking the dynamics of unweighted and directed graphs using state-space models (SSMs). Unlike conventional topology inference methods that assume static graphs and generate point-wise estimates, our method accounts for dynamic changes in the network structure over time. We model the network at each timestep as the state of the SSM, and use observations to update beliefs that quantify the probability of the network being in a particular state. Then, by considering the dynamics of the transition and observation models through the update and prediction steps, respectively, the proposed method can incorporate the information of real-time graph signals into the beliefs. These beliefs yield a probability distribution over the network at each timestep, providing both an estimate of the network and a measure of the uncertainty it entails. Our approach is evaluated through experiments with synthetic and real-world networks. The results demonstrate that our method effectively estimates network states and accounts for the uncertainty in the data, outperforming traditional techniques such as recursive least squares.
{"title":"Tracking Network Dynamics using Probabilistic State-Space Models","authors":"Victor M. Tenorio, Elvin Isufi, Geert Leus, Antonio G. Marques","doi":"arxiv-2409.08238","DOIUrl":"https://doi.org/arxiv-2409.08238","url":null,"abstract":"This paper introduces a probabilistic approach for tracking the dynamics of\u0000unweighted and directed graphs using state-space models (SSMs). Unlike\u0000conventional topology inference methods that assume static graphs and generate\u0000point-wise estimates, our method accounts for dynamic changes in the network\u0000structure over time. We model the network at each timestep as the state of the\u0000SSM, and use observations to update beliefs that quantify the probability of\u0000the network being in a particular state. Then, by considering the dynamics of\u0000transition and observation models through the update and prediction steps,\u0000respectively, the proposed method can incorporate the information of real-time\u0000graph signals into the beliefs. These beliefs provide a probability\u0000distribution of the network at each timestep, being able to provide both an\u0000estimate for the network and the uncertainty it entails. Our approach is\u0000evaluated through experiments with synthetic and real-world networks. The\u0000results demonstrate that our method effectively estimates network states and\u0000accounts for the uncertainty in the data, outperforming traditional techniques\u0000such as recursive least squares.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luc Vandendorpe, Laurence Defraigne, Guillaume Thiran, Thomas Pairon, Christophe Craeye
The cell-free network is a new paradigm, originating from distributed MIMO, that has been investigated in recent years as an alternative to the celebrated cellular structure. Future networks will support not only classical data transmission but also positioning, along the lines of Integrated Sensing and Communication (ISAC). The goal of this paper is to investigate both the ambiguity function, an important metric for positioning and for understanding its associated resolution and ambiguities, and the array gain when maximum ratio transmission (MRT) or maximum ratio combining (MRC) is implemented for data communications. In particular, the role and impact of using a waveform with non-zero bandwidth is investigated. The theoretical findings are illustrated by means of computational results.
{"title":"Positioning and transmission in cell-free networks: ambiguity function, and MRC/MRT array gains","authors":"Luc Vandendorpe, Laurence Defraigne, Guillaume Thiran, Thomas Pairon, Christophe Craeye","doi":"arxiv-2409.08187","DOIUrl":"https://doi.org/arxiv-2409.08187","url":null,"abstract":"Cell-free network is a new paradigm, originating from distributed MIMO, that\u0000has been investigated for a few recent years as an alternative to the\u0000celebrated cellular structure. Future networks not only consider classical data\u0000transmission but also positioning, along the lines of Integrated Communications\u0000and Sensing (ISAC). The goal of this paper is to investigate at the same time\u0000the ambiguity function which is an important metric for positioning and the\u0000understanding of its associated resolution and ambiguities, and the array gain\u0000when maximum ratio transmission (MRT) or MR combining (MRC) is implemented for\u0000data communications. In particular, the role and impact of using a waveform\u0000with non-zero bandwidth is investigated. The theoretical findings are\u0000illustrated by means of computational results.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xianghao Zhan, Yuzhe Liu, Nicholas J. Cecchi, Jessica Towns, Ashlyn A. Callan, Olivier Gevaert, Michael M. Zeineh, David B. Camarillo
Objective: Head impact information, including impact direction, speed, and force, is important for studying traumatic brain injury and for designing and evaluating protective gear. This study presents a deep learning model developed to accurately predict head impact information, including location, speed, orientation, and force, based on head kinematics during helmeted impacts. Methods: Leveraging a dataset of 16,000 simulated helmeted head impacts using the Riddell helmet finite element model, we implemented a Long Short-Term Memory (LSTM) network to process the head kinematics: tri-axial linear accelerations and angular velocities. Results: The models accurately predict the impact parameters describing impact location, direction, speed, and the impact force profile, with R2 exceeding 70% for all tasks. Further validation was conducted using an on-field dataset recorded by instrumented mouthguards and videos, consisting of 79 head impacts in which the impact location could be clearly identified. The deep learning model significantly outperformed existing methods, achieving 79.7% accuracy in identifying impact locations, compared with a highest accuracy of 49.4% among existing methods. Conclusion: This precision underscores the model's potential to enhance helmet design and safety in sports by providing more accurate impact data. Future studies should test the models across various helmets and sports on large in vivo datasets to validate their accuracy, employing techniques like transfer learning to broaden their effectiveness.
{"title":"Identification of head impact locations, speeds, and force based on head kinematics","authors":"Xianghao Zhan, Yuzhe Liu, Nicholas J. Cecchi, Jessica Towns, Ashlyn A. Callan, Olivier Gevaert, Michael M. Zeineh, David B. Camarillo","doi":"arxiv-2409.08177","DOIUrl":"https://doi.org/arxiv-2409.08177","url":null,"abstract":"Objective: Head impact information including impact directions, speeds and\u0000force are important to study traumatic brain injury, design and evaluate\u0000protective gears. This study presents a deep learning model developed to\u0000accurately predict head impact information, including location, speed,\u0000orientation, and force, based on head kinematics during helmeted impacts.\u0000Methods: Leveraging a dataset of 16,000 simulated helmeted head impacts using\u0000the Riddell helmet finite element model, we implemented a Long Short-Term\u0000Memory (LSTM) network to process the head kinematics: tri-axial linear\u0000accelerations and angular velocities. Results: The models accurately predict\u0000the impact parameters describing impact location, direction, speed, and the\u0000impact force profile with R2 exceeding 70% for all tasks. Further validation\u0000was conducted using an on-field dataset recorded by instrumented mouthguards\u0000and videos, consisting of 79 head impacts in which the impact location can be\u0000clearly identified. The deep learning model significantly outperformed existing\u0000methods, achieving a 79.7% accuracy in identifying impact locations, compared\u0000to lower accuracies with traditional methods (the highest accuracy of existing\u0000methods is 49.4%). Conclusion: The precision underscores the model's potential\u0000in enhancing helmet design and safety in sports by providing more accurate\u0000impact data. Future studies should test the models across various helmets and\u0000sports on large in vivo datasets to validate the accuracy of the models,\u0000employing techniques like transfer learning to broaden its effectiveness.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142175895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}