Pub Date: 2025-02-22 | DOI: 10.1016/j.jisa.2025.104002
Omid Torki, Maede Ashouri-Talouki, Mina Alishahi
The widespread availability of DNA sequencing technology has led to the genetic sequences of individuals becoming accessible data, creating opportunities to identify the genetic factors underlying various diseases. In particular, Genome-Wide Association Studies (GWAS) seek to identify Single Nucleotide Polymorphisms (SNPs) associated with a specific phenotype. Although sharing such data offers valuable insights, it poses a significant challenge due to both privacy concerns and the large size of the data involved. To address these challenges, in this paper we propose a novel framework that combines federated learning and blockchain as a platform for conducting GWAS with the participation of single individuals. The proposed framework offers a mutually beneficial solution in which individuals participating in the GWAS study receive insurance credit that can be used for medical services, while research and treatment centers benefit from the study data. To safeguard model parameters and prevent inference attacks, a secure aggregation protocol has been developed. The evaluation results demonstrate the scalability and efficiency of the proposed framework in terms of runtime and communication, outperforming existing solutions.
{"title":"Fed-GWAS: Privacy-preserving individualized incentive-based cross-device federated GWAS learning","authors":"Omid Torki , Maede Ashouri-Talouki , Mina Alishahi","doi":"10.1016/j.jisa.2025.104002","DOIUrl":"10.1016/j.jisa.2025.104002","url":null,"abstract":"<div><div>The widespread availability of DNA sequencing technology has led to the genetic sequences of individuals becoming accessible data, creating opportunities to identify the genetic factors underlying various diseases. In particular, Genome-Wide Association Studies (GWAS) seek to identify Single Nucleotide Polymorphism (SNPs) associated with a specific phenotype. Although sharing such data offers valuable insights, it poses a significant challenge due to both privacy concerns and the large size of the data involved. To address these challenges, in this paper, we propose a novel framework that combines both federated learning and blockchain as a platform for conducting GWAS studies with the participation of single individuals. The proposed framework offers a mutually beneficial solution where individuals participating in the GWAS study receive insurance credit to avail medical services while research and treatment centers benefit from the study data. To safeguard model parameters and prevent inference attacks, a secure aggregation protocol has been developed. The evaluation results demonstrate the scalability and efficiency of the proposed framework in terms of runtime and communication, outperforming existing solutions.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"89 ","pages":"Article 104002"},"PeriodicalIF":3.8,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143463734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To pave the way to a super-smart society, artificial intelligence (AI) methods are being developed to discover and analyze necessary information instantly from cyberspace and utilize it in physical space. However, privacy protection is necessary for AI to process big data in cyberspace. From the viewpoint of developing safe and secure machine learning methods, research on (1) homomorphic cryptography, (2) differential privacy, (3) secure multiparty computation, and (4) federated learning is underway. The goal of these studies is to develop useful learning methods while maintaining data privacy.
We propose a method to address the trade-off between security and usability in machine learning. This method balances usability and data confidentiality by using decomposed data to achieve secure distributed processing. However, such methods using distributed processing increase computational and communication overhead as the number of servers increases. To address this problem, we propose a method to control the computational complexity as the number of servers increases. On the basis of these studies, this study first systematically addresses the construction of secure distributed processing methods with decomposed data. A comprehensive approach is essential to advance the field and allow these methods to be effectively applied to different domains. On the basis of these methods, we propose back-propagation and neural gas learning methods with reduced computational and communication requirements. We then apply the proposed methods to numerical simulations of class classification and clustering problems and show that accuracy comparable to that of conventional models can be achieved with 1/Q of the computational and communication complexity for distributed models with Q servers.
{"title":"Toward the development of learning methods with distributed processing using securely divided data","authors":"Hirofumi Miyajima , Noritaka Shigei , Hiromi Miyajima , Norio Shiratori","doi":"10.1016/j.compeleceng.2025.110160","DOIUrl":"10.1016/j.compeleceng.2025.110160","url":null,"abstract":"<div><div>To pave the way to a super-smart society, artificial intelligence (AI) methods are being developed to discover and analyze necessary information instantly from cyberspace and utilize it in physical space. However, privacy protection is necessary for AI to process big data in cyberspace. From the viewpoint of developing safe and secure machine learning methods, research on (1) homomorphic cryptography, (2) differential privacy, (3) secure multiparty computation, and (4) federated learning is underway. The goal of these studies is to develop useful learning methods while maintaining data privacy.</div><div>We propose a method to address the trade-off between security and usability in machine learning. This method balances usability and data confidentiality by using decomposed data to achieve secure distributed processing. However, such methods using distributed processing increase computational and communication overhead as the number of servers increases. To address this problem, we propose a method to control the computational complexity as the number of servers increases. On the basis of these studies, this study first systematically addresses the construction of secure distributed processing methods with decomposed data. A comprehensive approach is essential to advance the field and allow these methods to be effectively applied to different domains. On the basis of these methods, we propose back-propagation and neural gas learning methods with reduced computational and communication requirements. We then apply the proposed methods to numerical simulations of class classification and clustering problems and show that accuracy comparable to that of conventional models can be achieved with <span><math><mrow><mn>1</mn><mo>/</mo><mi>Q</mi></mrow></math></span> computational and communication complexity for distributed models with <span><math><mi>Q</mi></math></span> servers.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"123 ","pages":"Article 110160"},"PeriodicalIF":4.0,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective analysis of high-dimensional systems with intricate variable interactions is crucial for accurate modeling and engineering applications. Previous methods using sparsity techniques or dimensional analysis separately often face limitations when handling complex, large-scale systems. This study introduces a sparsity-constrained dimensional analysis framework that integrates the classical Buckingham Pi theorem with sparse optimization techniques, enabling precise nondimensionalization. The framework, formulated as a convex optimization problem, addresses computational challenges associated with sparsity in high-dimensional spaces. Rigorously tested across various datasets, including the Fanning friction factor for rough pipe flow, an international standards-based dataset of physical quantities and units, and experimental data from flow boiling studies, this method successfully identified critical dimensionless groups that encapsulate core system dynamics. This approach not only offers a more compact and interpretable representation than conventional methods but also retains more characteristics of function variability. It proves particularly effective in systems governed by high-dimensional interactions, demonstrating a lower failure rate and mean relative error than a baseline comparison algorithm. The methodology is applicable to the modeling and analysis of complex engineering physical systems such as nuclear power, wind tunnel design, and marine engineering, as well as to the design of scaled verification experiments.
{"title":"SPARDA: Sparsity-constrained dimensional analysis via convex relaxation for parameter reduction in high-dimensional engineering systems","authors":"Kuang Yang, Qiang Li, Zhenghui Hou, Haifan Liao, Chaofan Yang, Haijun Wang","doi":"10.1016/j.engappai.2025.110307","DOIUrl":"10.1016/j.engappai.2025.110307","url":null,"abstract":"<div><div>Effective analysis of high-dimensional systems with intricate variable interactions is crucial for accurate modeling and engineering applications. Previous methods using sparsity techniques or dimensional analysis separately often face limitations when handling complex, large-scale systems. This study introduces a sparsity-constrained dimensional analysis framework that integrates the classical Buckingham Pi theorem with sparse optimization techniques, enabling precise nondimensionalization. The framework, formulated as a convex optimization problem, addresses computational challenges associated with sparsity in high-dimensional spaces. Rigorously tested across various datasets, including the Fanning friction factor for rough pipe flow, an international standards-based dataset of physical quantities and units, and experimental data from flow boiling studies, this method successfully identified critical dimensionless groups that encapsulate core system dynamics. This approach not only offers a more compact and interpretable representation than conventional methods but also retains more characteristics of function variability. It proves particularly effective in systems governed by high-dimensional interactions, demonstrating a lower failure rate and mean relative error compared to an algorithm for comparison. The methodology is applicable to the modeling and analysis of complex engineering physical systems such as nuclear power, wind tunnel design, and marine engineering, as well as in designing scaled verification experiments.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"146 ","pages":"Article 110307"},"PeriodicalIF":7.5,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigates the application of nonlinear wave modulation (NWM) using chirp signals for structural health monitoring (SHM). The implementation of NWM with monoharmonic signals (periodic signals that consist of a single frequency component) poses significant challenges due to the complexity of selecting optimal pump and carrier frequencies, leading to time-intensive processes. In contrast, analyzing NWM with chirp signals introduces additional complexities regarding signal processing compared to monoharmonic excitations. Time-frequency analysis (TFA) has been identified as a crucial method for examining non-stationary signals; however, many existing techniques face limitations in resolution, particularly in the context of chirp signals, as dictated by the Heisenberg uncertainty principle. To address these challenges, the superlet synchroextracting transform (SLSET) is introduced as an innovative TFA approach that combines the strengths of superlet (SL) and synchroextracting transforms, resulting in improved resolution. This research utilizes NWM alongside SLSET to detect boundary loosening in sandwich beams, demonstrating the method's effectiveness in identifying structural damage while maintaining robustness against noise. Results indicate that SLSET significantly enhances the damage index compared to traditional TFA methods. The high resolution achieved allows for the detection of sidebands in vibro-acoustic modulation (VAM) tests conducted at low pump frequencies. Furthermore, three machine learning (ML) models, namely support vector machine (SVM), Adaptive Boosting (AdaBoost), and Random Forest (RF), were trained. The stack ensemble method combined the outputs of these models, resulting in an overall accuracy of 99.2%. This approach effectively leveraged the strengths of individual models, enhancing generalization and robustness in detecting damage across complex data scenarios. The features extracted using SLSET from the VAM data of the faulty structure attain a classification accuracy of 98.9%. In contrast, features derived from conventional time-frequency methods fail to identify damage, even in noise-free conditions.
{"title":"A novel enhanced Superlet Synchroextracting transform ensemble learning for structural health monitoring using nonlinear wave modulation","authors":"Naserodin Sepehry , Mohammad Ehsani , Hamdireza Amindavar , Weidong Zhu , Firooz Bakhtiari Nejad","doi":"10.1016/j.engappai.2025.110341","DOIUrl":"10.1016/j.engappai.2025.110341","url":null,"abstract":"<div><div>This study investigates the application of nonlinear wave modulation (NWM) using chirp signals for structural health monitoring (SHM). The implementation of NWM with monoharmonic signals (periodic signals that consist of a single frequency component) poses significant challenges due to the complexity of selecting optimal pump and carrier frequencies, leading to time-intensive processes. In contrast, analyzing NWM with chirp signals introduces additional complexities regarding signal processing compared to monoharmonic excitations. Time-frequency analysis (TFA) has been identified as a crucial method for examining non-stationary signals; however, many existing techniques face limitations in resolution, particularly in the context of chirp signals, as dictated by the Heisenberg uncertainty principle. To address these challenges, the superlet synchroextracting transform (SLSET) is introduced as an innovative TFA approach that combines the strengths of superlet (SL) and synchroextracting transforms, resulting in improved resolution. This research utilizes NWM alongside SLSET to detect boundary loosening in sandwich beams, demonstrating the method's effectiveness in identifying structural damage while maintaining robustness against noise. Results indicate that SLSET significantly enhances the damage index compared to traditional TFA methods. The high resolution achieved allows for the detection of sidebands in vibro-acoustic modulation (VAM) tests conducted at low pump frequencies. Furthermore, three machine learning (ML) models including support vector machine (SVM), Adaptive Boosting (AdaBoost), and Random Forest (RF) were trained. The stack ensemble method combined the outputs of these models, resulting in an overall accuracy of 99.2%. This approach effectively leveraged the strengths of individual models, enhancing generalization and robustness in detecting damage across complex data scenarios. The features extracted using SLSET for VAM data of faulty structure attains a classification accuracy of 98.9%. In contrast, features derived from conventional time-frequency methods fail to identify damage, even in noise-free conditions.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"147 ","pages":"Article 110341"},"PeriodicalIF":7.5,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-22 | DOI: 10.1016/j.engappai.2025.110283
Xin Chen, Dan Liu, Longzhou Yu, Ping Shao, Mingyan An, Shuming Wen
Flotation froth image analysis with computer vision systems has witnessed a transformative evolution through the integration of deep learning. Deep learning outperforms traditional feature design by effectively learning intricate feature representations, thus enhancing the assessment of froth flotation processes' operational performance. Flotation froth image analysis via deep learning facilitates real-time monitoring of dynamic flotation processes, guiding the adjustment of operational variables through predicting performance indicators, recognizing froth states, and segmenting foam edges, which promotes resource efficiency and supports the sustainable development of beneficiation. Despite the vast potential of deep learning for time-series forecasting within the multistage flotation cycle, its capabilities remain underexplored. To fill this gap, based on recent research, we discuss the application of temporal and multistage information in the flotation cycle. We introduce the development trends of deep learning in various processes of flotation froth image analysis, including data collection, dataset preprocessing, feature extraction, and modeling. We particularly discuss advanced techniques for extracting time-series features, developing multistage models, and devising innovative data collection methods, emphasizing the importance of temporal information. Finally, the review explores several trends and challenges for future research. This review is expected to leave readers with deeper thoughts about algorithm design and data collection in the flotation domain, thereby promoting further research and development in beneficiation automation.
{"title":"Recent advances in flotation froth image analysis via deep learning","authors":"Xin Chen , Dan Liu , Longzhou Yu , Ping Shao , Mingyan An , Shuming Wen","doi":"10.1016/j.engappai.2025.110283","DOIUrl":"10.1016/j.engappai.2025.110283","url":null,"abstract":"<div><div>Flotation froth image analysis with computer vision systems has witnessed a transformative evolution through the integration of deep learning. Deep learning outperforms traditional feature design by effectively learning intricate feature representations, thus enhancing the assessment of froth flotation processes' operational performance. Flotation froth image analysis via deep learning facilitates real-time monitoring of dynamic flotation processes, guiding the adjustment of operational variables through predicting performance indicators, recognizing froth states and segmenting foam edges, which promotes resource efficiency and supports the sustainable development of beneficiation. Despite the vast potential of deep learning for time-series forecasting within the multistage flotation cycle, its capabilities remain underexplored. To fill this gap, based on recent research, we discuss the application of temporal and multistage information in flotation cycle. We introduce the development trends of deep learning in various processes of flotation froth image analysis, including data collection, dataset preprocessing, feature extraction, and modeling. We particularly discuss advanced techniques for extracting time-series features, and developing multistage models and innovative data collection methods, so as to emphasize the importance of using temporal information. Eventually, the review explores several trends and challenges for future research. This review is expected to leave readers with deeper thoughts about algorithm design and data collection in the flotation domain, thereby promoting further research and development in beneficiation automation.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"147 ","pages":"Article 110283"},"PeriodicalIF":7.5,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples. These represent distinct forms of distributional shifts that can significantly impact DNNs’ reliability and robustness. Traditionally, research has addressed OOD detection and adversarial robustness as separate challenges. This survey focuses on the intersection of these two areas, examining how the research community has investigated them together. Consequently, we identify two key research directions: robust OOD detection and unified robustness. Robust OOD detection aims to differentiate between in-distribution (ID) data and OOD data, even when they are adversarially manipulated to deceive the OOD detector. Unified robustness seeks a single approach to make DNNs robust against both adversarial attacks and OOD inputs. Accordingly, first, we establish a taxonomy based on the concept of distributional shifts. This framework clarifies how robust OOD detection and unified robustness relate to other research areas addressing distributional shifts, such as OOD detection, open set recognition, and anomaly detection. Subsequently, we review existing work on robust OOD detection and unified robustness. Finally, we highlight the limitations of the existing work and propose promising research directions that explore adversarial and OOD inputs within a unified framework.
{"title":"Out-of-Distribution Data: An Acquaintance of Adversarial Examples - A Survey","authors":"Naveen Karunanayake, Ravin Gunawardena, Suranga Seneviratne, Sanjay Chawla","doi":"10.1145/3719292","DOIUrl":"https://doi.org/10.1145/3719292","url":null,"abstract":"Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples. These represent distinct forms of distributional shifts that can significantly impact DNNs’ reliability and robustness. Traditionally, research has addressed OOD detection and adversarial robustness as separate challenges. This survey focuses on the intersection of these two areas, examining how the research community has investigated them together. Consequently, we identify two key research directions: robust OOD detection and unified robustness. Robust OOD detection aims to differentiate between in-distribution (ID) data and OOD data, even when they are adversarially manipulated to deceive the OOD detector. Unified robustness seeks a single approach to make DNNs robust against both adversarial attacks and OOD inputs. Accordingly, first, we establish a taxonomy based on the concept of distributional shifts. This framework clarifies how robust OOD detection and unified robustness relate to other research areas addressing distributional shifts, such as OOD detection, open set recognition, and anomaly detection. Subsequently, we review existing work on robust OOD detection and unified robustness. Finally, we highlight the limitations of the existing work and propose promising research directions that explore adversarial and OOD inputs within a unified framework.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"14 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Claudia Diamantini, Tarique Khan, Domenico Potena, Emanuele Storti
Performance Indicators and metrics are essential management tools. They provide synthetic objective measures to monitor the progress of a process, set objectives, and assess deviations, enabling effective decision making. They can also be used for communication purposes, facilitating the sharing of objectives and results, or improving awareness of certain phenomena, thus motivating more responsible and sustainable behaviors. Given their strategic role, it is of paramount importance, as well as challenging, to guarantee that the intended meaning of an indicator is fully shared among stakeholders, and that its implementation is aligned with the definition provided by decision makers, as this is a precondition for data quality and trustworthiness of the information system. Formal models, such as ontologies, have long been investigated in the literature to address these issues. This paper proposes a comprehensive survey of semantic approaches aimed at specifying conceptual definitions of indicators and metrics, also illustrating the advantages of these formal approaches in relevant use cases and application domains.
{"title":"Semantic Models of Performance Indicators: A Systematic Survey","authors":"Claudia Diamantini, Tarique Khan, Domenico Potena, Emanuele Storti","doi":"10.1145/3719291","DOIUrl":"https://doi.org/10.1145/3719291","url":null,"abstract":"Performance Indicators and metrics are essential management tools. They provide synthetic objective measures to monitor the progress of a process, set objectives and assess deviations, enabling effective decision making. They can also be used for communication purposes, facilitating the sharing of objectives and results, or improving the awareness on certain phenomena, thus motivating more responsible and sustainable behaviors. Given their strategic role, it is of paramount importance, as well as challenging, to guarantee that the intended meaning of an indicator is fully shared among stakeholders, and that its implementation is aligned with the definition provided by decision makers, as this is a precondition for data quality and trustworthiness of the information system. Formal models, such as ontologies, have been long investigated in the literature to address the issues. This paper proposes a comprehensive survey on semantic approaches aimed to specify conceptual definitions of indicators and metrics, illustrating also the advantages of these formal approaches in relevant use cases and application domains.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"31 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-22 | DOI: 10.1016/j.engappai.2025.110349
Yang Wang, Hong Xiao, Chaozhi Ma, Zhihai Zhang, Xuhao Cui, Aimin Xu
Leveraging acceleration sensors affixed to the train body enables continuous surveillance of rail corrugation, delivering cost-effectiveness, operational efficiency, and portability. Establishing the correlation between vertical body acceleration and rail corrugation poses a substantial challenge. To ensure uninterrupted monitoring of rail corrugation, an initial development involved constructing a train-track integrated simulation model that accounted for the dynamics of flexible wheelsets and tracks, thereby generating a simulated dataset of vertical body acceleration. Subsequent improvements were made to the conventional Convolutional Block Attention Module (CBAM) architecture, culminating in the proposal of a deep one-dimensional convolutional residual network model named Train Body Vertical Acceleration Network (TBVA-Net), founded on an improved CBAM framework. Training was conducted using the simulated dataset, showcasing the reduced model complexity and total parameter count of the improved CBAM architecture, which notably improved classification accuracy. The TBVA-Net, employing the refined CBAM, consistently achieved test accuracies exceeding 95%, averaging 98.6% on the simulated dataset. Validation through field-measured data corroborated the rationale behind the proposed TBVA-Net architecture. Fine-tuning with a limited subset of labeled field data led to a transfer accuracy of 98.5%. This paper presents an innovative approach for detecting rail corrugation through vertical acceleration signals obtained from operational vehicles.
{"title":"On-board detection of rail corrugation using improved convolutional block attention mechanism","authors":"Yang Wang , Hong Xiao , Chaozhi Ma , Zhihai Zhang , Xuhao Cui , Aimin Xu","doi":"10.1016/j.engappai.2025.110349","DOIUrl":"10.1016/j.engappai.2025.110349","url":null,"abstract":"<div><div>Leveraging acceleration sensors affixed to the train body enables continuous surveillance of rail corrugation, delivering cost-effectiveness, operational efficiency, and portability. Establishing the correlation between vertical body acceleration and rail corrugation poses a substantial challenge. To ensure uninterrupted monitoring of rail corrugation, an initial development involved constructing a train-track integrated simulation model that accounted for the dynamics of flexible wheelsets and tracks, thereby generating a simulated dataset of vertical body acceleration. Subsequent improvements were made to the conventional Convolutional Block Attention Module (CBAM) architecture, culminating in the proposal of a deep one-dimensional convolutional residual network model named Train Body Vertical Acceleration Network (TBVA-Net), founded on an improved CBAM framework. Training was conducted using the simulated dataset, showcasing the reduced model complexity and total parameter count of the improved CBAM architecture, which notably amplified classification accuracy. The TBVA-Net, employing the refined CBAM, consistently achieved test accuracies exceeding 95%, averaging at 98.6% on the simulated dataset. Validation through field-measured data corroborated the rationale behind the proposed TBVA-Net architecture. Fine-tuning with a limited subset of labeled field data led to a transfer accuracy of 98.5%. This paper presents an innovative approach for detecting rail corrugation through vertical acceleration signals obtained from operational vehicles.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"146 ","pages":"Article 110349"},"PeriodicalIF":7.5,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-22 | DOI: 10.1016/j.ijhcs.2025.103469
Yael Avni, Alexandra Danial-Saad, Julia Sheidin, Tsvi Kuflik
This research explores how Interactive Multimodal Tangible Interfaces (IMTIs) can exploit advanced technologies such as 3D printing and microcontrollers to enhance museum experiences for blind and low vision (BLV) visitors. It investigates the potential for these technologies to create more inclusive and engaging museum environments. Four IMTIs were developed in collaboration with two blind volunteers, with each IMTI using a different interaction technique (autoplay, pushbuttons, and scanning sensors) developed in the pilot phase and having a different shape to align with the museum installation being presented. After refining the concepts, three of the four IMTIs were redesigned and developed into high-quality IMTIs and evaluated by BLV visitors (n=30). The results showed a clear preference among BLV visitors for using pushbuttons to operate the IMTI. Additionally, the research identified key areas for improvement, including 3D printing techniques for producing replicas, audio guide content, and design decisions that can enhance users' sense of control over the IMTI. These findings offer valuable insights for the future development of tactile replicas that promote contextual understanding, while contributing to a more inclusive and engaging museum experience.
{"title":"Enhancing museum accessibility for blind and low vision visitors through interactive multimodal tangible interfaces","authors":"Yael Avni , Alexandra Danial-Saad , Julia Sheidin , Tsvi Kuflik","doi":"10.1016/j.ijhcs.2025.103469","DOIUrl":"10.1016/j.ijhcs.2025.103469","url":null,"abstract":"<div><div>This research explores how Interactive Multimodal Tangible Interface (IMTIs) exploiting advanced technologies such as 3D printing and microcontrollers to enhance museum experiences for blind and low vision (BLV) visitors. It investigates the potential for these technologies to create more inclusive and engaging museum environments. Four IMTIs were developed in collaboration with two blind volunteers, with each IMTI using a different interaction technique (autoplay, pushbuttons, and scanning sensors) developed in the pilot phase and having a different shape to align with the museum installation being presented. After refining the concepts, three of the four IMTIs were redesigned and developed into high-quality IMTIs and evaluated by BLV visitors (n=30). The results showed a clear preference among BLV visitors for using pushbuttons to operate the IMTI. Additionally, the research identified key areas for improvement, including 3D printing techniques for producing replicas, audio guide content, and design decisions that can enhance users’ sense of control over the IMTI. These findings offer valuable insights for the future development of tactile replicas that promote contextual understanding, while contributing to a more inclusive and engaging museum experience.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"198 ","pages":"Article 103469"},"PeriodicalIF":5.3,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-22 | DOI: 10.1016/j.visinf.2025.01.002
Tomás Alves, Carlota Dias, Daniel Gonçalves, Sandra Gama
Understanding which factors affect information visualization transparency continues to be one of the most relevant challenges in current research, especially since trust shapes how users build on and use the knowledge gained. This work extends the current body of research by studying the user's subjective evaluation of the visualization transparency of hierarchical charts through the clarity, coverage, and look-and-feel dimensions. Additionally, we extend the user profile to better understand whether personality facets manifest a biasing effect on the trust-building process. Our results show that the data encodings do not affect how users perceive visualization transparency while controlling for personality factors. Regarding personality, the propensity to trust affects how users judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization. Specifically, we explore how personality factors manifest in this trust-building relationship and user interaction within visualization systems.
{"title":"Leveraging personality as a proxy of perceived transparency in hierarchical visualizations","authors":"Tomás Alves , Carlota Dias , Daniel Gonçalves , Sandra Gama","doi":"10.1016/j.visinf.2025.01.002","DOIUrl":"10.1016/j.visinf.2025.01.002","url":null,"abstract":"<div><div>Understanding which factors affect information visualization transparency continues to be one of the most relevant challenges in current research, especially since trust models how users build on the knowledge and use it. This work extends the current body of research by studying the user’s subjective evaluation of the visualization transparency of hierarchical charts through the clarity, coverage, and look and feel dimensions. Additionally, we extend the user profile to better understand whether personality facets manifest a biasing effect on the trust-building process. Our results show that the data encodings do not affect how users perceive visualization transparency while controlling for personality factors. Regarding personality, the propensity to trust affects how they judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization. Specifically, we explore how personality factors manifest in this trust-building relationship and user interaction within visualization systems.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 43-57"},"PeriodicalIF":3.8,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}