Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3549035
Kang Han;Li Xu
Examining the wake effect of wind farms is essential for optimizing their layout and improving their power generation efficiency. In large wind farms, many turbines are inevitably positioned downstream of others, which significantly reduces the power they can generate. In this study, we propose a graph representation learning model with an improved Transformer (GRL-ITransformer) that better integrates feature information, allowing the model to capture the dynamic temporal relationships among different variables and to establish their spatial relationships, thereby improving the precision of wind turbine wake-field prediction. Unlike previous approaches, which treat reduced-order modeling and prediction as separate steps, we combine the reduced-order technique with the proposed model so that it can efficiently and intelligently determine the number of modes required for prediction. A data-driven method is then employed to update the parameters, and the superiority of GRL-ITransformer is demonstrated by comparison against five classical intelligent algorithms spanning four categories. Comprehensive results show that GRL-ITransformer performs excellently in wind turbine wake-field prediction and reconstruction, achieving the lowest error among all models across a series of evaluation indexes.
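The abstract does not specify which reduced-order technique is used. A common choice for flow-field snapshot data is proper orthogonal decomposition (POD) via the SVD, where the number of retained modes is picked by a cumulative-energy threshold; a minimal sketch of that idea (function names, the toy field, and the threshold are illustrative, not from the paper):

```python
import numpy as np

def pod_modes(snapshots, energy_threshold=0.99):
    """Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots: (n_points, n_snapshots) array of wake-field samples.
    Returns the spatial modes and the number r of modes needed to
    capture `energy_threshold` of the total fluctuation energy.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)      # cumulative energy fraction
    r = int(np.searchsorted(energy, energy_threshold) + 1)
    return U[:, :r], r

# toy example: an essentially rank-2 field plus tiny noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
x = np.linspace(0, 1, 200)
field = np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t)) \
      + 0.5 * np.outer(np.cos(2 * np.pi * x), np.cos(4 * np.pi * t))
modes, r = pod_modes(field + 1e-6 * rng.standard_normal(field.shape))
```

On this synthetic field, two modes already carry over 99% of the energy, so the automatic mode count lands at 2.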
Title: GRL-ITransformer: An Intelligent Method for Multi-Wind-Turbine Wake Analysis Based on Graph Representation Learning With Improved Transformer (IEEE Access, vol. 13, pp. 43572–43592)
Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3548966
Firas Ramadan;Gil Shomron;Freddy Gabbay
Deep neural networks (DNNs) excel in various applications, such as computer vision, natural language processing, and other mission-critical systems. As the computational complexity of these models grows, there is an increasing need for specialized accelerators to handle the demanding workloads. In response, advancements in Very Large Scale Integration (VLSI) process nodes have significantly intensified the development of machine learning (ML) accelerators, offering enhanced transistor miniaturization and power efficiency. However, the susceptibility of these advanced nodes to transistor aging poses risks to ML accelerator performance, prediction accuracy, and reliability, which can impact the functional safety of mission-critical systems. This study focuses on the impact of asymmetric transistor aging, induced by Bias Temperature Instability (BTI), on systolic arrays (SAs), which are integral to many ML accelerators in mission-critical systems. Our aging-aware analysis indicates that SAs experience asymmetric aging, causing logical elements to age at varying rates. In addition, our simulations show that asymmetric transistor aging introduces persistent and transient faults in the SA’s datapath, compromising the overall resiliency of the ML model. Our simulation results show that even with less than 1% of transient failure events, the top-1 prediction accuracy of the ResNet-18 model drops significantly, by 32–50%, while with approximately 0.8% of transient failure events, the accuracy of PTQ4ViT drops by almost 90%. To address this issue, we propose new hardware mechanisms and design flow solutions that can successfully mitigate the impact of asymmetric transistor aging on ML accelerator reliability with minimal power and area overhead.
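To make the transient-fault mechanism concrete, here is a toy software stand-in for an output-stationary systolic matmul that occasionally flips one low bit of the accumulator after a MAC. This is purely illustrative (the fault rate, bit width, and injection point are assumptions, not the authors' BTI fault model):

```python
import numpy as np

def systolic_matmul_with_faults(A, B, fault_rate=0.0, rng=None):
    """Integer matmul emulating an output-stationary systolic array
    in which each MAC result can suffer a transient bit flip in its
    16-bit accumulator with probability `fault_rate`.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=np.int32)
    for i in range(m):
        for j in range(n):
            acc = 0
            for p in range(k):
                acc += int(A[i, p]) * int(B[p, j])   # the MAC step
                if rng.random() < fault_rate:        # transient fault event
                    acc ^= 1 << rng.integers(0, 16)  # flip one accumulator bit
            C[i, j] = acc
    return C
```

With `fault_rate=0` the result matches an exact matrix product; raising the rate corrupts individual output activations, which is how even sub-1% event rates can propagate into large top-1 accuracy drops.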
Title: The Effect of Asymmetric Transistor Aging on Systolic Arrays for Mission Critical Machine Learning Applications (IEEE Access, vol. 13, pp. 44041–44061)
Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3548914
Ghadeer O. Alsharif;Christos Anagnostopoulos;Angelos K. Marnerides
Locational Marginal Prices (LMPs) are critical indicators in modern energy markets, representing the cost of delivering electricity at specific locations while considering the generation and transmission constraints. LMPs facilitate the transition to dynamic energy markets by providing real-time pricing signals that reflect supply and demand conditions, thereby incentivizing efficient resource allocation and encouraging investments in renewable energy sources. However, determining LMPs requires the processing of vast amounts of data, including real-time electricity demand, generation capacities, transmission line statuses, and market bids. Owing to vulnerabilities in the underlying sensors and communication infrastructure, adversaries can launch profit-driven stealthy False Data Injection Attacks (FDIAs) to manipulate LMPs. Such manipulations can have severe consequences, including inflated electricity prices, reduced market efficiency, distorted competition, and hindered integration of renewable energy sources. Although several studies have examined the operational consequences of FDIAs, their financial impact on energy market outcomes remains insufficiently explored. This work presents a comprehensive review of FDIAs aimed at manipulating LMPs, a key pricing mechanism in modern energy markets. A detailed analysis was conducted to identify vulnerabilities arising from both the energy system infrastructure and market operations. In addition, existing energy market threat models and defense mechanisms are systematically reviewed. Finally, key research gaps are identified, and future research directions are outlined to enhance the resilience of energy markets against FDIA threats.
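As background for why LMPs differ by location, a toy two-bus dispatch illustrates the mechanism (the function, numbers, and closed-form pricing rule are illustrative, not from the review): when the transmission line from a cheap generator binds, the marginal megawatt at the load bus must come from the expensive local unit, so the two buses see different prices — exactly the signal a stealthy FDIA would try to distort.

```python
def two_bus_lmp(load, cheap_cost, cheap_cap, exp_cost, line_limit):
    """Toy locational marginal prices for a two-bus system:
    a cheap generator at bus 1 exports over a single line to the
    load at bus 2, where an expensive generator also sits.
    Returns (dispatch_g1, dispatch_g2, lmp_bus1, lmp_bus2) in MW and $/MWh.
    """
    g1 = min(load, cheap_cap, line_limit)   # cheap energy, capped by the line
    g2 = load - g1                          # remainder served locally
    lmp1 = cheap_cost
    # if the line (or g1) binds, the marginal MW at bus 2 comes from g2
    lmp2 = exp_cost if g2 > 0 else cheap_cost
    return g1, g2, lmp1, lmp2
```

For a 120 MW load, a $20/MWh generator behind an 80 MW line, and a $50/MWh local unit, the congested case prices bus 2 at $50/MWh while bus 1 stays at $20/MWh; with a 60 MW load the line is slack and both buses clear at $20/MWh.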
Title: Energy Market Manipulation via False-Data Injection Attacks: A Review (IEEE Access, vol. 13, pp. 42559–42573)
Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3548903
Ahmed El-Awamry;Feng Zheng;Thomas Kaiser;Maher Khaliel
This paper investigates the impact of phase noise on range estimation accuracy in harmonic Frequency-Modulated Continuous-Wave (FMCW) radar systems. Harmonic FMCW radars offer advantages in many applications due to their ability to suppress clutter. However, phase noise, particularly in harmonic systems, presents a significant challenge by degrading range accuracy and increasing frequency measurement errors. In this study, a comprehensive theoretical model is developed to quantify the effects of phase noise on range estimation errors, providing a foundation for understanding its implications on system performance. This model is rigorously validated through both extensive simulations and real-world measurements, offering a holistic assessment of phase noise behavior under practical operating conditions. The results demonstrate that phase noise severely impacts range estimation accuracy, with its effects becoming more pronounced at greater target distances. These findings are further substantiated by experimental evaluations using a practical harmonic radar system, where the system’s range accuracy is analyzed under realistic conditions. This study provides valuable insights and design guidelines for mitigating the impact of phase noise in harmonic FMCW radar architectures. The results highlight the necessity of advanced phase noise suppression techniques, including optimized hardware configurations and adaptive signal processing methods, to enhance performance in high-precision applications such as industrial positioning and biomedical sensing.
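The link between a frequency error and a range error in FMCW radar follows from the standard chirp relation R = c·f_b·T/(2B): a beat-frequency error δf (for instance, one caused by phase noise smearing the beat spectrum) maps linearly to a range error δR = c·T·δf/(2B). A first-order sketch of that relation (the numbers below are illustrative, not the paper's full phase-noise model):

```python
C = 3e8  # speed of light, m/s

def fmcw_range(f_beat, bandwidth, chirp_time):
    """Target range from the beat frequency of an FMCW chirp:
    R = c * f_b * T / (2 * B)."""
    return C * f_beat * chirp_time / (2 * bandwidth)

def range_error(delta_f, bandwidth, chirp_time):
    """First-order range error from a beat-frequency error delta_f,
    e.g. phase-noise-induced spectral smearing: dR = c * T * df / (2B)."""
    return C * chirp_time * delta_f / (2 * bandwidth)
```

For a 150 MHz sweep over 1 ms, a 30 kHz beat corresponds to a 30 m target, and a 100 Hz beat-frequency error already produces a 0.1 m range error, which illustrates why spectral purity matters for high-precision applications.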
Title: Impact of Phase Noise on Range Estimation Accuracy in Harmonic FMCW Radar Systems (IEEE Access, vol. 13, pp. 42669–42688)
The problem addressed in this article is reducing the computational cost of image classification when structural methods are applied. The main focus is on implementing granulation, screening, and clustering tools that process the set of elements of etalon (reference) descriptions. As a result of compression, each etalon is transformed into a reduced set of descriptors or data centroids, ensuring high speed and performance of image classification. Several simple data-compression schemes are assessed and compared with the traditional linear search method, along with two variants of etalon clustering. The comparison includes results for the entire data set and for each image separately. The paper presents software modeling results for the proposed approaches on two experimental sets containing images of football club logos and artistic paintings. The test sample includes images from the etalon database together with images that do not belong to it, with geometric shift, scale, and rotation transformations applied in the field of view. The research covers the practical issues of choosing threshold parameters that define descriptor equivalence and of minimizing the number of class votes needed to reach the required classification accuracy. Testing confirmed a significant processing acceleration and an increased level of classification accuracy due to compression; in particular, the modeling revealed a tenfold increase in speed. It was experimentally confirmed that the clustering apparatus has much higher potential, in terms of both classification accuracy and speed, than simple sifting or granulation schemes based on close description components.
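The centroid-compression idea can be sketched generically: replace each etalon's descriptor set with a few k-means centroids, then classify a query image by thresholded nearest-centroid voting. This is a sketch under assumed Euclidean descriptors (the k-means routine, threshold semantics, and names are illustrative, not the paper's exact scheme):

```python
import numpy as np

def kmeans(X, k, iters=20, rng=None):
    """Plain k-means on descriptor rows of X; returns the centroids."""
    rng = rng if rng is not None else np.random.default_rng(0)
    C = X[rng.choice(len(X), k, replace=False)]          # random init
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)  # point-centroid dists
        lab = d.argmin(axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
    return C

def classify(descriptors, etalon_centroids, threshold):
    """Each query descriptor votes for every etalon whose nearest
    centroid lies within `threshold`; the most-voted etalon wins."""
    votes = {name: 0 for name in etalon_centroids}
    for v in descriptors:
        for name, C in etalon_centroids.items():
            if np.min(np.linalg.norm(C - v, axis=1)) < threshold:
                votes[name] += 1
    return max(votes, key=votes.get)
```

Because voting now runs against a handful of centroids per etalon instead of the full descriptor set, the per-query cost drops roughly in proportion to the compression ratio, which is consistent with the order-of-magnitude speedup the abstract reports.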
Title: Image Description Compression in Classification Structural Methods | Authors: Volodymyr Gorokhovatskyi; Iryna Tvoroshenko; Olena Yakovleva; Monika Hudáková | DOI: 10.1109/ACCESS.2025.3548910 (IEEE Access, vol. 13, pp. 43631–43641)
Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3548967
Martina Toshevska;Sonja Gievska
Text style transfer is the task of altering the stylistic way in which a given sentence is written while maintaining its original meaning. The task requires models to identify and modify various stylistic properties, such as politeness, formality, and sentiment. With the advent of Large Language Models (LLMs) and their remarkable performance on a variety of tasks, numerous LLMs have emerged in the past few years. This paper provides an overview of recent advancements in text style transfer using LLMs. The discussion is focused on LLM-based approaches commonly used for text generation and their adoption for text style transfer. The paper is organized around three main groups of methods: prompting techniques for LLMs, fine-tuning techniques for LLMs, and memory-augmented LLMs. The discussion emphasizes the similarities and differences among the discussed methods and groups, along with the challenges and opportunities that are expected to direct and foster further research in the field.
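Of the three method groups the survey covers, prompting is the simplest to illustrate. A zero-shot prompt template of the kind commonly used for LLM-based style transfer looks like the following (the wording and function are illustrative, not taken from the paper):

```python
def style_transfer_prompt(sentence, source_style, target_style):
    """Build a zero-shot style-transfer prompt: ask the model to
    rewrite `sentence` from `source_style` into `target_style`
    while preserving its meaning."""
    return (
        f"Rewrite the following {source_style} sentence so that it is "
        f"{target_style}, preserving its meaning.\n"
        f"Sentence: {sentence}\n"
        f"Rewritten:"
    )
```

Few-shot variants extend the same template with input/output exemplar pairs, while the fine-tuning and memory-augmented families the paper discusses move the style knowledge out of the prompt and into the model weights or an external store.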
Title: LLM-Based Text Style Transfer: Have We Taken a Step Forward? | Authors: Martina Toshevska; Sonja Gievska | DOI: 10.1109/ACCESS.2025.3548967 (IEEE Access, vol. 13, pp. 44707–44721)
Face morphing attack detection (MAD) algorithms have become essential to overcome the vulnerability of face recognition systems. To address the lack of large-scale, publicly available datasets caused by privacy concerns and restrictions, in this work we propose a new method to generate a synthetic face morphing dataset with 2450 identities and more than 100k morphs. The proposed synthetic face morphing dataset is unique for its high-quality samples, different types of morphing algorithms, and its generalization to both single and differential morphing attack detection scenarios. For experiments, we apply face image quality assessment and vulnerability analysis to evaluate the proposed synthetic face morphing dataset from the perspective of biometric sample quality and morphing attack potential on face recognition systems. The results are benchmarked against an existing SOTA synthetic dataset and a representative non-synthetic dataset and indicate an improvement over the SOTA. Additionally, we design different protocols and study the applicability of using the proposed synthetic dataset for training morphing attack detection algorithms.
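The core ingredient of most morph-generation pipelines is a blend of two contributing faces. Stripped of the geometric landmark warping that real pipelines (and presumably this one) perform first, the blending step reduces to a pixel-wise alpha composite; a minimal sketch under that simplification:

```python
import numpy as np

def alpha_morph(face_a, face_b, alpha=0.5):
    """Pixel-wise blend of two aligned face images, the simplest
    ingredient of morph generation. Real pipelines additionally warp
    both faces toward averaged landmarks before blending."""
    a = face_a.astype(np.float64)
    b = face_b.astype(np.float64)
    m = (1 - alpha) * a + alpha * b
    return np.clip(m, 0, 255).astype(np.uint8)
```

At alpha = 0.5 the morph carries equal identity information from both subjects, which is what makes such "mated" samples useful for probing face recognition vulnerability.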
Title: SynMorph: Generating Synthetic Face Morphing Dataset With Mated Samples | Authors: Haoyu Zhang; Raghavendra Ramachandra; Kiran Raja; Christoph Busch | DOI: 10.1109/ACCESS.2025.3548957 (IEEE Access, vol. 13, pp. 44366–44384)
Pub Date: 2025-03-06 | DOI: 10.1109/ACCESS.2025.3549081
Huilan Li;Xunjia Zheng;Xiangyang Xu
This study presents a Driving Safety Field (DSF) modeling method for quantifying the risks faced by autonomous vehicles, aiming to achieve real-time, precise quantification of the risks posed by other road participants during driving. The proposed modeling method is based on the concept that accidents involve abnormal energy transfer. By considering the interactions between the ego vehicle and other road participants, the model introduces reduced mass and vector velocity and utilizes coordinate transformations to ensure that risks are distributed along the direction of relative velocity, thereby satisfying Newton’s third law of motion regarding interactions between vehicles. Compared to previous methods, the proposed DSF modeling method offers higher interpretability and more direct quantification of instantaneous risks. Specifically, any change in the speed or position of any road participant in the environment leads to significant changes in the DSF force map. By simulating and analyzing three risk-avoidance paths (driving straight, turning right, and making a U-turn) and comparing the distribution of driving risk field forces across 18 different scenarios, the results show that the proposed model can effectively identify driving risks. The U-turn strategy to the opposite lane is the optimal risk-avoidance solution, reducing overall risk by 73.93% when the vehicle speed is 6 m/s. This method provides an intuitive and comprehensive safety situational awareness capability for autonomous vehicles. This ability is of great significance for improving the safety of autonomous driving and reducing the occurrence of traffic accidents.
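To make the ingredients named in the abstract concrete (reduced mass, relative velocity direction, distance decay), here is a toy pairwise risk-force function. The functional form and constants are invented for illustration; the paper's actual field equations are not reproduced here:

```python
import numpy as np

def risk_field_force(m_ego, m_other, rel_pos, rel_vel, k=1.0):
    """Toy driving-risk force between two road users.

    rel_pos: position of the other participant relative to the ego (m).
    rel_vel: velocity of the other participant relative to the ego (m/s).
    Magnitude grows with the reduced mass and the closing speed and
    decays with distance; direction follows the relative velocity.
    """
    mu = m_ego * m_other / (m_ego + m_other)      # reduced mass
    dist = np.linalg.norm(rel_pos)
    speed = np.linalg.norm(rel_vel)
    if dist == 0 or speed == 0:
        return np.zeros(2)
    closing = -np.dot(rel_vel, rel_pos) / dist    # > 0 when approaching
    mag = k * mu * max(closing, 0.0) ** 2 / dist ** 2
    return mag * rel_vel / speed
```

A vehicle 10 m ahead closing at 5 m/s produces a nonzero force directed along the relative velocity, while the same vehicle receding produces none, which mirrors the abstract's point that the field responds to any change in a participant's speed or position.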
Title: Improved Method of Driving Safety Field Modeling by Considering the Abnormal Transfer of Energy Between Vehicles (IEEE Access, vol. 13, pp. 43190–43200)
Effective energy management in microgrids with renewable energy sources is crucial for maintaining system stability while minimizing operational costs. However, traditional Reinforcement Learning (RL) controllers often encounter challenges, including long training time and instability during the training process. This study introduces a novel approach that integrates Transfer Learning (TL) techniques with RL controllers to address these issues. By using synthetic datasets generated by advanced forecasting models, such as ResNet18+BiLSTM, the proposed method pre-trains RL agents, embedding domain knowledge to enhance performance. The results, based on one year of operational data, show that TL-enhanced RL controllers significantly reduce cumulative operation costs and system imbalance, achieving up to a 62.63% reduction in costs and an 80% improvement in balance compared to baseline models. Furthermore, the proposed method improves initial performance and shortens the training duration needed to reach operational thresholds. This approach demonstrates the potential of combining TL with RL to develop efficient, cost-effective solutions for real-time energy management in complex power systems.
{"title":"Enhancing Reinforcement Learning-Based Energy Management Through Transfer Learning With Load and PV Forecasting","authors":"Chang Xu;Masahiro Inuiguchi;Naoki Hayashi;Wong Jee Keen Raymond;Hazlie Mokhlis;Hazlee Azil Illias","doi":"10.1109/ACCESS.2025.3548990","DOIUrl":"https://doi.org/10.1109/ACCESS.2025.3548990","url":null,"abstract":"Effective energy management in microgrids with renewable energy sources is crucial for maintaining system stability while minimizing operational costs. However, traditional Reinforcement Learning (RL) controllers often encounter challenges, including long training time and instability during the training process. This study introduces a novel approach that integrates Transfer Learning (TL) techniques with RL controllers to address these issues. By using synthetic datasets generated by advanced forecasting models, such as ResNet18+BiLSTM, the proposed method pre-trains RL agents, embedding domain knowledge to enhance performance. The results, based on one year of operational data, show that TL-enhanced RL controllers significantly reduce cumulative operation costs and system imbalance, achieving up to a 62.63% reduction in costs and an 80% improvement in balance compared to baseline models. Furthermore, the proposed method improves initial performance and shortens the training duration needed to reach operational thresholds. 
This approach demonstrates the potential of combining TL with RL to develop efficient, cost-effective solutions for real-time energy management in complex power systems.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"43956-43972"},"PeriodicalIF":3.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10916641","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
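The pre-train-then-fine-tune idea in this abstract can be sketched with a toy tabular Q-learning controller: first train on a synthetic tariff (standing in for forecast-generated data), then fine-tune briefly on the "real" tariff instead of cold-starting. The tariffs, the one-battery environment, and all hyperparameters below are invented for illustration; the paper itself uses deep RL agents pre-trained on ResNet18+BiLSTM-generated data, which this sketch does not reproduce.

```python
import random

PRICE_REAL  = [0.10] * 6 + [0.30] * 12 + [0.10] * 6   # stand-in "real" tariff, $/kWh
PRICE_SYNTH = [0.12] * 6 + [0.28] * 12 + [0.12] * 6   # synthetic tariff from forecasts

ACTIONS = (-1, 0, +1)   # discharge 1 kWh, idle, charge 1 kWh
MAX_SOC = 2             # battery capacity in kWh

def valid_actions(soc):
    return [i for i, a in enumerate(ACTIONS) if 0 <= soc + a <= MAX_SOC]

def new_table():
    # Q[(hour, soc)] -> expected discounted cost of each of the three actions.
    return {(h, s): [0.0, 0.0, 0.0] for h in range(24) for s in range(MAX_SOC + 1)}

def train(q, price, episodes, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning (cost minimization) over (hour, state of charge)."""
    rng = random.Random(seed)
    for _ in range(episodes):
        soc = 0
        for hour in range(24):
            valid = valid_actions(soc)
            if rng.random() < eps:
                a = rng.choice(valid)                      # explore
            else:
                a = min(valid, key=lambda i: q[hour, soc][i])  # exploit
            cost = price[hour] * ACTIONS[a]   # pay to charge, earn back to discharge
            nxt = ((hour + 1) % 24, soc + ACTIONS[a])
            target = min(q[nxt][i] for i in valid_actions(nxt[1]))
            q[hour, soc][a] += alpha * (cost + gamma * target - q[hour, soc][a])
            soc = nxt[1]
    return q

# Transfer learning: pre-train on the forecast-driven synthetic tariff, then
# fine-tune only briefly on the real tariff instead of starting from scratch.
q = train(new_table(), PRICE_SYNTH, episodes=300)   # pre-training stage
q = train(q, PRICE_REAL, episodes=30)               # short fine-tuning stage
```

The fine-tuning stage inherits a Q-table that already encodes the broad peak/off-peak structure, so only the residual gap between the synthetic and real tariffs has to be learned, which is the mechanism behind the shortened training duration the abstract reports.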
Pub Date : 2025-03-06 DOI: 10.1109/ACCESS.2025.3549046
Songmin Lee;Jingon Joung;Jihoon Choi;Juyeop Kim
This paper proposes a practical gain calibration algorithm that estimates and compensates for channel amplitude asymmetry in a space-frequency line code (SFLC) orthogonal frequency division multiplexing (OFDM) system. During uplink (UL) communications from the base station (BS) to user equipment (UE), the SFLC BS can acquire channel state information (CSI) for downlink (DL) communications from the BS to the UE by utilizing UL-DL channel reciprocity. However, the channel amplitudes between UL and DL are generally observed to be asymmetric in practical channels. This asymmetry results in an unexpected phase rotation of the decoded SFLC symbols at the UE, as the CSI used for SFLC encoding at the BS does not align with the actual DL channels. To address this issue, we have designed a data-aided channel-gain calibration algorithm (CCA). Based on the mathematical derivation of the phase rotation of received SFLC data symbols at the UE, we developed the CCA to adjust the received gain and restore channel amplitude symmetry. To practically validate the designed CCA, we implemented an off-the-shelf SFLC-OFDM communication system on a software modem testbed using Universal Software Radio Peripherals (USRPs). Experiments conducted with real-time SFLC-OFDM signals demonstrate that the proposed CCA significantly enhances SFLC decoding performance.
{"title":"Data-Aided Channel-Gain Calibration Algorithm for Recovering Channel Amplitude Reciprocity in SFLC-OFDM Systems","authors":"Songmin Lee;Jingon Joung;Jihoon Choi;Juyeop Kim","doi":"10.1109/ACCESS.2025.3549046","DOIUrl":"https://doi.org/10.1109/ACCESS.2025.3549046","url":null,"abstract":"This paper proposes a practical gain calibration algorithm that estimates and compensates for channel amplitude asymmetry in a space-frequency line code (SFLC) orthogonal frequency division multiplexing (OFDM) system. During uplink (UL) communications from the base station (BS) to user equipment (UE), the SFLC BS can acquire channel state information (CSI) for downlink (DL) communications from the BS to the UE by utilizing UL-DL channel reciprocity. However, the channel amplitudes between UL and DL are generally observed to be asymmetric in practical channels. This asymmetry results in an unexpected phase rotation of the decoded SFLC symbols at the UE, as the CSI used for SFLC encoding at the BS does not align with the actual DL channels. To address this issue, we have designed a data-aided channel-gain calibration algorithm (CCA). Based on the mathematical derivation of the phase rotation of received SFLC data symbols at the UE, we developed the CCA to adjust the received gain and restore channel amplitude symmetry. To practically validate the designed CCA, we implemented an off-the-shelf SFLC-OFDM communication system on a software modem testbed using Universal Software Radio Peripherals (USRPs). 
Experiments conducted with real-time SFLC-OFDM signals demonstrate that the proposed CCA significantly enhances SFLC decoding performance.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"44231-44242"},"PeriodicalIF":3.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10916665","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
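The data-aided idea can be sketched numerically: model only the net effect the abstract describes (a common phase rotation of the decoded SFLC symbols caused by UL/DL amplitude asymmetry), estimate that rotation by correlating decoded symbols with known ones, and derotate. The channel model, noise level, and function names below are illustrative assumptions, not the paper's actual CCA derivation for SFLC-OFDM.

```python
import cmath
import random

rng = random.Random(7)

# Unit-energy QPSK constellation used for the known (data-aided) symbols.
QPSK = [cmath.exp(1j * cmath.pi * (0.25 + 0.5 * k)) for k in range(4)]

def decode_with_mismatch(symbols, phase_err, noise_std=0.05):
    """Decoded symbols under UL/DL asymmetry: a common phase rotation plus noise."""
    rot = cmath.exp(1j * phase_err)
    return [s * rot + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
            for s in symbols]

def estimate_rotation(decoded, known):
    """Data-aided estimate: correlate decoded symbols with the known ones."""
    corr = sum(z * k.conjugate() for z, k in zip(decoded, known))
    return cmath.phase(corr)

def calibrate(decoded, phase_est):
    """Derotate the decoded symbols by the estimated phase."""
    return [z * cmath.exp(-1j * phase_est) for z in decoded]

known = [QPSK[i % 4] for i in range(64)]          # symbols known to the receiver
rx = decode_with_mismatch(known, phase_err=0.6)   # rotated, noisy observations
phi = estimate_rotation(rx, known)
fixed = calibrate(rx, phi)
```

Averaging the correlation over many known symbols suppresses the per-symbol noise, so the residual rotation after calibration shrinks with the number of symbols used, which is why a data-aided approach can restore decoding performance without extra signaling.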