AUTOSAR-Compatible Level-4 Virtual ECU for the Verification of the Target Binary for Cloud-Native Development
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183704
Hyeongrae Kim, Junho Kwak, Jeonghun Cho
The rapid evolution of automotive software demands efficient and accurate development and verification processes. This study proposes a virtual electronic control unit (vECU) that allows for precise software testing without the need for hardware, thereby reducing development costs and enabling cloud-native development. The software was configured and built on Mobilgene, the Hyundai Autoever AUTomotive Open System Architecture (AUTOSAR) classic platform, and Renode was used for high-fidelity emulation. Custom peripherals were implemented in C# for the FlexTimer, the system clock generator, and the analog-to-digital converter to ensure the proper functionality of the vECU. Renode's GNU debugger server facilitates detailed software debugging in a cloud environment, further accelerating the development cycle. Additionally, automated testing was implemented using a vECU tester to enable the verification of the vECU. Performance evaluations demonstrated that the vECU's execution order and timing of tasks and runnable entities closely matched those of the actual ECU. The vECU tester also enabled fast and accurate verification. These findings confirm the potential of the AUTOSAR-compatible Level-4 vECU to replace hardware in development processes. Future efforts will focus on extending its capabilities to emulate a broader range of hardware components and more complex system integration scenarios, supporting more diverse research and development efforts.
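The abstract includes no code; as a minimal sketch of the kind of check an automated vECU tester could perform, the following compares task-activation order and timing between a vECU log and a real-ECU log. The log format, task names, and tolerance are assumptions for illustration, not details from the paper.

```python
# Hypothetical vECU timing check: compare task/runnable activation logs
# from the virtual ECU against the real ECU (format and tolerance assumed).
from dataclasses import dataclass

@dataclass
class TaskEvent:
    name: str        # task or runnable entity name
    start_ms: float  # activation time in milliseconds

def logs_match(vecu: list[TaskEvent], real: list[TaskEvent],
               tol_ms: float = 0.5) -> bool:
    """True if execution order matches and timings agree within tol_ms."""
    if [e.name for e in vecu] != [e.name for e in real]:
        return False  # execution order diverges
    return all(abs(v.start_ms - r.start_ms) <= tol_ms
               for v, r in zip(vecu, real))

vecu_log = [TaskEvent("Task_10ms", 10.0), TaskEvent("Runnable_Adc", 10.2)]
real_log = [TaskEvent("Task_10ms", 10.1), TaskEvent("Runnable_Adc", 10.3)]
print(logs_match(vecu_log, real_log))  # True: order and timing agree
```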
{"title":"AUTOSAR-Compatible Level-4 Virtual ECU for the Verification of the Target Binary for Cloud-Native Development","authors":"Hyeongrae Kim, Junho Kwak, Jeonghun Cho","doi":"10.3390/electronics13183704","DOIUrl":"https://doi.org/10.3390/electronics13183704","url":null,"abstract":"The rapid evolution of automotive software necessitates efficient and accurate development and verification processes. This study proposes a virtual electronic control unit (vECU) that allows for precise software testing without the need for hardware, thereby reducing developmental costs and enabling cloud-native development. The software was configured and built on a Hyundai Autoever AUTomotive Open System Architecture (AUTOSAR) classic platform, Mobilgene, and Renode was used for high-fidelity emulations. Custom peripherals in C# were implemented for the FlexTimer, system clock generator, and analog-to-digital converter to ensure the proper functionality of the vECU. Renode’s GNU debugger server function facilitates detailed software debugging in a cloud environment, further accelerating the developmental cycle. Additionally, automated testing was implemented using a vECU tester to enable the verification of the vECU. Performance evaluations demonstrated that the vECU’s execution order and timing of tasks and runnable entities closely matched those of the actual ECU. The vECU tester also enabled fast and accurate verification. These findings confirm the potential of the AUTOSAR-compatible Level-4 vECU to replace hardware in development processes. Future efforts will focus on extending capabilities to emulate a broader range of hardware components and complex system integration scenarios, supporting more diverse research and development efforts.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"17 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal Social Media Fake News Detection Based on 1D-CCNet Attention Mechanism
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183700
Yuhan Yan, Haiyan Fu, Fan Wu
Due to the explosive rise of multimodal content in online social communities, cross-modal learning is crucial for accurate fake news detection. However, current multimodal fake news detection techniques face challenges in extracting features from multiple modalities and fusing cross-modal information, failing to fully exploit the correlations and complementarities between different modalities. To address these issues, this paper proposes a fake news detection model based on a one-dimensional CCNet (1D-CCNet) attention mechanism, named BTCM. The method first utilizes BERT and BLIP-2 encoders to extract text and image features. It then employs the proposed 1D-CCNet attention module to process the input text and image sequences, enhancing the important aspects of the bimodal features. Meanwhile, the pre-trained BLIP-2 model is used for object detection in images, generating image descriptions that augment the text data and enhance the dataset; this step further strengthens the correlations between the modalities. Finally, the paper proposes a heterogeneous cross-feature fusion method (HCFFM) to integrate the image and text features. Comparative experiments were conducted on three public datasets: Twitter, Weibo, and GossipCop. The results show that the proposed model achieves excellent performance on all three.
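As a rough illustration of attention over a joint 1D sequence of text and image tokens, the sketch below concatenates the two modalities and applies standard multi-head attention; this is a stand-in for the general idea, not the authors' 1D-CCNet module, and the dimensions are assumed.

```python
# Sketch: fuse text and image token sequences with attention along one
# 1D axis (a simplification of the 1D attention idea; dims are assumed).
import torch
import torch.nn as nn

class Bimodal1DAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text: (B, Lt, D) from a BERT-style encoder; image: (B, Li, D)
        # from a BLIP-2-style encoder, both projected to a shared width D.
        x = torch.cat([text, image], dim=1)  # one joint 1D sequence
        out, _ = self.attn(x, x, x)          # attention along the sequence
        return self.norm(x + out)            # residual connection + norm

fused = Bimodal1DAttention()(torch.randn(2, 32, 256), torch.randn(2, 16, 256))
print(fused.shape)  # torch.Size([2, 48, 256])
```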
{"title":"Multimodal Social Media Fake News Detection Based on 1D-CCNet Attention Mechanism","authors":"Yuhan Yan, Haiyan Fu, Fan Wu","doi":"10.3390/electronics13183700","DOIUrl":"https://doi.org/10.3390/electronics13183700","url":null,"abstract":"Due to the explosive rise of multimodal content in online social communities, cross-modal learning is crucial for accurate fake news detection. However, current multimodal fake news detection techniques face challenges in extracting features from multiple modalities and fusing cross-modal information, failing to fully exploit the correlations and complementarities between different modalities. To address these issues, this paper proposes a fake news detection model based on a one-dimensional CCNet (1D-CCNet) attention mechanism, named BTCM. This method first utilizes BERT and BLIP-2 encoders to extract text and image features. Then, it employs the proposed 1D-CCNet attention mechanism module to process the input text and image sequences, enhancing the important aspects of the bimodal features. Meanwhile, this paper uses the pre-trained BLIP-2 model for object detection in images, generating image descriptions and augmenting text data to enhance the dataset. This operation aims to further strengthen the correlations between different modalities. Finally, this paper proposes a heterogeneous cross-feature fusion method (HCFFM) to integrate image and text features. Comparative experiments were conducted on three public datasets: Twitter, Weibo, and Gossipcop. The results show that the proposed model achieved excellent performance.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"25 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Overtaking Path Planning and Trajectory Tracking Control Based on Critical Safety Distance
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183698
Juan Huang, Songlin Sun, Kai Long, Lairong Yin, Zhiyong Zhang
The overtaking process for autonomous vehicles must prioritize both efficiency and safety, with safe distance being a crucial parameter. To address this, we propose an automatic overtaking path planning method based on minimal safe distance, ensuring both maneuvering efficiency and safety. This method combines the steady movement and comfort of the constant velocity offset model with the smoothness of the sine function model, creating a mixed-function model that is effective for planning lateral motion. For precise longitudinal motion planning, the overtaking process is divided into five stages, with each stage’s velocity and travel time calculated. To enhance the control system, the model predictive control (MPC) algorithm is applied, establishing a robust trajectory tracking control system for overtaking. Numerical simulation results demonstrate that the proposed overtaking path planning method can generate smooth and continuous paths. Under the MPC framework, the autonomous vehicle efficiently and safely performs automatic overtaking maneuvers, showcasing the method’s potential to improve the performance and reliability of autonomous driving systems.
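For intuition about the sine-function component, a common textbook lane-change path reaches the target lateral offset with zero lateral velocity at both ends, which is the smoothness property the mixed-function model exploits. The sketch below implements only that standard form, not the paper's exact blend with the constant velocity offset model.

```python
# Classic sine-function lateral path for a lane change of width W over
# longitudinal distance L: zero slope at both ends, smooth in between.
import numpy as np

def sine_lane_change(x: np.ndarray, W: float, L: float) -> np.ndarray:
    """Lateral offset y(x) with y(0)=0, y(L)=W, and y'(0)=y'(L)=0."""
    return W * (x / L) - (W / (2 * np.pi)) * np.sin(2 * np.pi * x / L)

x = np.linspace(0.0, 60.0, 7)             # longitudinal positions [m]
y = sine_lane_change(x, W=3.5, L=60.0)    # 3.5 m offset over 60 m
print(np.round(y, 2))  # rises from 0.0 and settles at 3.5
```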
{"title":"Automatic Overtaking Path Planning and Trajectory Tracking Control Based on Critical Safety Distance","authors":"Juan Huang, Songlin Sun, Kai Long, Lairong Yin, Zhiyong Zhang","doi":"10.3390/electronics13183698","DOIUrl":"https://doi.org/10.3390/electronics13183698","url":null,"abstract":"The overtaking process for autonomous vehicles must prioritize both efficiency and safety, with safe distance being a crucial parameter. To address this, we propose an automatic overtaking path planning method based on minimal safe distance, ensuring both maneuvering efficiency and safety. This method combines the steady movement and comfort of the constant velocity offset model with the smoothness of the sine function model, creating a mixed-function model that is effective for planning lateral motion. For precise longitudinal motion planning, the overtaking process is divided into five stages, with each stage’s velocity and travel time calculated. To enhance the control system, the model predictive control (MPC) algorithm is applied, establishing a robust trajectory tracking control system for overtaking. Numerical simulation results demonstrate that the proposed overtaking path planning method can generate smooth and continuous paths. Under the MPC framework, the autonomous vehicle efficiently and safely performs automatic overtaking maneuvers, showcasing the method’s potential to improve the performance and reliability of autonomous driving systems.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"79 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specific Emitter Identification Algorithm Based on Time–Frequency Sequence Multimodal Feature Fusion Network
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183703
Yuxuan He, Kunda Wang, Qicheng Song, Huixin Li, Bozhi Zhang
Specific emitter identification (SEI) is a challenge in the field of radar signal processing. It aims to extract individual fingerprint features of the signal. However, early works rely on either the raw signal or the time–frequency image alone and depend heavily on hand-crafted features or complex interactions in a high-dimensional feature space. This paper introduces the time–frequency multimodal feature fusion network, a novel architecture based on multimodal feature interaction. Specifically, we designed a time–frequency signal feature encoding module, a WVD (Wigner–Ville distribution) image feature encoding module, and a multimodal feature fusion module. Additionally, we propose a feature point filtering mechanism named FMM for signal embedding. Our algorithm outperforms state-of-the-art mainstream identification methods, achieving the highest accuracy, precision, recall, and F1-score and surpassing the second-best method by 9.3%, 8.2%, 9.2%, and 9.0%, respectively. Notably, the visual results show that the proposed method aligns with the signal generation mechanism, effectively capturing the distinctive fingerprint features of radar data. This paper establishes a foundational architecture for subsequent multimodal research in SEI tasks.
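Since one encoder branch operates on WVD images, a naive discrete Wigner-Ville distribution can be computed by FFT-ing the instantaneous autocorrelation kernel over the lag variable; the sketch below is a generic implementation for illustration, not code from the paper.

```python
# Naive discrete Wigner-Ville distribution: for each time n, FFT the
# kernel x[n+m] * conj(x[n-m]) over the lag m (generic sketch).
import numpy as np

def wigner_ville(x: np.ndarray) -> np.ndarray:
    """Return an (N, N) map; rows index time, columns frequency bins."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau = min(n, N - 1 - n)              # largest symmetric lag at n
        kernel = np.zeros(N, dtype=complex)
        for m in range(-tau, tau + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(kernel))
    return W

n = np.arange(256)
chirp = np.exp(1j * np.pi * 0.002 * n**2)    # analytic linear chirp
print(wigner_ville(chirp).shape)             # (256, 256)
```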
{"title":"Specific Emitter Identification Algorithm Based on Time–Frequency Sequence Multimodal Feature Fusion Network","authors":"Yuxuan He, Kunda Wang, Qicheng Song, Huixin Li, Bozhi Zhang","doi":"10.3390/electronics13183703","DOIUrl":"https://doi.org/10.3390/electronics13183703","url":null,"abstract":"Specific emitter identification is a challenge in the field of radar signal processing. Its aims to extract individual fingerprint features of the signal. However, early works are all designed using either signal or time–frequency image and heavily rely on the calculation of hand-crafted features or complex interactions in high-dimensional feature space. This paper introduces the time–frequency multimodal feature fusion network, a novel architecture based on multimodal feature interaction. Specifically, we designed a time–frequency signal feature encoding module, a wvd image feature encoding module, and a multimodal feature fusion module. Additionally, we propose a feature point filtering mechanism named FMM for signal embedding. Our algorithm demonstrates high performance in comparison with the state-of-the-art mainstream identification methods. The results indicate that our algorithm outperforms others, achieving the highest accuracy, precision, recall, and F1-score, surpassing the second-best by 9.3%, 8.2%, 9.2%, and 9%. Notably, the visual results show that the proposed method aligns with the signal generation mechanism, effectively capturing the distinctive fingerprint features of radar data. This paper establishes a foundational architecture for the subsequent multimodal research in SEI tasks.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"7 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Multi-Feature Attention Network for Image Dehazing
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183706
Hongyuan Jing, Jiaxing Chen, Chenyang Zhang, Shuang Wei, Aidong Chen, Mengmeng Zhang
Currently, deep-learning-based methods dominate image dehazing applications. Although many complicated dehazing models have achieved competitive performance, effective methods for extracting useful features remain under-researched. Thus, this paper presents an adaptive multi-feature attention network (AMFAN) consisting of a point-weighted attention (PWA) mechanism and an adaptive multi-layer feature fusion (AMLFF) module. We start by enhancing pixel-level attention for each feature map: we design a PWA block that aggregates global and local information of the feature map and makes the model adaptively focus on significant channels and regions. We then design a feature fusion block (FFB) that accomplishes feature-level fusion by exploiting a PWA block. Together, the FFB and PWA blocks constitute the AMLFF, which integrates three different levels of feature maps to effectively balance the weights of the inputs to the encoder and decoder. We also utilize a contrastive loss function to train the dehazing network so that the recovered image is far from the negative sample and close to the positive sample. Experimental results on both synthetic and real-world images demonstrate that this dehazing approach surpasses numerous other advanced techniques, both visually and quantitatively, showcasing its superiority in image dehazing.
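One plausible reading of a point-weighted attention block that "aggregates global and local information" is a global channel-attention branch multiplied with a local spatial-attention branch; the following is a hypothetical sketch of that interpretation, not the authors' implementation.

```python
# Hypothetical PWA-style block: global channel weights times local
# spatial weights, applied pointwise to the feature map (assumed design).
import torch
import torch.nn as nn

class PWABlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global context per channel
            nn.Conv2d(ch, ch, 1), nn.Sigmoid()  # channel weights in (0, 1)
        )
        self.local_branch = nn.Sequential(
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid()  # spatial weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.global_branch(x) * self.local_branch(x)

print(PWABlock(64)(torch.randn(1, 64, 32, 32)).shape)  # (1, 64, 32, 32)
```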
{"title":"Adaptive Multi-Feature Attention Network for Image Dehazing","authors":"Hongyuan Jing, Jiaxing Chen, Chenyang Zhang, Shuang Wei, Aidong Chen, Mengmeng Zhang","doi":"10.3390/electronics13183706","DOIUrl":"https://doi.org/10.3390/electronics13183706","url":null,"abstract":"Currently, deep-learning-based image dehazing methods occupy a dominant position in image dehazing applications. Although many complicated dehazing models have achieved competitive dehazing performance, effective methods for extracting useful features are still under-researched. Thus, an adaptive multi-feature attention network (AMFAN) consisting of the point-weighted attention (PWA) mechanism and the multi-layer feature fusion (AMLFF) is presented in this paper. We start by enhancing pixel-level attention for each feature map. Specifically, we design a PWA block, which aggregates global and local information of the feature map. We also employ PWA to make the model adaptively focus on significant channels/regions. Then, we design a feature fusion block (FFB), which can accomplish feature-level fusion by exploiting a PWA block. The FFB and PWA constitute our AMLFF. We design an AMLFF, which can integrate three different levels of feature maps to effectively balance the weights of the inputs to the encoder and decoder. We also utilize the contrastive loss function to train the dehazing network so that the recovered image is far from the negative sample and close to the positive sample. Experimental results on both synthetic and real-world images demonstrate that this dehazing approach surpasses numerous other advanced techniques, both visually and quantitatively, showcasing its superiority in image dehazing.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"214 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Semantic Segmentation Algorithm for Street Scenes Based on Attention Mechanism and Feature Fusion
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183699
Bao Wu, Xingzhong Xiong, Yong Wang
In computer vision, the task of semantic segmentation is crucial for applications such as autonomous driving and intelligent surveillance. However, achieving a balance between real-time performance and segmentation accuracy remains a significant challenge. Although Fast-SCNN is favored for its efficiency and low computational complexity, it still faces difficulties when handling complex street scene images. To address this issue, this paper presents an improved Fast-SCNN, aiming to enhance the accuracy and efficiency of semantic segmentation by incorporating a novel attention mechanism and an enhanced feature extraction module. Firstly, the integrated SimAM (Simple, Parameter-Free Attention Module) increases the network’s sensitivity to critical regions of the image and effectively adjusts the feature space weights across channels. Additionally, the refined pyramid pooling module in the global feature extraction module captures a broader range of contextual information through refined pooling levels. During the feature fusion stage, the introduction of an enhanced DAB (Depthwise Asymmetric Bottleneck) block and SE (Squeeze-and-Excitation) attention optimizes the network’s ability to process multi-scale information. Furthermore, the classifier module is extended by incorporating deeper convolutions and more complex convolutional structures, leading to a further improvement in model performance. These enhancements significantly improve the model’s ability to capture details and overall segmentation performance. Experimental results demonstrate that the proposed method excels in processing complex street scene images, achieving a mean Intersection over Union (mIoU) of 71.7% and 69.4% on the Cityscapes and CamVid datasets, respectively, while maintaining inference speeds of 81.4 fps and 113.6 fps. These results indicate that the proposed model effectively improves segmentation quality in complex street scenes while ensuring real-time processing capabilities.
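SimAM itself is a published parameter-free module with a closed-form energy; for reference, its standard formulation is shown below (this reproduces the public SimAM definition, independent of how this paper integrates it).

```python
# Standard SimAM: weight each point by the sigmoid of its inverse energy,
# computed from the per-channel spatial mean and variance.
import torch

def simam(x: torch.Tensor, lam: float = 1e-4) -> torch.Tensor:
    b, c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(dim=(2, 3), keepdim=True)        # per-channel spatial mean
    d = (x - mu).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n      # per-channel variance
    e_inv = d / (4 * (v + lam)) + 0.5            # inverse energy per point
    return x * torch.sigmoid(e_inv)

print(simam(torch.randn(1, 8, 16, 16)).shape)    # torch.Size([1, 8, 16, 16])
```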
{"title":"Real-Time Semantic Segmentation Algorithm for Street Scenes Based on Attention Mechanism and Feature Fusion","authors":"Bao Wu, Xingzhong Xiong, Yong Wang","doi":"10.3390/electronics13183699","DOIUrl":"https://doi.org/10.3390/electronics13183699","url":null,"abstract":"In computer vision, the task of semantic segmentation is crucial for applications such as autonomous driving and intelligent surveillance. However, achieving a balance between real-time performance and segmentation accuracy remains a significant challenge. Although Fast-SCNN is favored for its efficiency and low computational complexity, it still faces difficulties when handling complex street scene images. To address this issue, this paper presents an improved Fast-SCNN, aiming to enhance the accuracy and efficiency of semantic segmentation by incorporating a novel attention mechanism and an enhanced feature extraction module. Firstly, the integrated SimAM (Simple, Parameter-Free Attention Module) increases the network’s sensitivity to critical regions of the image and effectively adjusts the feature space weights across channels. Additionally, the refined pyramid pooling module in the global feature extraction module captures a broader range of contextual information through refined pooling levels. During the feature fusion stage, the introduction of an enhanced DAB (Depthwise Asymmetric Bottleneck) block and SE (Squeeze-and-Excitation) attention optimizes the network’s ability to process multi-scale information. Furthermore, the classifier module is extended by incorporating deeper convolutions and more complex convolutional structures, leading to a further improvement in model performance. These enhancements significantly improve the model’s ability to capture details and overall segmentation performance. Experimental results demonstrate that the proposed method excels in processing complex street scene images, achieving a mean Intersection over Union (mIoU) of 71.7% and 69.4% on the Cityscapes and CamVid datasets, respectively, while maintaining inference speeds of 81.4 fps and 113.6 fps. These results indicate that the proposed model effectively improves segmentation quality in complex street scenes while ensuring real-time processing capabilities.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"3 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault Prediction in Resistance Spot Welding: A Comparison of Machine Learning Approaches
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183693
Gabriele Ciravegna, Franco Galante, Danilo Giordano, Tania Cerquitelli, Marco Mellia
Resistance spot welding is widely adopted in manufacturing and is characterized by high reliability and simple automation in the production line. Detecting defective welds is a difficult task that requires either destructive testing or expensive and slow non-destructive testing (e.g., ultrasound). The robots performing the welding automatically collect contextual and process-specific data. In this paper, we test whether these data can be used to predict defective welds. To do so, we use a dataset collected in a real industrial plant that contains welding-related data labeled with ultrasonic quality checks. We use these data to develop several pipelines based on shallow and deep machine learning algorithms and test their performance in predicting defective welds. Our results show that, despite the development of different pipelines and complex models, the machine-learning-based defect detection algorithms achieve limited performance. Using a qualitative analysis of the model predictions, we show that correct predictions are often a consequence of inherent biases and intrinsic limitations in the data. We therefore conclude that the automatically collected data have limitations that hamper fault detection in a running production plant.
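As a minimal sketch of the comparison harness such a study implies, the following cross-validates two shallow pipelines on placeholder tabular features; the feature matrix, labels, and model choices are illustrative stand-ins, not the plant dataset or the paper's exact pipelines.

```python
# Illustrative weld-defect prediction harness: cross-validated shallow
# pipelines on synthetic stand-in data (random features and labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # placeholder process/context features
y = rng.integers(0, 2, size=500)      # placeholder ultrasonic labels

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("forest", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
    print(name, round(scores.mean(), 3))  # near chance on random labels
```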
{"title":"Fault Prediction in Resistance Spot Welding: A Comparison of Machine Learning Approaches","authors":"Gabriele Ciravegna, Franco Galante, Danilo Giordano, Tania Cerquitelli, Marco Mellia","doi":"10.3390/electronics13183693","DOIUrl":"https://doi.org/10.3390/electronics13183693","url":null,"abstract":"Resistance spot welding is widely adopted in manufacturing and is characterized by high reliability and simple automation in the production line. The detection of defective welds is a difficult task that requires either destructive or expensive and slow non-destructive testing (e.g., ultrasound). The robots performing the welding automatically collect contextual and process-specific data. In this paper, we test whether these data can be used to predict defective welds. To do so, we use a dataset collected in a real industrial plant that describes welding-related data labeled with ultrasonic quality checks. We use these data to develop several pipelines based on shallow and deep learning machine learning algorithms and test the performance of these pipelines in predicting defective welds. Our results show that, despite the development of different pipelines and complex models, the machine-learning-based defect detection algorithms achieve limited performance. Using a qualitative analysis of model predictions, we show that correct predictions are often a consequence of inherent biases and intrinsic limitations in the data. We therefore conclude that the automatically collected data have limitations that hamper fault detection in a running production plant.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"46 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconfigurable Intelligent Surface-Based Backscatter Communication for Data Transmission
Pub Date : 2024-09-18 | DOI: 10.3390/electronics13183702
Xingquan Li, Hongxia Zheng, Chunlong He, Yong Wang, Guoqing Wang
Data transmission is one of the critical factors in the future of the Internet of Things (IoT). Reconfigurable intelligent surface (RIS) and backscatter communication (BackCom) techniques offer a promising route to the low-power, sustainable transmission this requires and show great potential in wireless communication. Hence, this paper introduces an RIS-based BackCom system, in which the RIS receives energy from a base station (BS) and sends information by backscattering the signals from the BS. To maximize the sum rate of all IoT devices (IoTDs), we jointly optimize the time allocation, the RIS-reflecting phase shifts, and the transmit power of the BS using an alternating optimization algorithm. The simulation results illustrate the effectiveness and feasibility of the proposed wireless communication scheme and algorithm in IoT networks.
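One standard sub-step that an alternating scheme of this kind can use is a closed-form phase alignment of the RIS elements while the other variables are held fixed; the toy sketch below uses synthetic single-device channels and is not the paper's full algorithm.

```python
# Toy RIS phase-alignment step: choose each element's phase so reflected
# paths add coherently with the direct path (synthetic channels).
import numpy as np

rng = np.random.default_rng(1)
N = 32                                               # RIS elements
h_d = rng.normal() + 1j * rng.normal()               # direct BS -> device
h_r = rng.normal(size=N) + 1j * rng.normal(size=N)   # RIS -> device
g = rng.normal(size=N) + 1j * rng.normal(size=N)     # BS -> RIS

theta = np.exp(1j * (np.angle(h_d) - np.angle(h_r * g)))   # align phases
gain_aligned = np.abs(h_d + np.sum(theta * h_r * g)) ** 2
rand_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
gain_random = np.abs(h_d + np.sum(rand_phases * h_r * g)) ** 2
print(gain_aligned > gain_random)  # True: alignment boosts channel gain
```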
{"title":"Reconfigurable Intelligent Surface-Based Backscatter Communication for Data transmission","authors":"Xingquan Li, Hongxia Zheng, Chunlong He, Yong Wang, Guoqing Wang","doi":"10.3390/electronics13183702","DOIUrl":"https://doi.org/10.3390/electronics13183702","url":null,"abstract":"Data transmission is one of the critical factors in the future of the Internet of Things (IoT). The techniques of a reconfigurable intelligent surface (RIS) and backscatter communication (BackCom) are in need of a solution of realizing low-power sustainable transmission, which shows great potential in wireless communication. Hence, this paper introduces an RIS-based BackCom system, where the RIS receives energy from a base station (BS) and sends information by backscattering the signals from the BS. To maximize the sum rate of all IoT devices (IoTDs), we jointly optimized the time allocation, the RIS-reflecting phase shifts and the transmit power of the BS by exploiting an alternative optimization algorithm. The simulation results illustrate the effectiveness and the feasibility of the proposed wireless communication scheme and the proposed algorithm in IoT networks.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"118 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Denoising Diffusion Implicit Model for Camouflaged Object Detection
Pub Date : 2024-09-17 | DOI: 10.3390/electronics13183690
Wei Cai, Weijie Gao, Xinhao Jiang, Xin Wang, Xingyu Di
Camouflaged object detection (COD) is a challenging task that involves identifying objects that closely resemble their background. To detect camouflaged objects more accurately, we propose a diffusion-based COD network called DMNet. DMNet formulates COD as a denoising diffusion process from noisy boxes to prediction boxes. During the training stage, random boxes are diffused from the ground-truth boxes, and DMNet learns to reverse this process. In the sampling stage, DMNet progressively refines random boxes into prediction boxes. In addition, the feature extraction stage of the network is challenging due to the camouflaged object's blurred appearance and the low contrast between it and the background. First, we propose a parallel fusion module (PFM) to enhance the information extracted from the backbone. Then, we design a progressive feature pyramid network (PFPN) for feature fusion, in which an upsample adaptive spatial fusion module (UAF) balances different feature information by assigning weights to different layers. Finally, a location refinement module (LRM) is constructed to make DMNet attend to boundary details. We compared DMNet with other classical object-detection models on the COD10K dataset. Experimental results indicate that DMNet outperforms the others, achieving the best results across six evaluation metrics and significantly enhancing detection accuracy.
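The forward (corruption) half of such a box-denoising formulation typically follows the standard DDPM process applied to normalized box coordinates; the schematic sketch below assumes a linear noise schedule and a cxcywh parameterization, neither confirmed by the abstract.

```python
# Schematic forward diffusion for boxes: corrupt ground-truth boxes with
# t-dependent Gaussian noise; a network is then trained to reverse this.
import torch

def corrupt_boxes(gt_boxes: torch.Tensor, t: torch.Tensor,
                  alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """q(x_t | x_0) applied to normalized (batch, boxes, 4) coordinates."""
    a = alphas_cumprod[t].view(-1, 1, 1)   # cumulative noise level per item
    noise = torch.randn_like(gt_boxes)
    return a.sqrt() * gt_boxes + (1 - a).sqrt() * noise

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # common linear schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
gt = torch.rand(4, 10, 4)                  # (batch, boxes, cxcywh in [0,1])
t = torch.randint(0, T, (4,))              # random timestep per sample
print(corrupt_boxes(gt, t, alphas_cumprod).shape)  # torch.Size([4, 10, 4])
```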
{"title":"Denoising Diffusion Implicit Model for Camouflaged Object Detection","authors":"Wei Cai, Weijie Gao, Xinhao Jiang, Xin Wang, Xingyu Di","doi":"10.3390/electronics13183690","DOIUrl":"https://doi.org/10.3390/electronics13183690","url":null,"abstract":"Camouflaged object detection (COD) is a challenging task that involves identifying objects that closely resemble their background. In order to detect camouflaged objects more accurately, we propose a diffusion model for the COD network called DMNet. DMNet formulates COD as a denoising diffusion process from noisy boxes to prediction boxes. During the training stage, random boxes diffuse from ground-truth boxes, and DMNet learns to reverse this process. In the sampling stage, DMNet progressively refines random boxes to prediction boxes. In addition, due to the camouflaged object’s blurred appearance and the low contrast between it and the background, the feature extraction stage of the network is challenging. Firstly, we proposed a parallel fusion module (PFM) to enhance the information extracted from the backbone. Then, we designed a progressive feature pyramid network (PFPN) for feature fusion, in which the upsample adaptive spatial fusion module (UAF) balances the different feature information by assigning weights to different layers. Finally, a location refinement module (LRM) is constructed to make DMNet pay attention to the boundary details. We compared DMNet with other classical object-detection models on the COD10K dataset. Experimental results indicated that DMNet outperformed others, achieving optimal effects across six evaluation metrics and significantly enhancing detection accuracy.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"40 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Bottom-Up Towards a Completely Decentralized Autonomous Electric Grid Based on the Concept of a Decentralized Autonomous Substation
Pub Date : 2024-09-17 | DOI: 10.3390/electronics13183683
Alain Aoun, Nadine Kashmar, Mehdi Adda, Hussein Ibrahim
The idea of a decentralized electric grid has shifted from concept to reality. The growing integration of distributed energy resources (DERs) has transformed the traditional centralized electric grid into a decentralized one. However, while most efforts to manage and optimize this decentralization focus on the electrical infrastructure layer, the operational and control layer, as well as the data management layer, have received less attention. Current electric grids rely on centralized control centers (CCCs) that serve as the grid's brain, where operators monitor, control, and manage the entire infrastructure. Hence, any disruption that disconnects the CCC, whether caused by a cyberattack or a natural event, could have numerous negative effects on grid operations, including socioeconomic impacts, equipment damage, market repercussions, and blackouts. This article introduces the idea of a fully decentralized electric grid that leverages autonomous smart substations and blockchain integration for decentralized data management and control. The aim is to propose a blockchain-enabled decentralized electric grid model and to examine its potential impact on energy markets, sustainability, and resilience. The model presented underlines the transformative potential of decentralized autonomous grids in revolutionizing energy systems for better operability, management, and flexibility.
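Purely as a conceptual illustration of the tamper-evident shared data layer such an architecture implies (not a consensus protocol, and not the article's design), substation state records can be chained by hash as follows.

```python
# Conceptual sketch: a hash-chained log of substation state records,
# where each block commits to its predecessor's hash.
import hashlib
import json
import time

def add_block(chain: list[dict], payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"ts": time.time(), "payload": payload, "prev": prev}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

chain: list[dict] = []
add_block(chain, {"substation": "SS-12", "breaker": "open", "load_mw": 4.2})
add_block(chain, {"substation": "SS-12", "breaker": "closed", "load_mw": 5.1})
print(chain[1]["prev"] == chain[0]["hash"])  # True: records are linked
```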
{"title":"From Bottom-Up Towards a Completely Decentralized Autonomous Electric Grid Based on the Concept of a Decentralized Autonomous Substation","authors":"Alain Aoun, Nadine Kashmar, Mehdi Adda, Hussein Ibrahim","doi":"10.3390/electronics13183683","DOIUrl":"https://doi.org/10.3390/electronics13183683","url":null,"abstract":"The idea of a decentralized electric grid has shifted from being a concept to a reality. The growing integration of distributed energy resources (DERs) has transformed the traditional centralized electric grid into a decentralized one. However, while most efforts to manage and optimize this decentralization focus on the electrical infrastructure layer, the operational and control layer, as well as the data management layer, have received less attention. Current electric grids rely on centralized control centers (CCCs) that serve as the electric grid’s brain, where operators monitor, control, and manage the entire grid infrastructure. Hence, any disruption caused by a cyberattack or a natural event, disconnecting the CCC, could have numerous negative effects on grid operations, including socioeconomic impacts, equipment damage, market repercussions, and blackouts. This article introduces the idea of a fully decentralized electric grid that leverages autonomous smart substations and blockchain integration for decentralized data management and control. The aim is to propose a blockchain-enabled decentralized electric grid model and its potential impact on energy markets, sustainability, and resilience. The model presented underlines the transformative potential of decentralized autonomous grids in revolutionizing energy systems for better operability, management, and flexibility.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"25 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}