Advancements in Electronic Component Assembly: Real-Time AI-Driven Inspection Techniques
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183707
Eyal Weiss
This study presents an advanced methodology for improving electronic assembly quality through real-time, inline inspection utilizing state-of-the-art artificial intelligence (AI) and deep learning technologies. The primary goal is to ensure compliance with stringent manufacturing standards, notably IPC-A-610 and IPC-J-STD-001. Employing the existing infrastructure of pick-and-place machines, this system captures high-resolution images of electronic components during the assembly process. These images are analyzed instantly by AI algorithms capable of detecting a variety of defects, including damage, corrosion, counterfeiting, and structural irregularities in components and their leads. This proactive approach shifts away from conventional reactive quality assurance methods by integrating real-time defect detection and strict adherence to industry standards into the assembly process. With an accuracy rate exceeding 99.5% and processing speeds of about 5 milliseconds per component, this system enables manufacturers to identify and address defects promptly, thereby significantly enhancing manufacturing quality and reliability. The implementation leverages big data analytics, analyzing over a billion components to refine detection algorithms and ensure robust performance. By pre-empting and resolving defects before they escalate, the methodology minimizes production disruptions and fosters a more efficient workflow, ultimately resulting in considerable cost reductions. This paper showcases multiple case studies of component defects, highlighting the diverse types of defects identified through AI and deep learning. These examples, combined with detailed performance metrics, provide insights into optimizing electronic component assembly processes, contributing to elevated production efficiency and quality.
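As a rough illustration of the inline workflow described above (not the authors' implementation), the sketch below shows a per-component inspection step that classifies a camera crop and checks it against the roughly 5 ms latency budget; the model stub, class labels, and decision logic are hypothetical.

```python
# Minimal sketch of an inline, per-component inspection step (hypothetical;
# the paper's actual model, labels, and thresholds are not reproduced here).
import time
import numpy as np

DEFECT_CLASSES = ["ok", "damage", "corrosion", "counterfeit", "lead_irregularity"]
LATENCY_BUDGET_S = 0.005  # ~5 ms per component, as reported in the abstract

def classify(component_image: np.ndarray):
    """Placeholder for the deep-learning classifier: returns (label, confidence)."""
    scores = np.random.dirichlet(np.ones(len(DEFECT_CLASSES)))  # stand-in for model output
    idx = int(np.argmax(scores))
    return DEFECT_CLASSES[idx], float(scores[idx])

def inspect(component_image: np.ndarray) -> dict:
    start = time.perf_counter()
    label, confidence = classify(component_image)
    elapsed = time.perf_counter() - start
    return {
        "label": label,
        "confidence": confidence,
        "within_budget": elapsed <= LATENCY_BUDGET_S,  # flag crops that miss the 5 ms target
        "reject": label != "ok",                        # divert defective parts before placement
    }

if __name__ == "__main__":
    frame = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a pick-and-place camera crop
    print(inspect(frame))
```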
{"title":"Advancements in Electronic Component Assembly: Real-Time AI-Driven Inspection Techniques","authors":"Eyal Weiss","doi":"10.3390/electronics13183707","DOIUrl":"https://doi.org/10.3390/electronics13183707","url":null,"abstract":"This study presents an advanced methodology for improving electronic assembly quality through real-time, inline inspection utilizing state-of-the-art artificial intelligence (AI) and deep learning technologies. The primary goal is to ensure compliance with stringent manufacturing standards, notably IPC-A-610 and IPC-J-STD-001. Employing the existing infrastructure of pick-and-place machines, this system captures high-resolution images of electronic components during the assembly process. These images are analyzed instantly by AI algorithms capable of detecting a variety of defects, including damage, corrosion, counterfeit, and structural irregularities in components and their leads. This proactive approach shifts from conventional reactive quality assurance methods by integrating real-time defect detection and strict adherence to industry standards into the assembly process. With an accuracy rate exceeding 99.5% and processing speeds of about 5 milliseconds per component, this system enables manufacturers to identify and address defects promptly, thereby significantly enhancing manufacturing quality and reliability. The implementation leverages big data analytics, analyzing over a billion components to refine detection algorithms and ensure robust performance. By pre-empting and resolving defects before they escalate, the methodology minimizes production disruptions and fosters a more efficient workflow, ultimately resulting in considerable cost reductions. This paper showcases multiple case studies of component defects, highlighting the diverse types of defects identified through AI and deep learning. These examples, combined with detailed performance metrics, provide insights into optimizing electronic component assembly processes, contributing to elevated production efficiency and quality.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"11 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trust-Based Detection and Mitigation of Cyber Attacks in Distributed Cooperative Control of Islanded AC Microgrids
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183692
Md Abu Taher, Mohd Tariq, Arif I. Sarwat
In this study, we address the challenge of detecting and mitigating cyber attacks in the distributed cooperative control of islanded AC microgrids, with a particular focus on detecting False Data Injection Attacks (FDIAs), a significant threat to the Smart Grid (SG). The SG integrates traditional power systems with communication networks, creating a complex system with numerous vulnerable links, making it a prime target for cyber attacks. These attacks can lead to the disclosure of private data, control network failures, and even blackouts. Unlike machine learning-based approaches that require extensive datasets and mathematical models dependent on accurate system modeling, our method is free from such dependencies. To enhance the microgrid’s resilience against these threats, we propose a resilient control algorithm by introducing a novel trustworthiness parameter into the traditional cooperative control algorithm. Our method evaluates the trustworthiness of distributed energy resources (DERs) based on their voltage measurements and exchanged information, using Kullback-Leibler (KL) divergence to dynamically adjust control actions. We validated our approach through simulations on both the IEEE-34 bus feeder system with eight DERs and a larger microgrid with twenty-two DERs. The results demonstrated a detection accuracy of around 100%, with millisecond-range mitigation times, ensuring rapid system recovery. Additionally, our method improved system stability by up to almost 100% under attack scenarios, showcasing its effectiveness in promptly detecting attacks and maintaining system resilience. These findings highlight the potential of our approach to enhance the security and stability of microgrid systems in the face of cyber threats.
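A minimal sketch of how such a KL-divergence-based trust weight might gate a cooperative voltage-consensus update is given below; the histogram binning, the exp(-KL) mapping, and the consensus gain are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: score each neighboring DER by the KL divergence between its reported
# voltage distribution and the locally measured one, then down-weight low-trust
# neighbors in the cooperative (consensus) update.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> float:
    p = p + eps
    q = q + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def trust_weight(local_v: np.ndarray, reported_v: np.ndarray, bins: int = 20) -> float:
    lo = min(local_v.min(), reported_v.min())
    hi = max(local_v.max(), reported_v.max())
    p, _ = np.histogram(local_v, bins=bins, range=(lo, hi))
    q, _ = np.histogram(reported_v, bins=bins, range=(lo, hi))
    return float(np.exp(-kl_divergence(p.astype(float), q.astype(float))))  # 1.0 = fully trusted

def consensus_update(v_own: float, neighbor_v: dict, local_window: np.ndarray,
                     reported_windows: dict, gain: float = 0.1) -> float:
    # Weighted consensus step: an FDIA that skews a neighbor's reported voltages raises
    # its KL divergence and therefore shrinks its influence on the control action.
    correction = 0.0
    for der_id, v_j in neighbor_v.items():
        w = trust_weight(local_window, reported_windows[der_id])
        correction += w * (v_j - v_own)
    return v_own + gain * correction
```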
{"title":"Trust-Based Detection and Mitigation of Cyber Attacks in Distributed Cooperative Control of Islanded AC Microgrids","authors":"Md Abu Taher, Mohd Tariq, Arif I. Sarwat","doi":"10.3390/electronics13183692","DOIUrl":"https://doi.org/10.3390/electronics13183692","url":null,"abstract":"In this study, we address the challenge of detecting and mitigating cyber attacks in the distributed cooperative control of islanded AC microgrids, with a particular focus on detecting False Data Injection Attacks (FDIAs), a significant threat to the Smart Grid (SG). The SG integrates traditional power systems with communication networks, creating a complex system with numerous vulnerable links, making it a prime target for cyber attacks. These attacks can lead to the disclosure of private data, control network failures, and even blackouts. Unlike machine learning-based approaches that require extensive datasets and mathematical models dependent on accurate system modeling, our method is free from such dependencies. To enhance the microgrid’s resilience against these threats, we propose a resilient control algorithm by introducing a novel trustworthiness parameter into the traditional cooperative control algorithm. Our method evaluates the trustworthiness of distributed energy resources (DERs) based on their voltage measurements and exchanged information, using Kullback-Leibler (KL) divergence to dynamically adjust control actions. We validated our approach through simulations on both the IEEE-34 bus feeder system with eight DERs and a larger microgrid with twenty-two DERs. The results demonstrated a detection accuracy of around 100%, with millisecond range mitigation time, ensuring rapid system recovery. Additionally, our method improved system stability by up to almost 100% under attack scenarios, showcasing its effectiveness in promptly detecting attacks and maintaining system resilience. These findings highlight the potential of our approach to enhance the security and stability of microgrid systems in the face of cyber threats.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"214 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention-Enhanced Guided Multimodal and Semi-Supervised Networks for Visual Acuity (VA) Prediction after Anti-VEGF Therapy
Yizhen Wang, Yaqi Wang, Xianwen Liu, Weiwei Cui, Peng Jin, Yuxia Cheng, Gangyong Jia
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183701
The development of telemedicine technology has provided new avenues for the diagnosis and treatment of patients with diabetic macular edema (DME), especially after anti-vascular endothelial growth factor (VEGF) therapy, and accurate prediction of patients’ visual acuity (VA) is important for optimizing follow-up treatment plans. However, current automated prediction methods often require human intervention and have poor interpretability, making them difficult to apply widely in telemedicine scenarios. Therefore, an efficient, automated prediction model with good interpretability is urgently needed to improve the treatment outcomes of DME patients in telemedicine settings. In this study, we propose a multimodal algorithm based on a semi-supervised learning framework, which aims to combine optical coherence tomography (OCT) images and clinical data to automatically predict the VA values of patients after anti-VEGF treatment. Our approach first performs retinal segmentation of OCT images via a semi-supervised learning framework, which in turn extracts key biomarkers such as central retinal thickness (CST). Subsequently, these features are combined with the patient’s clinical data and fed into a multimodal learning algorithm for VA prediction. Our model performed well in the Asia Pacific Tele-Ophthalmology Society (APTOS) Big Data Competition, earning fifth place in the overall score and third place in VA prediction accuracy. Retinal segmentation achieved an accuracy of 99.03 ± 0.19% on the HZO dataset. This multimodal algorithmic framework is important in the context of telemedicine, especially for the treatment of DME patients.
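The snippet below illustrates only the late-fusion idea under stated assumptions: an OCT-derived biomarker such as CST is concatenated with clinical fields and passed to a regressor that predicts post-treatment VA. The feature set, the choice of regressor, and the synthetic data are hypothetical and are not taken from the paper.

```python
# Illustrative late-fusion sketch: OCT-derived biomarkers plus clinical data -> VA regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200  # synthetic cohort, for demonstration only

cst_um = rng.normal(320, 60, n)         # central retinal thickness from the segmentation stage
baseline_va = rng.normal(0.5, 0.2, n)   # pre-treatment VA (hypothetical clinical field)
age = rng.normal(63, 9, n)
injections = rng.integers(1, 6, n)      # number of anti-VEGF injections (hypothetical field)

X = np.column_stack([cst_um, baseline_va, age, injections])
y = baseline_va + 0.1 - 0.0005 * (cst_um - 300) + rng.normal(0, 0.05, n)  # toy target

model = GradientBoostingRegressor().fit(X, y)
print("predicted post-anti-VEGF VA:", model.predict(X[:3]))
```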
{"title":"Attention-Enhanced Guided Multimodal and Semi-Supervised Networks for Visual Acuity (VA) Prediction after Anti-VEGF Therapy","authors":"Yizhen Wang , Yaqi Wang, Xianwen Liu, Weiwei Cui, Peng Jin, Yuxia Cheng, Gangyong Jia","doi":"10.3390/electronics13183701","DOIUrl":"https://doi.org/10.3390/electronics13183701","url":null,"abstract":"The development of telemedicine technology has provided new avenues for the diagnosis and treatment of patients with DME, especially after anti-vascular endothelial growth factor (VEGF) therapy, and accurate prediction of patients’ visual acuity (VA) is important for optimizing follow-up treatment plans. However, current automated prediction methods often require human intervention and have poor interpretability, making it difficult to be widely applied in telemedicine scenarios. Therefore, an efficient, automated prediction model with good interpretability is urgently needed to improve the treatment outcomes of DME patients in telemedicine settings. In this study, we propose a multimodal algorithm based on a semi-supervised learning framework, which aims to combine optical coherence tomography (OCT) images and clinical data to automatically predict the VA values of patients after anti-VEGF treatment. Our approach first performs retinal segmentation of OCT images via a semi-supervised learning framework, which in turn extracts key biomarkers such as central retinal thickness (CST). Subsequently, these features are combined with the patient’s clinical data and fed into a multimodal learning algorithm for VA prediction. Our model performed well in the Asia Pacific Tele-Ophthalmology Society (APTOS) Big Data Competition, earning fifth place in the overall score and third place in VA prediction accuracy. Retinal segmentation achieved an accuracy of 99.03 ± 0.19% on the HZO dataset. This multimodal algorithmic framework is important in the context of telemedicine, especially for the treatment of DME patients.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"39 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Low-Power, High-Resolution Analog Front-End Circuit for Carbon-Based SWIR Photodetector
Yuyan Zhang, Zhifeng Chen, Wenli Liao, Weirong Xi, Chengying Chen, Jianhua Jiang
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183708
Carbon nanotube field-effect transistors (CNT-FETs) have shown great promise in infrared image detection due to their high mobility, low cost, and compatibility with silicon-based technologies. This paper presents the design and simulation of a column-level analog front-end (AFE) circuit tailored for carbon-based short-wave infrared (SWIR) photodetectors. The AFE integrates a Capacitor Trans-impedance Amplifier (CTIA) for current-to-voltage conversion, coupled with Correlated Double Sampling (CDS) for noise reduction and operational amplifier offset suppression. A 10-bit/125 kHz Successive Approximation analog-to-digital converter (SAR ADC) completes the signal processing chain, achieving rail-to-rail input/output with minimized component count. Fabricated using 0.18 μm CMOS technology, the AFE demonstrates a high signal-to-noise ratio (SNR) of 59.27 dB and an Effective Number of Bits (ENOB) of 9.35, with a detectable current range from 500 pA to 100.5 nA and a total power consumption of 7.5 mW. These results confirm the suitability of the proposed AFE for high-precision, low-power SWIR detection systems, with potential applications in medical imaging, night vision, and autonomous driving systems.
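For context, the reported resolution figures can be read through the standard converter relation between ENOB and SINAD (a generic textbook formula, not a derivation from the paper's measurements); an ENOB of 9.35 corresponds to a SINAD of roughly 58.1 dB, slightly below the quoted 59.27 dB SNR because SINAD additionally accounts for distortion.

```latex
\mathrm{ENOB} = \frac{\mathrm{SINAD}\,[\mathrm{dB}] - 1.76}{6.02}
\quad\Longrightarrow\quad
\mathrm{SINAD} \approx 6.02 \times 9.35 + 1.76 \approx 58.1\ \mathrm{dB}
```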
{"title":"A Low-Power, High-Resolution Analog Front-End Circuit for Carbon-Based SWIR Photodetector","authors":"Yuyan Zhang, Zhifeng Chen, Wenli Liao, Weirong Xi, Chengying Chen, Jianhua Jiang","doi":"10.3390/electronics13183708","DOIUrl":"https://doi.org/10.3390/electronics13183708","url":null,"abstract":"Carbon nanotube field-effect transistors (CNT-FETs) have shown great promise in infrared image detection due to their high mobility, low cost, and compatibility with silicon-based technologies. This paper presents the design and simulation of a column-level analog front-end (AFE) circuit tailored for carbon-based short-wave infrared (SWIR) photodetectors. The AFE integrates a Capacitor Trans-impedance Amplifier (CTIA) for current-to-voltage conversion, coupled with Correlated Double Sampling (CDS) for noise reduction and operational amplifier offset suppression. A 10-bit/125 kHz Successive Approximation analog-to-digital converter (SAR ADC) completes the signal processing chain, achieving rail-to-rail input/output with minimized component count. Fabricated using 0.18 μm CMOS technology, the AFE demonstrates a high signal-to-noise ratio (SNR) of 59.27 dB and an Effective Number of Bits (ENOB) of 9.35, with a detectable current range from 500 pA to 100.5 nA and a total power consumption of 7.5 mW. These results confirm the suitability of the proposed AFE for high-precision, low-power SWIR detection systems, with potential applications in medical imaging, night vision, and autonomous driving systems.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"10 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Routing Using Fuzzy Logic for URLLC in 5G Networks Based on Software-Defined Networking
Yan-Jing Wu, Menq-Chyun Chen, Wen-Shyang Hwang, Ming-Hua Cheng
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183694
Software-defined networking (SDN) is an emerging networking technology with a central point, called the controller, on the control plane. This controller communicates with the application and data planes. In fifth-generation (5G) mobile wireless networks and beyond, specific levels of service quality are defined for different traffic types. Ultra-reliable low-latency communication (URLLC) is one of the key services in 5G. This paper presents a fuzzy logic (FL)-based dynamic routing (FLDR) mechanism with congestion avoidance for URLLC on SDN-based 5G networks. By periodically monitoring the network status and making forwarding decisions on the basis of fuzzy inference rules, the FLDR mechanism not only can reroute in real time, but also can cope with network status uncertainty owing to FL’s fault tolerance capabilities. Three input parameters, normalized throughput, packet delay, and link utilization, were employed as crisp inputs to the FL control system because they had a more accurate correlation with the network performance measures we studied. The crisp output of the FL control system, i.e., path weight, and a predefined threshold of packet loss ratio on a path were applied to make routing decisions. We evaluated the performance of the proposed FLDR mechanism on the Mininet simulator by installing three additional modules, topology discovery, monitoring, and rerouting with FL, on the traditional control plane of SDN. The superiority of the proposed FLDR over the other existing FL-based routing schemes was demonstrated using three performance measures, system throughput, packet loss rate, and packet delay versus traffic load in the system.
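The sketch below shows one plausible shape of the fuzzy inference step, with triangular memberships over the three crisp inputs and a small Mamdani-style rule base defuzzified to a crisp path weight; the breakpoints and rules are illustrative assumptions, not the controller defined in the paper.

```python
# Hedged sketch: derive a path weight from normalized throughput, packet delay, and
# link utilization via triangular memberships and weighted-average defuzzification.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def path_weight(throughput, delay_ms, utilization):
    # Fuzzify each crisp input (breakpoints are assumptions, not the paper's values).
    thr_high = tri(throughput, 0.3, 1.0, 1.7)
    delay_low = tri(delay_ms, -20.0, 0.0, 20.0)
    util_low = tri(utilization, -0.5, 0.0, 0.7)

    # Small rule base (min as AND), then weighted-average defuzzification.
    rules = [
        (min(thr_high, delay_low, util_low), 1.0),  # good path -> weight near 1
        (min(thr_high, 1 - util_low), 0.5),         # busy but fast -> medium weight
        (1 - delay_low, 0.1),                       # high delay -> weight near 0
    ]
    num = sum(strength * out for strength, out in rules)
    den = sum(strength for strength, _ in rules) + 1e-9
    return num / den

# The controller would pick the candidate path with the largest weight, subject to the
# separately checked packet-loss-ratio threshold mentioned in the abstract.
print(path_weight(throughput=0.9, delay_ms=5.0, utilization=0.4))
```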
{"title":"Dynamic Routing Using Fuzzy Logic for URLLC in 5G Networks Based on Software-Defined Networking","authors":"Yan-Jing Wu, Menq-Chyun Chen, Wen-Shyang Hwang, Ming-Hua Cheng","doi":"10.3390/electronics13183694","DOIUrl":"https://doi.org/10.3390/electronics13183694","url":null,"abstract":"Software-defined networking (SDN) is an emerging networking technology with a central point, called the controller, on the control plane. This controller communicates with the application and data planes. In fifth-generation (5G) mobile wireless networks and beyond, specific levels of service quality are defined for different traffic types. Ultra-reliable low-latency communication (URLLC) is one of the key services in 5G. This paper presents a fuzzy logic (FL)-based dynamic routing (FLDR) mechanism with congestion avoidance for URLLC on SDN-based 5G networks. By periodically monitoring the network status and making forwarding decisions on the basis of fuzzy inference rules, the FLDR mechanism not only can reroute in real time, but also can cope with network status uncertainty owing to FL’s fault tolerance capabilities. Three input parameters, normalized throughput, packet delay, and link utilization, were employed as crisp inputs to the FL control system because they had a more accurate correlation with the network performance measures we studied. The crisp output of the FL control system, i.e., path weight, and a predefined threshold of packet loss ratio on a path were applied to make routing decisions. We evaluated the performance of the proposed FLDR mechanism on the Mininet simulator by installing three additional modules, topology discovery, monitoring, and rerouting with FL, on the traditional control plane of SDN. The superiority of the proposed FLDR over the other existing FL-based routing schemes was demonstrated using three performance measures, system throughput, packet loss rate, and packet delay versus traffic load in the system.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"17 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel 10-Watt-Level High-Power Microwave Rectifier with an Inverse Class-F Harmonic Network for Microwave Power Transmission
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183705
Jing Peng, Shouhao Wang, Xiaoning Li, Ke Wang
A novel 10-watt-level high-power microwave rectifier with an inverse Class-F harmonic network for microwave power transmission (MPT) is presented in this paper. The high-power microwave rectifier circuit comprises four sub-rectifier circuits, a 1 × 4 power divider, and a parallel-series dc synthesis network. The simple inverse Class-F harmonic control network serves dual roles: harmonic control and impedance matching. The 1 × 4 power divider increases the RF input power fourfold, reaching 40 dBm (10 W). The parallel-series dc synthesis network enhances the resistance to load variation. The high-power rectifier circuit is simulated, fabricated, and measured. The measurement results demonstrate that the rectifier circuit can reach a maximum RF input power of 10 W at 2.45 GHz, with a maximum rectifier efficiency of 61.1% and an output dc voltage of 23.9 V, indicating large application potential in MPT.
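The arithmetic behind the reported operating point follows from standard unit conversions (assuming the usual definition of RF-to-DC conversion efficiency); the implied DC load below is a derived figure, not an additional measurement from the paper.

```python
# Quick check of the reported numbers: 40 dBm input, 61.1% efficiency, 23.9 V output.
def dbm_to_watts(p_dbm: float) -> float:
    return 10 ** (p_dbm / 10) / 1000.0

p_in = dbm_to_watts(40.0)      # 10.0 W RF input
p_dc = 0.611 * p_in            # ~6.11 W recovered DC power
r_load = 23.9 ** 2 / p_dc      # implied DC load of roughly 93 ohms (derived, not measured)
print(p_in, p_dc, round(r_load, 1))
```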
{"title":"A Novel 10-Watt-Level High-Power Microwave Rectifier with an Inverse Class-F Harmonic Network for Microwave Power Transmission","authors":"Jing Peng, Shouhao Wang, Xiaoning Li, Ke Wang","doi":"10.3390/electronics13183705","DOIUrl":"https://doi.org/10.3390/electronics13183705","url":null,"abstract":"A novel 10-Watt-Level high-power microwave rectifier with an inverse Class-F harmonic network for microwave power transmission (MPT) is presented in this paper. The high-power microwave rectifier circuit comprises four sub-rectifier circuits, a 1 × 4 power divider, and a parallel-series dc synthesis network. The simple inverse Class-F harmonic control network serves dual roles: harmonic control and impedance matching. The 1 × 4 power divider increases the RF input power fourfold, reaching 40 dBm (10 W). The parallel-series dc synthesis network enhances the resistance to load variation. The high-power rectifier circuit is simulated, fabricated, and measured. The measurement results demonstrate that the rectifier circuit can reach a maximum RF input power of 10 W at 2.45 GHz, with a maximum rectifier efficiency of 61.1% and an output dc voltage of 23.9 V, which has a large application potential in MPT.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"23 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Reinforcement Learning Method Based on a Transformer Model for the Flexible Job Shop Scheduling Problem
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183696
Shuai Xu, Yanwu Li, Qiuyang Li
The flexible job shop scheduling problem (FJSSP) is a mathematical optimization problem widely applied in modern manufacturing industries, and solving it effectively can significantly enhance production efficiency. However, due to its NP-hard nature, finding an optimal solution for all scenarios within a reasonable time frame remains a serious challenge. This paper proposes a solution that transforms the FJSSP into a Markov Decision Process (MDP) and employs deep reinforcement learning (DRL) techniques for resolution. First, we represent the state features of the scheduling environment using seven feature vectors and utilize a transformer encoder as a feature extraction module to effectively capture the relationships between state features and enhance representation capability. Second, based on the features of the jobs and machines, we design 16 composite dispatching rules from multiple dimensions, including the job completion rate, processing time, waiting time, and manufacturing resource utilization, to achieve flexible and efficient scheduling decisions. Furthermore, we design an intuitive and dense reward function with the objective of minimizing the total idle time of machines. Finally, to verify the performance and feasibility of the algorithm, we evaluate the proposed policy model on the Brandimarte, Hurink, and Dauzere datasets. Our experimental results demonstrate that the proposed framework consistently outperforms traditional dispatching rules, surpasses metaheuristic methods on larger-scale instances, and exceeds the performance of existing DRL-based scheduling methods across most datasets.
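A hedged sketch of the policy-network shape implied by the abstract is shown below: the seven state feature vectors are encoded with a transformer encoder and mapped to logits over the 16 composite dispatching rules. Dimensions, layer counts, pooling, and the sampling step are assumptions for illustration, and the DRL training loop is omitted.

```python
# Illustrative policy network: transformer encoder over 7 state feature vectors,
# producing logits over 16 composite dispatching rules.
import torch
import torch.nn as nn

class SchedulingPolicy(nn.Module):
    def __init__(self, feat_dim: int = 32, n_rules: int = 16):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(feat_dim, n_rules)

    def forward(self, state_feats: torch.Tensor) -> torch.Tensor:
        # state_feats: (batch, 7, feat_dim) -- one row per state feature vector
        encoded = self.encoder(state_feats)
        pooled = encoded.mean(dim=1)      # aggregate the seven encoded feature vectors
        return self.head(pooled)          # logits over the 16 dispatching rules

policy = SchedulingPolicy()
logits = policy(torch.randn(1, 7, 32))
action = torch.distributions.Categorical(logits=logits).sample()  # rule applied at this step
print(action.item())
```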
{"title":"A Deep Reinforcement Learning Method Based on a Transformer Model for the Flexible Job Shop Scheduling Problem","authors":"Shuai Xu, Yanwu Li, Qiuyang Li","doi":"10.3390/electronics13183696","DOIUrl":"https://doi.org/10.3390/electronics13183696","url":null,"abstract":"The flexible job shop scheduling problem (FJSSP), which can significantly enhance production efficiency, is a mathematical optimization problem widely applied in modern manufacturing industries. However, due to its NP-hard nature, finding an optimal solution for all scenarios within a reasonable time frame faces serious challenges. This paper proposes a solution that transforms the FJSSP into a Markov Decision Process (MDP) and employs deep reinforcement learning (DRL) techniques for resolution. First, we represent the state features of the scheduling environment using seven feature vectors and utilize a transformer encoder as a feature extraction module to effectively capture the relationships between state features and enhance representation capability. Second, based on the features of the jobs and machines, we design 16 composite dispatching rules from multiple dimensions, including the job completion rate, processing time, waiting time, and manufacturing resource utilization, to achieve flexible and efficient scheduling decisions. Furthermore, we project an intuitive and dense reward function with the objective of minimizing the total idle time of machines. Finally, to verify the performance and feasibility of the algorithm, we evaluate the proposed policy model on the Brandimarte, Hurink, and Dauzere datasets. Our experimental results demonstrate that the proposed framework consistently outperforms traditional dispatching rules, surpasses metaheuristic methods on larger-scale instances, and exceeds the performance of existing DRL-based scheduling methods across most datasets.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"2 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Evaluation of UDP-Based Data Transmission with Acknowledgment for Various Network Topologies in IoT Environments
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183697
Bereket Endale Bekele, Krzysztof Tokarz, Nebiyat Yilikal Gebeyehu, Bolesław Pochopień, Dariusz Mrozek
The rapid expansion of Internet-of-Things (IoT) applications necessitates a thorough understanding of network configurations to address unique challenges across various use cases. This paper presents an in-depth analysis of three IoT network topologies: linear chain, structured tree, and dynamic transition networks, each designed to meet the specific requirements of industrial automation, home automation, and environmental monitoring. Key performance metrics, including round-trip time (RTT), server processing time (SPT), and power consumption, are evaluated through both simulation and hardware experiments. Additionally, this study introduces an enhanced UDP protocol featuring an acknowledgment mechanism and a power consumption evaluation, aiming to improve data transmission reliability over the standard UDP protocol. Packet loss is specifically measured in hardware experiments to compare the performance of standard and enhanced UDP protocols. The findings show that the enhanced UDP significantly reduces packet loss compared to the standard UDP, enhancing data delivery reliability across dynamic and structured networks, though it comes at the cost of slightly higher power consumption due to additional processing. For network topology performance, the linear chain topology provides stable processing but higher RTT, making it suitable for applications such as tunnel monitoring; the structured tree topology offers low energy consumption and fast communication, ideal for home automation; and the dynamic transition network, suited for industrial Automated Guided Vehicles (AGVs), encounters challenges with adaptive routing. These insights guide the optimization of communication protocols and network configurations for more efficient and reliable IoT deployments.
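In the spirit of the acknowledgment mechanism described above, the sketch below adds a stop-and-wait ACK layer on top of a plain UDP socket; the packet format, address, timeout, and retry count are illustrative assumptions rather than the protocol specified in the paper.

```python
# Hedged sketch of a stop-and-wait acknowledgment layer over UDP
# (sequence numbering, timeout, retransmission).
import socket

def send_with_ack(payload: bytes, addr=("192.0.2.10", 5005),
                  retries: int = 3, timeout_s: float = 0.5, seq: int = 0) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    packet = seq.to_bytes(2, "big") + payload          # 2-byte sequence-number header
    try:
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                reply, _ = sock.recvfrom(64)
                if reply == b"ACK" + seq.to_bytes(2, "big"):
                    return True                        # delivery confirmed
            except socket.timeout:
                continue                               # lost packet or ACK: retransmit
        return False                                   # counted as a loss by the sender
    finally:
        sock.close()

# The receiver side would echo b"ACK" plus the sequence number for every packet it
# accepts, so packet loss can be measured as unacknowledged transmissions.
```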
{"title":"Performance Evaluation of UDP-Based Data Transmission with Acknowledgment for Various Network Topologies in IoT Environments","authors":"Bereket Endale Bekele, Krzysztof Tokarz, Nebiyat Yilikal Gebeyehu, Bolesław Pochopień, Dariusz Mrozek","doi":"10.3390/electronics13183697","DOIUrl":"https://doi.org/10.3390/electronics13183697","url":null,"abstract":"The rapid expansion of Internet-of-Things (IoT) applications necessitates a thorough understanding of network configurations to address unique challenges across various use cases. This paper presents an in-depth analysis of three IoT network topologies: linear chain, structured tree, and dynamic transition networks, each designed to meet the specific requirements of industrial automation, home automation, and environmental monitoring. Key performance metrics, including round-trip time (RTT), server processing time (SPT), and power consumption, are evaluated through both simulation and hardware experiments. Additionally, this study introduces an enhanced UDP protocol featuring an acknowledgment mechanism and a power consumption evaluation, aiming to improve data transmission reliability over the standard UDP protocol. Packet loss is specifically measured in hardware experiments to compare the performance of standard and enhanced UDP protocols. The findings show that the enhanced UDP significantly reduces packet loss compared to the standard UDP, enhancing data delivery reliability across dynamic and structured networks, though it comes at the cost of slightly higher power consumption due to additional processing. For network topology performance, the linear chain topology provides stable processing but higher RTT, making it suitable for applications such as tunnel monitoring; the structured tree topology offers low energy consumption and fast communication, ideal for home automation; and the dynamic transition network, suited for industrial Automated Guided Vehicles (AGVs), encounters challenges with adaptive routing. These insights guide the optimization of communication protocols and network configurations for more efficient and reliable IoT deployments.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"18 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Light-Weight Self-Supervised Infrared Image Perception Enhancement Method
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183695
Yifan Xiao, Zhilong Zhang, Zhouli Li
Convolutional Neural Networks (CNNs) have achieved remarkable results in the field of infrared image enhancement. However, research on visual perception mechanisms and objective evaluation indicators for enhanced infrared images remains limited. To make the subjective and objective evaluations more consistent, this paper uses a perceptual metric to evaluate the enhancement effect of infrared images. The perceptual metric mimics the early conversion process of the human visual system and uses the normalized Laplacian pyramid distance (NLPD) between the enhanced image and the original scene radiance to evaluate the image enhancement effect. Based on this, this paper designs an infrared image-enhancement algorithm that is more conducive to human visual perception. The algorithm uses a lightweight Fully Convolutional Network (FCN), with NLPD as the similarity measure, and trains the network in a self-supervised manner by minimizing the NLPD between the enhanced image and the original scene radiance to achieve infrared image enhancement. The experimental results show that the infrared image enhancement method in this paper outperforms existing methods in terms of visual perception quality and, owing to the lightweight network, is also currently the fastest enhancement method.
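A simplified stand-in for the NLPD loss is sketched below: Laplacian pyramids of the two images are built, each band is divisively normalized by a local amplitude estimate, and band-wise RMS differences are averaged. The filters and normalization are rough approximations of the published metric (which uses specific normalization filters), so this is illustrative rather than a faithful reimplementation.

```python
# Simplified normalized-Laplacian-pyramid distance between two images.
import cv2
import numpy as np

def normalized_laplacian_pyramid(img: np.ndarray, levels: int = 4):
    img = img.astype(np.float32)
    bands = []
    current = img
    for _ in range(levels):
        h, w = current.shape[:2]
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(w, h))
        lap = current - up
        local_amp = cv2.GaussianBlur(np.abs(lap), (9, 9), 0) + 1e-3  # crude divisive normalization
        bands.append(lap / local_amp)
        current = down
    return bands

def nlpd(img_a: np.ndarray, img_b: np.ndarray) -> float:
    dists = [np.sqrt(np.mean((a - b) ** 2))
             for a, b in zip(normalized_laplacian_pyramid(img_a),
                             normalized_laplacian_pyramid(img_b))]
    return float(np.mean(dists))

# In the self-supervised setup, the enhancement network's output and the estimated scene
# radiance would take the places of img_a and img_b, and nlpd(...) would be minimized.
```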
{"title":"A Light-Weight Self-Supervised Infrared Image Perception Enhancement Method","authors":"Yifan Xiao, Zhilong Zhang, Zhouli Li","doi":"10.3390/electronics13183695","DOIUrl":"https://doi.org/10.3390/electronics13183695","url":null,"abstract":"Convolutional Neural Networks (CNNs) have achieved remarkable results in the field of infrared image enhancement. However, the research on the visual perception mechanism and the objective evaluation indicators for enhanced infrared images is still not in-depth enough. To make the subjective and objective evaluation more consistent, this paper uses a perceptual metric to evaluate the enhancement effect of infrared images. The perceptual metric mimics the early conversion process of the human visual system and uses the normalized Laplacian pyramid distance (NLPD) between the enhanced image and the original scene radiance to evaluate the image enhancement effect. Based on this, this paper designs an infrared image-enhancement algorithm that is more conducive to human visual perception. The algorithm uses a lightweight Fully Convolutional Network (FCN), with NLPD as the similarity measure, and trains the network in a self-supervised manner by minimizing the NLPD between the enhanced image and the original scene radiance to achieve infrared image enhancement. The experimental results show that the infrared image enhancement method in this paper outperforms existing methods in terms of visual perception quality, and due to the use of a lightweight network, it is also the fastest enhancement method currently.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"29 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AUTOSAR-Compatible Level-4 Virtual ECU for the Verification of the Target Binary for Cloud-Native Development
Pub Date: 2024-09-18 | DOI: 10.3390/electronics13183704
Hyeongrae Kim, Junho Kwak, Jeonghun Cho
The rapid evolution of automotive software necessitates efficient and accurate development and verification processes. This study proposes a virtual electronic control unit (vECU) that allows for precise software testing without the need for hardware, thereby reducing developmental costs and enabling cloud-native development. The software was configured and built on a Hyundai Autoever AUTomotive Open System Architecture (AUTOSAR) classic platform, Mobilgene, and Renode was used for high-fidelity emulations. Custom peripherals in C# were implemented for the FlexTimer, system clock generator, and analog-to-digital converter to ensure the proper functionality of the vECU. Renode’s GNU debugger server function facilitates detailed software debugging in a cloud environment, further accelerating the developmental cycle. Additionally, automated testing was implemented using a vECU tester to enable the verification of the vECU. Performance evaluations demonstrated that the vECU’s execution order and timing of tasks and runnable entities closely matched those of the actual ECU. The vECU tester also enabled fast and accurate verification. These findings confirm the potential of the AUTOSAR-compatible Level-4 vECU to replace hardware in development processes. Future efforts will focus on extending capabilities to emulate a broader range of hardware components and complex system integration scenarios, supporting more diverse research and development efforts.
{"title":"AUTOSAR-Compatible Level-4 Virtual ECU for the Verification of the Target Binary for Cloud-Native Development","authors":"Hyeongrae Kim, Junho Kwak, Jeonghun Cho","doi":"10.3390/electronics13183704","DOIUrl":"https://doi.org/10.3390/electronics13183704","url":null,"abstract":"The rapid evolution of automotive software necessitates efficient and accurate development and verification processes. This study proposes a virtual electronic control unit (vECU) that allows for precise software testing without the need for hardware, thereby reducing developmental costs and enabling cloud-native development. The software was configured and built on a Hyundai Autoever AUTomotive Open System Architecture (AUTOSAR) classic platform, Mobilgene, and Renode was used for high-fidelity emulations. Custom peripherals in C# were implemented for the FlexTimer, system clock generator, and analog-to-digital converter to ensure the proper functionality of the vECU. Renode’s GNU debugger server function facilitates detailed software debugging in a cloud environment, further accelerating the developmental cycle. Additionally, automated testing was implemented using a vECU tester to enable the verification of the vECU. Performance evaluations demonstrated that the vECU’s execution order and timing of tasks and runnable entities closely matched those of the actual ECU. The vECU tester also enabled fast and accurate verification. These findings confirm the potential of the AUTOSAR-compatible Level-4 vECU to replace hardware in development processes. Future efforts will focus on extending capabilities to emulate a broader range of hardware components and complex system integration scenarios, supporting more diverse research and development efforts.","PeriodicalId":11646,"journal":{"name":"Electronics","volume":"17 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}