Giseok Jeong, Kookjin Kim, Sukjoon Yoon, Dongkyoo Shin, Jiwon Kang
As the world undergoes rapid digitalization, individuals and objects are becoming more extensively connected through the advancement of Internet networks. This phenomenon has been observed in governmental and military domains as well, accompanied by a consequent rise in cyber threats. In response, the United States (U.S.) has been strongly urging its allies to adhere to the Risk Management Framework (RMF) standard to bolster the security of primary defense systems. An agreement has been signed between the Republic of Korea and the U.S. to collaboratively operate major defense systems and cooperate on cyber threats. However, the methodologies and tools required for RMF implementation have not yet been fully provided to several allied countries, including the Republic of Korea, causing difficulties in its implementation. In this study, the U.S. RMF process was applied to a specific system of the Republic of Korea Ministry of National Defense, and the outcomes were analyzed. Emphasis was placed on the initial two stages of the RMF, ‘system categorization’ and ‘security control selection’, presenting actual application cases. Additionally, a detailed description of the methodology used by the Republic of Korea Ministry of National Defense for RMF implementation in defense systems is provided, introducing a keyword-based overlay application methodology. An introduction to the K-RMF Baseline, Overlay, and Tailoring Tool is also given. The methodologies and tools presented are expected to serve as valuable references for allied countries, including the U.S., in effectively implementing the RMF. It is anticipated that the results of this research will contribute to enhancing cyber security and threat management among allies.
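The RMF’s first step, system categorization, follows the high-water-mark rule of NIST FIPS 199: the overall security category is the highest impact level assigned to confidentiality, integrity, or availability. As a minimal illustrative sketch (not the paper’s K-RMF tool, whose internals are not given here), the rule can be expressed as:

```python
# Illustrative FIPS 199-style categorization sketch; the example system
# below is hypothetical, not one from the study.
IMPACT_ORDER = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def categorize_system(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall security category as the high-water mark."""
    levels = (confidentiality, integrity, availability)
    for level in levels:
        if level not in IMPACT_ORDER:
            raise ValueError(f"unknown impact level: {level}")
    return max(levels, key=lambda lv: IMPACT_ORDER[lv])

# A system with moderate confidentiality needs but a high availability
# requirement is categorized HIGH overall.
print(categorize_system("MODERATE", "LOW", "HIGH"))  # -> HIGH
```

The selected category then drives which baseline of security controls is chosen in the second RMF stage, before overlays and tailoring adjust it.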
Exploring Effective Approaches to the Risk Management Framework (RMF) in the Republic of Korea: A Study. Information (Switzerland). DOI: 10.3390/info14100561. Published 2023-10-12.
Owing to the overlap of objects and undertraining caused by scarce samples, road dense object detection suffers from poor object identification performance and an inability to recognize edge objects. To address this, a transfer-learning-based YOLOv3 approach for identifying dense objects on the road is proposed. Firstly, the Darknet-53 network structure is adopted to obtain a pre-trained YOLOv3 model. Then, transfer training is introduced for the output layer using a specialized dataset of 2000 images containing vehicles. In the proposed model, a random function is used to initialize and optimize the weights of the transfer-training model, which is designed separately from the pre-trained YOLOv3. An object detection classifier replaces the fully connected layer, further improving the detection effect. The reduced size of the network model also shortens training and detection time, so the method can be better applied to actual scenarios. The experimental results demonstrate that the object detection accuracy of the presented approach is 87.75% on the Pascal VOC 2007 dataset, exceeding the traditional YOLOv3 and YOLOv5 by 4% and 0.59%, respectively. Additionally, tests were carried out on UA-DETRAC, a public road vehicle detection dataset. The object detection accuracy of the presented approach reaches 79.23% on images, 4.13% better than the traditional YOLOv3 and 1.36% better than the relatively new YOLOv5. Moreover, the detection speed of the proposed method reaches 31.2 FPS on images, 7.6 FPS faster than the traditional YOLOv3 and 1.5 FPS faster than the newer YOLOv7. The proposed YOLOv3 requires 67.36 billion floating-point operations (BFLOPs) when detecting video, clearly fewer than the traditional YOLOv3 and the newer YOLOv5.
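Dense, overlapping boxes are exactly what YOLO-style post-processing must resolve. As an illustrative sketch only (the authors’ code is not given; the boxes and function names here are ours), intersection-over-union and greedy non-maximum suppression look like this:

```python
# Illustrative sketch of YOLO-style post-processing, not the authors'
# implementation. Boxes are (x1, y1, x2, y2, score) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_threshold=0.5):
    """Keep the highest-scoring box, drop heavily overlapping rivals."""
    remaining = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [b for b in remaining if iou(best, b) < iou_threshold]
    return kept

dense = [(0, 0, 10, 10, 0.9), (1, 1, 10, 10, 0.8), (20, 20, 30, 30, 0.7)]
print(len(nms(dense)))  # -> 2: the overlapping pair collapses to one box
```

Dense scenes stress this step because true neighboring objects can overlap almost as much as duplicate detections do, which is one reason edge objects are easily suppressed.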
Transfer Learning-Based YOLOv3 Model for Road Dense Object Detection. Chunhua Zhu, Jiarui Liang, Fei Zhou. Information (Switzerland). DOI: 10.3390/info14100560. Published 2023-10-12.
The growth of structured, semi-structured, and unstructured data produced by new applications is a result of the development and expansion of social networks, the Internet of Things, web technology, mobile devices, and other technologies. As traditional databases became less suitable for managing the rapidly growing quantity and variety of data, a new class of database management systems, named NoSQL, was required to satisfy the new requirements. Although NoSQL databases are generally schema-less, significant research has been conducted on their design. The literature review presented in this paper supports the need for modeling techniques that describe how to structure data in NoSQL databases. Key-value is one of the NoSQL families that has received comparatively little attention, especially in terms of its design methodology; most studies have focused on the other families, such as column-oriented and document-oriented. This paper presents a design approach named KVMod (key-value modeling) specific to key-value databases. The purpose is to provide the scientific community and engineers with a methodology for the design of key-value stores using maximum automation and therefore minimum human intervention, which means a minimum number of errors. A software tool called KVDesign has been implemented to automate the proposed methodology and, thus, the most time-consuming database modeling tasks. The complexity of the proposed algorithms is also discussed to assess their efficiency.
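The abstract does not spell out KVMod’s transformation rules, but the kind of mapping such a methodology must automate can be sketched as a toy example: flattening a structured entity into composite ‘entity:id:attribute’ keys (entity names, attributes, and the key scheme here are all hypothetical):

```python
# Hypothetical sketch of an entity-to-key-value mapping of the kind a
# key-value design methodology such as KVMod must automate; the actual
# KVMod rules are not given in the abstract.

def to_kv(entity: str, entity_id: str, attributes: dict) -> dict:
    """Flatten one entity instance into 'entity:id:attribute' keys."""
    return {f"{entity}:{entity_id}:{attr}": value
            for attr, value in attributes.items()}

store = {}
store.update(to_kv("user", "42", {"name": "Alice", "city": "Rabat"}))
store.update(to_kv("user", "43", {"name": "Bob", "city": "Casablanca"}))

print(store["user:42:name"])  # -> Alice
# Access patterns drive the design: a query by city would need its own
# secondary-index entries (e.g. "city:Rabat" -> {"user:42"}), which is
# precisely the kind of decision worth automating.
```

The value of automating such mappings is that every query the application needs must be answerable by a key lookup, so the design space grows quickly with the number of access patterns.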
KVMod—A Novel Approach to Design Key-Value NoSQL Databases. Ahmed Dourhri, Mohamed Hanine, Hassan Ouahmane. Information (Switzerland). DOI: 10.3390/info14100563. Published 2023-10-12.
Abderahman Rejeb, Karim Rejeb, Steve Simske, John G. Keogh
Blockchain technology has emerged as a tool with the potential to enhance transparency, trust, security, and decentralization in supply chain management (SCM). This study presents a comprehensive review of the interplay between blockchain technology and SCM. By analyzing an extensive dataset of 943 articles, our exploration utilizes the Latent Dirichlet Allocation (LDA) method to delve deep into the thematic structure of the discourse. This investigation revealed ten central topics ranging from blockchain’s transformative role in supply chain finance and e-commerce operations to its application in specialized areas, such as the halal food supply chain and humanitarian contexts. Particularly pronounced were discussions on the challenges and transformations of blockchain integration in supply chains and its impact on pricing strategies and decision-making. Visualization tools, including PyLDAvis, further illuminated the interconnectedness of these themes, highlighting the intertwined nature of blockchain adoption challenges with aspects such as traceability and pricing. Despite the breadth of topics covered, the paper acknowledges its limitations due to the fast-evolving nature of blockchain developments during and after our analysis period. Ultimately, this review provides a holistic academic snapshot, emphasizing both well-developed and nascent research areas and guiding future research in the evolving domain of blockchain in SCM.
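The review’s topic extraction relies on LDA. The authors presumably used a standard library; purely to illustrate the mechanics, the following toy two-topic collapsed Gibbs sampler runs on an invented four-document corpus (corpus, hyperparameters, and topic count are all made up for the sketch):

```python
# Toy collapsed Gibbs sampler for LDA, for illustration only; a real
# analysis of 943 articles would use a library such as gensim or sklearn.
import random

docs = [["blockchain", "supply", "chain", "traceability"],
        ["blockchain", "pricing", "finance", "supply"],
        ["gibbs", "topic", "model", "inference"],
        ["topic", "model", "blockchain", "inference"]]
K, alpha, beta = 2, 0.1, 0.01
vocab = sorted({w for d in docs for w in d})

random.seed(0)
z = [[random.randrange(K) for _ in d] for d in docs]  # topic assignments
ndk = [[0] * K for _ in docs]                    # doc-topic counts
nkw = [{w: 0 for w in vocab} for _ in range(K)]  # topic-word counts
nk = [0] * K                                     # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for _ in range(200):                             # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]                          # remove current assignment
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) /
                       (nk[k] + beta * len(vocab)) for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t                          # resample and restore counts
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for k in range(K):
    top = sorted(vocab, key=lambda w: -nkw[k][w])[:3]
    print(f"topic {k}: {top}")
```

Each sweep reassigns every word to a topic in proportion to how often that topic appears in the word’s document and how often it generates that word corpus-wide, which is the intuition behind the ten themes the review surfaces.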
Exploring Blockchain Research in Supply Chain Management: A Latent Dirichlet Allocation-Driven Systematic Review. Information (Switzerland). DOI: 10.3390/info14100557. Published 2023-10-12.
Stroke remains a predominant cause of mortality and disability worldwide. Diagnosing stroke from biomechanical time-series data coupled with Artificial Intelligence (AI) poses a formidable challenge, especially with constrained participant numbers. The challenge escalates when dealing with small datasets, a common scenario in preliminary medical research. While recent advances have introduced few-shot learning algorithms adept at handling sparse data, this paper pioneers a distinctive, visualization-centric methodology for navigating the small-data challenge in diagnosing stroke survivors from gait-analysis-derived biomechanical data. Employing Siamese neural networks (SNNs), our method transforms a biomechanical time series into visually intuitive images, facilitating a unique analytical lens. The kinematic data comprise a spectrum of gait metrics, including movements of the ankle, knee, hip, and center of mass in three dimensions for both paretic and non-paretic legs. Following the visual transformation, the SNN serves as a potent feature extractor, mapping the data into a high-dimensional feature space conducive to classification. The extracted features are subsequently fed into various machine learning (ML) models, such as support vector machines (SVMs), Random Forest (RF), or neural networks (NNs), for classification. In pursuit of heightened interpretability, a cornerstone in medical AI applications, we employ Grad-CAM (Gradient-weighted Class Activation Mapping) to visually highlight the critical regions influencing the model’s decision. Our methodology, though exploratory, showcases a promising avenue for leveraging visualized biomechanical data in stroke diagnosis, achieving a perfect classification rate on our preliminary dataset. Visual inspection of the generated images shows a clear separation of classes (100%), underscoring the potential of this visualization-driven approach in the realm of small data.
This proof-of-concept study accentuates the novelty of visual data transformation in enhancing both interpretability and performance in stroke diagnosis using limited data, laying a robust foundation for future research in larger-scale evaluations.
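The paper does not specify its exact time-series-to-image transformation; as one hedged illustration of the general idea, a recurrence plot turns a 1-D gait signal into a 2-D binary image that a CNN-style feature extractor (such as an SNN branch) can consume:

```python
# Illustrative recurrence-plot encoding of a 1-D signal as an image.
# This is one common time-series-to-image technique, not necessarily
# the transformation used in the paper; the toy signal is invented.
import math

def recurrence_plot(series, eps=0.25):
    """Binary matrix R[i][j] = 1 when samples i and j are within eps."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

# Toy "gait" signal: one sinusoidal cycle sampled at 8 points.
signal = [math.sin(2 * math.pi * t / 8) for t in range(8)]
image = recurrence_plot(signal)
for row in image:
    print("".join("#" if v else "." for v in row))
# Periodicity in the signal appears as diagonal banding in the image,
# giving a 2-D texture that image models can classify.
```

Encodings of this family (recurrence plots, Gramian angular fields, Markov transition fields) are popular precisely because they let image-pretrained networks be reused on small time-series datasets.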
Innovative Visualization Approach for Biomechanical Time Series in Stroke Diagnosis Using Explainable Machine Learning Methods: A Proof-of-Concept Study. Kyriakos Apostolidis, Christos Kokkotis, Evangelos Karakasis, Evangeli Karampina, Serafeim Moustakidis, Dimitrios Menychtas, Georgios Giarmatzis, Dimitrios Tsiptsios, Konstantinos Vadikolias, Nikolaos Aggelousis. Information (Switzerland). DOI: 10.3390/info14100559. Published 2023-10-12.
Inspired by the migration and reproduction of species in nature as they search for suitable habitats, this paper proposes a new swarm intelligence algorithm called the Migration and Reproduction Algorithm (MARA). The algorithm transforms the behavior of an organism seeking a suitable habitat into a mathematical model that can solve optimization problems. MARA shares common features with other optimization methods such as particle swarm optimization (PSO) and the fireworks algorithm (FWA), which means MARA can also solve the problems those methods are used for, namely, high-dimensional optimization problems. MARA also has some unique features among biology-based optimization methods. In this paper, we articulate the structure of MARA by correlating it with natural biogeography; we then demonstrate its performance on a set of 12 benchmark functions. Finally, we apply it to a practical power-dispatching problem in a multi-microgrid system, demonstrating its value in practical applications.
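Since the abstract gives no update equations for MARA, the following is only a hypothetical sketch of the migrate-then-reproduce pattern such population-based algorithms share, minimizing the sphere benchmark function (population size, step sizes, and the migration rule are all invented):

```python
# Hypothetical population sketch of a migration/reproduction-style
# optimizer; NOT the actual MARA update rules, which the abstract omits.
import random

def sphere(x):
    """Classic benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

random.seed(1)
dim, pop_size, generations = 5, 20, 200
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for _ in range(generations):
    pop.sort(key=sphere)
    best = pop[0]
    new_pop = pop[: pop_size // 2]        # fitter half survives
    for parent in new_pop[:]:             # "reproduction" near survivors
        child = [v + random.gauss(0, 0.1) for v in parent]
        new_pop.append(child)
    for ind in random.sample(new_pop, 5): # "migration" toward the best habitat
        for j in range(dim):
            ind[j] += 0.5 * (best[j] - ind[j])
    pop = new_pop

print(round(sphere(min(pop, key=sphere)), 4))  # should approach 0
```

The benchmark-function protocol mirrors the paper’s evaluation setup: run the optimizer on standard test functions, then on the power-dispatch objective.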
A New Migration and Reproduction Intelligence Algorithm: Case Study in Cloud-Based Microgrid. Renwu Yan, Yunzhang Liu, Ning Yu. Information (Switzerland). DOI: 10.3390/info14100562. Published 2023-10-12.
Autonomous vehicles (AVs) have emerged as a promising technology for enhancing road safety and mobility. However, designing AVs involves various critical aspects, such as software and system requirements, that must be carefully addressed. This paper investigates safety-aware approaches for AVs, focusing on the software and system requirements aspect. It reviews the existing methods based on software and system design and analyzes them according to their algorithms, parameters, evaluation criteria, and challenges. This paper also examines the state-of-the-art artificial intelligence-based techniques for AVs, as AI has been a crucial element in advancing this technology. This paper reveals that 63% of the reviewed studies use various AI methods, with deep learning being the most prevalent (34%). The article also identifies the current gaps and future directions for AV safety research. This paper can be a valuable reference for researchers and practitioners on AV safety.
Artificial Intelligence and Software Modeling Approaches in Autonomous Vehicles for Safety Management: A Systematic Review. Shirin Abbasi, Amir Masoud Rahmani. Information (Switzerland). DOI: 10.3390/info14100555. Published 2023-10-11.
Asier del Rio, Oscar Barambones, Jokin Uralde, Eneko Artetxe, Isidro Calvo
Photovoltaic panels present an economical and environmentally friendly renewable energy solution, with advantages such as emission-free operation, low maintenance, and noiseless performance. However, their nonlinear power-voltage curves necessitate efficient operation at the Maximum Power Point (MPP). Various techniques, including Hill Climb algorithms, are commonly employed in the industry due to their simplicity and ease of implementation. Nonetheless, intelligent approaches like Particle Swarm Optimization (PSO) offer enhanced tracking accuracy with reduced oscillations. The PSO algorithm, inspired by collective intelligence and animal swarm behavior, stands out as a promising solution due to its efficiency and ease of integration: it relies only on the standard current and voltage sensors commonly found in these systems, unlike most intelligent techniques, which require additional modeling or sensors that significantly increase the cost of the installation. The primary contribution of this study lies in the implementation and validation of an advanced control system based on the PSO algorithm for real-time Maximum Power Point Tracking (MPPT) in a commercial photovoltaic system, assessing its viability by testing it against the industry-standard controller, Perturbation and Observation (P&O), to highlight its advantages and limitations. Through rigorous experiments and comparisons with other methods, the proposed PSO-based control system’s performance and feasibility have been thoroughly evaluated. A sensitivity analysis of the algorithm’s search-dynamics parameters has been conducted to identify the most effective combination for optimal real-time tracking.
Notably, experimental comparisons with the P&O algorithm have revealed the PSO algorithm’s remarkable ability to significantly reduce settling time up to threefold under similar conditions, resulting in a substantial decrease in energy losses during transient states from 31.96% with P&O to 9.72% with PSO.
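The PSO-based MPPT idea can be sketched on a toy power-voltage curve (this is not the authors’ controller or plant model; the curve shape, swarm size, and gains are all invented for illustration):

```python
# Toy PSO-based MPPT sketch: particles search the voltage axis for the
# peak of a made-up P(V) curve. Not the authors' controller or PV model.
import random

def pv_power(v):
    """Hypothetical P(V): rises with V, collapses near open-circuit (~40 V)."""
    return max(0.0, v * (1.0 - (v / 40.0) ** 6))

random.seed(2)
n, iters, w, c1, c2 = 10, 60, 0.6, 1.5, 1.5
pos = [random.uniform(0.0, 40.0) for _ in range(n)]  # candidate voltages
vel = [0.0] * n
pbest = pos[:]                      # per-particle best voltage
gbest = max(pos, key=pv_power)      # swarm-wide best voltage

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] = min(40.0, max(0.0, pos[i] + vel[i]))
        if pv_power(pos[i]) > pv_power(pbest[i]):
            pbest[i] = pos[i]
    gbest = max(pbest, key=pv_power)

print(round(gbest, 1))  # converges near the curve's true MPP (about 29 V)
```

In a real controller each candidate voltage would be applied to the converter and the measured power fed back as the fitness value, which is why only the current and voltage sensors already present in the system are needed.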
{"title":"Particle Swarm Optimization-Based Control for Maximum Power Point Tracking Implemented in a Real Time Photovoltaic System","authors":"Asier del Rio, Oscar Barambones, Jokin Uralde, Eneko Artetxe, Isidro Calvo","doi":"10.3390/info14100556","DOIUrl":"https://doi.org/10.3390/info14100556","url":null,"abstract":"Photovoltaic panels present an economical and environmentally friendly renewable energy solution, with advantages such as emission-free operation, low maintenance, and noiseless performance. However, their nonlinear power-voltage curves necessitate efficient operation at the Maximum Power Point (MPP). Various techniques, including Hill Climb algorithms, are commonly employed in the industry due to their simplicity and ease of implementation. Nonetheless, intelligent approaches like Particle Swarm Optimization (PSO) offer enhanced accuracy in tracking efficiency with reduced oscillations. The PSO algorithm, inspired by collective intelligence and animal swarm behavior, stands out as a promising solution due to its efficiency and ease of integration, relying only on standard current and voltage sensors commonly found in these systems, not like most intelligent techniques, which require additional modeling or sensoring, significantly increasing the cost of the installation. The primary contribution of this study lies in the implementation and validation of an advanced control system based on the PSO algorithm for real-time Maximum Power Point Tracking (MPPT) in a commercial photovoltaic system to assess its viability by testing it against the industry-standard controller, Perturbation and Observation (P&O), to highlight its advantages and limitations. Through rigorous experiments and comparisons with other methods, the proposed PSO-based control system’s performance and feasibility have been thoroughly evaluated. 
A sensitivity analysis of the algorithm’s search dynamics parameters has been conducted to identify the most effective combination for optimal real-time tracking. Notably, experimental comparisons with the P&O algorithm show that PSO reduces settling time by up to a factor of three under similar conditions, cutting energy losses during transient states from 31.96% with P&O to 9.72% with PSO.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136214057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
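The abstract above describes PSO searching a photovoltaic system's operating point using only power measurements. A minimal sketch of that idea follows; it is not the authors' implementation, and the `pv_power` curve (a synthetic concave power-vs-duty-cycle function with its MPP at d = 0.6) and all PSO parameters are invented stand-ins for a real converter and measurement loop.

```python
# Sketch of PSO-based MPPT: particles are candidate converter duty cycles,
# and fitness is the (here simulated) PV power measured at each duty cycle.
import random

def pv_power(duty):
    # Stand-in for a real V*I measurement: concave P-D curve, MPP at d = 0.6.
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def pso_mppt(n_particles=5, iters=30, w=0.4, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(0.1, 0.9) for _ in range(n_particles)]  # duty cycles
    vel = [0.0] * n_particles
    pbest = list(pos)                          # per-particle best positions
    pbest_val = [pv_power(p) for p in pos]     # and their measured powers
    g = pbest_val.index(max(pbest_val))
    gbest, gbest_val = pbest[g], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(0.9, max(0.1, pos[i] + vel[i]))  # clamp duty cycle
            p = pv_power(pos[i])   # "measure" power at the new duty cycle
            if p > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], p
                if p > gbest_val:
                    gbest, gbest_val = pos[i], p
    return gbest, gbest_val
```

On a real plant the `pv_power` call would be replaced by setting the converter duty cycle and reading the current and voltage sensors, which is the only instrumentation the abstract says the method needs.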
Continuous-variable quantum key distribution (CV-QKD) shows potential for the rapid development of an information-theoretically secure global communication network; however, the complexities of CV-QKD implementation remain a restrictive factor. Machine learning (ML) has recently shown promise in alleviating these complexities. ML has been applied to almost every stage of CV-QKD protocols, including ML-assisted phase error estimation, excess noise estimation, state discrimination, parameter estimation and optimization, key sifting, information reconciliation, and key rate estimation. This survey provides a comprehensive analysis of the current literature on ML-assisted CV-QKD. In addition, the survey compares the ML algorithms assisting CV-QKD with the traditional algorithms they aim to augment, as well as providing recommendations for future directions for ML-assisted CV-QKD research.
{"title":"A Survey of Machine Learning Assisted Continuous-Variable Quantum Key Distribution","authors":"Nathan K. Long, Robert Malaney, Kenneth J. Grant","doi":"10.3390/info14100553","DOIUrl":"https://doi.org/10.3390/info14100553","url":null,"abstract":"Continuous-variable quantum key distribution (CV-QKD) shows potential for the rapid development of an information-theoretically secure global communication network; however, the complexities of CV-QKD implementation remain a restrictive factor. Machine learning (ML) has recently shown promise in alleviating these complexities. ML has been applied to almost every stage of CV-QKD protocols, including ML-assisted phase error estimation, excess noise estimation, state discrimination, parameter estimation and optimization, key sifting, information reconciliation, and key rate estimation. This survey provides a comprehensive analysis of the current literature on ML-assisted CV-QKD. In addition, the survey compares the ML algorithms assisting CV-QKD with the traditional algorithms they aim to augment, as well as providing recommendations for future directions for ML-assisted CV-QKD research.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136294653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fabio Banchelli, Marta Garcia-Gasulla, Filippo Mantovani
Top-Down models are defined by hardware architects to provide information on the utilization of different hardware components. The target is to isolate the users from the complexity of the hardware architecture while giving them insight into how efficiently the code uses the resources. In this paper, we explore the applicability of four Top-Down models defined for different hardware architectures powering state-of-the-art HPC clusters (Intel Skylake, Fujitsu A64FX, IBM Power9, and Huawei Kunpeng 920) and propose a model for AMD Zen 2. We study a parallel CFD code used for scientific production to compare these five Top-Down models. We evaluate the level of insight achieved, the clarity of the information, the ease of use, and the conclusions each allows us to reach. Our study indicates that the Top-Down model makes it very difficult for a performance analyst to spot inefficiencies in complex scientific codes without delving deep into micro-architecture details.
{"title":"Top-Down Models across CPU Architectures: Applicability and Comparison in a High-Performance Computing Environment","authors":"Fabio Banchelli, Marta Garcia-Gasulla, Filippo Mantovani","doi":"10.3390/info14100554","DOIUrl":"https://doi.org/10.3390/info14100554","url":null,"abstract":"Top-Down models are defined by hardware architects to provide information on the utilization of different hardware components. The target is to isolate the users from the complexity of the hardware architecture while giving them insight into how efficiently the code uses the resources. In this paper, we explore the applicability of four Top-Down models defined for different hardware architectures powering state-of-the-art HPC clusters (Intel Skylake, Fujitsu A64FX, IBM Power9, and Huawei Kunpeng 920) and propose a model for AMD Zen 2. We study a parallel CFD code used for scientific production to compare these five Top-Down models. We evaluate the level of insight achieved, the clarity of the information, the ease of use, and the conclusions each allows us to reach. Our study indicates that the Top-Down model makes it very difficult for a performance analyst to spot inefficiencies in complex scientific codes without delving deep into micro-architecture details.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136295353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
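To make concrete what a Top-Down model computes, the sketch below evaluates the widely documented Intel level-1 Top-Down breakdown (frontend bound, bad speculation, retiring, backend bound) from raw pipeline-slot counters on a 4-wide core. The counter values used here are invented for illustration, not measurements from the paper, and the exact counter names and formulas vary across the architectures the paper compares.

```python
# Level-1 Top-Down breakdown (Skylake-style 4-wide issue), computed from
# raw hardware counter values. Each metric is a fraction of total issue
# slots (width * clocks); the four categories sum to 1 by construction.
def top_down_level1(clocks, idq_uops_not_delivered, uops_issued,
                    uops_retired_slots, recovery_cycles, width=4):
    slots = width * clocks
    frontend = idq_uops_not_delivered / slots           # slots starved by fetch/decode
    bad_spec = (uops_issued - uops_retired_slots
                + width * recovery_cycles) / slots      # slots wasted on wrong paths
    retiring = uops_retired_slots / slots               # slots doing useful work
    backend = 1.0 - frontend - bad_spec - retiring      # remainder: backend stalls
    return {"frontend_bound": frontend, "bad_speculation": bad_spec,
            "retiring": retiring, "backend_bound": backend}

# Illustrative (made-up) counter values for a 1M-cycle measurement window.
breakdown = top_down_level1(clocks=1_000_000,
                            idq_uops_not_delivered=400_000,
                            uops_issued=2_600_000,
                            uops_retired_slots=2_400_000,
                            recovery_cycles=25_000)
```

With these numbers the code reports the core as 60% retiring and 22.5% backend bound; the paper's point is that such level-1 fractions alone rarely tell an analyst *why* a complex scientific code is backend bound without drilling into deeper levels of the hierarchy.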