Raúl Pazmiño, Wilson Pavón, Matthew Armstrong, Silvio Simani
This article presents an in-depth analysis of three advanced strategies for tuning fractional-order PID (FOPID) controllers for a nonlinear system of interconnected tanks, simulated using MATLAB. The study evaluates the performance characteristics of system responses controlled by FOPID controllers tuned with three heuristic algorithms: Ant Colony Optimization (ACO), the Grey Wolf Optimizer (GWO), and the Flower Pollination Algorithm (FPA). Each algorithm minimizes its respective cost function built from various performance metrics. The nonlinear model was linearized around an equilibrium point using a Taylor series expansion and Laplace transforms to facilitate control design. The FPA performed best, achieving the lowest Integral Square Error (ISE) criterion value (297.83) and the fastest convergence of the controller constants and fractional orders. This comprehensive evaluation underscores the importance of selecting the appropriate tuning strategy and performance index, demonstrating that the FPA provides the most efficient and robust tuning of FOPID controllers for nonlinear systems. The results highlight the efficacy of meta-heuristic algorithms in optimizing complex control systems, providing valuable insights for future research and practical applications and thereby contributing to the advancement of control systems engineering.
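As a toy illustration of this kind of tuning loop, the sketch below uses an FPA-flavoured population search (a "global" phase that moves a solution toward the current best, and a "local" phase that mixes two random solutions) to minimize the ISE of a step response. A classical PI controller on a first-order plant stands in for the FOPID controller and the tank model; every number and bound here is illustrative, not taken from the paper.

```python
import random

def step_response_ise(kp, ki, dt=0.01, t_end=5.0):
    """ISE of a unit-step response for a PI loop around a toy
    first-order plant dy/dt = -y + u (a stand-in for the linearized
    tank model; none of these numbers come from the paper)."""
    y = integ = ise = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        u = kp * err + ki * integ
        y += (-y + u) * dt        # Euler step of the plant
        ise += err * err * dt     # Integral Square Error criterion
    return ise

def fpa_like_search(n_pop=10, iters=40, seed=1):
    """FPA-flavoured search over (Kp, Ki): the 'global pollination'
    phase pulls a solution toward the best one found so far, the
    'local pollination' phase mixes two random population members."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 10.0) for _ in range(2)] for _ in range(n_pop)]
    best = min(pop, key=lambda g: step_response_ise(*g))
    for _ in range(iters):
        for i, g in enumerate(pop):
            if rng.random() < 0.8:    # global phase
                cand = [x + rng.gauss(0.0, 1.0) * (b - x) for x, b in zip(g, best)]
            else:                     # local phase
                a, c = rng.choice(pop), rng.choice(pop)
                cand = [x + rng.random() * (p - q) for x, p, q in zip(g, a, c)]
            cand = [min(10.0, max(0.0, x)) for x in cand]  # keep in bounds
            if step_response_ise(*cand) < step_response_ise(*g):
                pop[i] = cand
        best = min(pop, key=lambda g: step_response_ise(*g))
    return best, step_response_ise(*best)

(kp, ki), ise = fpa_like_search()
```

A full FOPID version would search five parameters (Kp, Ki, Kd, λ, μ) and require a fractional-order simulation; the structure of the optimizer is the same.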
"Performance Evaluation of Fractional Proportional–Integral–Derivative Controllers Tuned by Heuristic Algorithms for Nonlinear Interconnected Tanks," Raúl Pazmiño, Wilson Pavón, Matthew Armstrong, Silvio Simani. Algorithms, published 2024-07-10, doi:10.3390/a17070306.
Transportation asset management has historically overlooked equity considerations. However, recently, there has been a significant increase in concerns about this issue, leading to a range of research and practices aimed at achieving more equitable outcomes. Yet, addressing equity is challenging and time-consuming, given its complexity and multifaceted nature. Several factors can significantly impact the outcome of an analysis, including the definition of equity, the evaluation and quantification of its impacts, and the community classification. As a result, there can be a wide range of interpretations of what constitutes equity. Therefore, there is no single correct or incorrect approach for equity evaluation, and different perspectives, impacts, and analysis methods could be considered for this purpose. This study reviews previous research on how transportation agencies are integrating equity into transportation asset management, particularly pavement management systems. The primary objective is to investigate important equity factors for pavement management and propose a prototype framework that integrates economic, environmental, and social equity considerations into the decision-making process for pavement maintenance, rehabilitation, and reconstruction projects. The proposed framework consists of two main steps: (1) defining objectives based on the three equity dimensions, and (2) analyzing key factors for identifying underserved areas through a case study approach. The case study analyzed pavement condition and sociodemographic data for California’s Bay Area. Statistical analysis and a machine learning method revealed that areas with higher poverty rates and worse air quality tend to have poorer pavement conditions, highlighting the need to consider these factors when defining underserved areas in the Bay Area and promoting equity in pavement management decision-making.
The proposed framework incorporates an optimization problem to simultaneously minimize disparities in pavement conditions between underserved and other areas, reduce greenhouse gas emissions from construction and traffic disruptions, and maximize overall network pavement condition subject to budget constraints. By incorporating all three equity aspects into a quantitative decision-support framework with specific objectives, this study proposes a novel approach for transportation agencies to promote sustainable and equitable asset management practices.
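A budget-constrained, multi-objective selection of this kind can be sketched as a weighted-sum project choice. All project data, weights, and the brute-force search below are hypothetical placeholders for the paper's actual optimization problem, shown only to make the structure of the trade-off concrete.

```python
from itertools import combinations

# Hypothetical candidate pavement projects:
# (cost, condition gain, GHG emissions, 1 if in an underserved area else 0).
projects = [
    (40, 12, 5, 1), (30, 10, 6, 0), (25, 7, 2, 1),
    (50, 15, 9, 0), (20, 6, 3, 1), (35, 9, 4, 0),
]
BUDGET = 100

def score(subset, w_equity=2.0, w_ghg=0.5):
    """Weighted-sum stand-in for the multi-objective programme:
    reward network condition gain, extra-reward gains in underserved
    areas (narrowing the disparity), and penalise emissions."""
    gain = sum(p[1] for p in subset)
    ghg = sum(p[2] for p in subset)
    equity_gain = sum(p[1] for p in subset if p[3] == 1)
    return gain + w_equity * equity_gain - w_ghg * ghg

# Brute force over all project subsets that respect the budget.
best = max(
    (c for r in range(len(projects) + 1)
     for c in combinations(projects, r)
     if sum(p[0] for p in c) <= BUDGET),
    key=score,
)
```

A real implementation would replace the brute-force loop with an integer-programming or metaheuristic solver, but the objective structure (condition, equity, emissions, budget) is the same.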
"Equity in Transportation Asset Management: A Proposed Framework," Sara Arezoumand, Omar Smadi. Algorithms, published 2024-07-09, doi:10.3390/a17070305.
A new two-step interior point method for solving linear programs is presented. The technique uses a convex combination of the auxiliary and central points to compute the search direction. To update the central point, we find the best value for the step size such that the feasibility condition holds. Since we use information from the previous iteration to find the search direction, the inverse of the system is evaluated only once per iteration. A detailed empirical evaluation is performed on NETLIB instances, comparing two variants of the approach to the primal-dual log barrier interior point method. Results show that the proposed method is faster: it reduces the number of iterations and the CPU time by 27% and 18%, respectively, on the NETLIB instances tested, compared to the classical interior point algorithm.
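The two ingredients named above — a convex combination of two directions, and a step size chosen so the iterate stays strictly feasible — can be sketched as follows. The mixing weight gamma and the example vectors are illustrative; a real implementation would obtain the auxiliary and central directions from the Newton systems of the method.

```python
def max_feasible_step(x, dx, tau=0.99):
    """Largest alpha in (0, 1] with x + alpha*dx > 0, damped by tau,
    the standard way interior-point methods keep iterates strictly
    inside the positive orthant."""
    alpha = 1.0
    for xi, di in zip(x, dx):
        if di < 0:
            alpha = min(alpha, -xi / di)
    return tau * alpha

def combined_direction(d_aux, d_central, gamma=0.5):
    """Convex combination of the auxiliary and central directions,
    the core idea of the two-step method (gamma is illustrative)."""
    return [gamma * a + (1 - gamma) * c for a, c in zip(d_aux, d_central)]

# Illustrative iterate and directions (not from any NETLIB instance).
x = [1.0, 2.0, 0.5]
d = combined_direction([-0.5, 1.0, -1.0], [0.2, -0.4, -0.1])
alpha = max_feasible_step(x, d)
new_x = [xi + alpha * di for xi, di in zip(x, d)]
```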
"On Implementing a Two-Step Interior Point Method for Solving Linear Programs," Sajad Fathi Hafshejani, D. Gaur, R. Benkoczi. Algorithms, published 2024-07-08, doi:10.3390/a17070303.
A cough is a common and natural physiological response of the human body that expels air and other waste from the airways. Coughs occur due to environmental factors, allergic responses, pollution, or disease, and can be either dry or wet depending on the amount of mucus produced. A characteristic feature of the cough is its sound, which is mostly a quacking sound. Human cough sounds can be monitored continuously, so cough sound classification has attracted considerable interest in the research community over the last decade. In this research, three systematic conglomerated models (SCMs) are proposed for audio cough signal classification. The first conglomerated technique combines robust models such as the Cross-Correlation Function (CCF) and Partial Cross-Correlation Function (PCCF) models, the Least Absolute Shrinkage and Selection Operator (LASSO) model, and an elastic net regularization model with Gabor dictionary analysis and efficient ensemble machine learning techniques. The second technique uses stacked conditional autoencoders (SAEs). The third technique combines efficient feature extraction schemes, such as the Tunable Q Wavelet Transform (TQWT), sparse TQWT, the Maximal Information Coefficient (MIC), and the Distance Correlation Coefficient (DCC), with feature selection techniques such as the Binary Tunicate Swarm Algorithm (BTSA), aggregation functions (AFs), factor analysis (FA), and explanatory factor analysis (EFA), classified with machine learning classifiers such as the kernel extreme learning machine (KELM), arc-cosine ELM, and Rat Swarm Optimization (RSO)-based KELM. The techniques are evaluated on publicly available datasets, and the results show that the highest classification accuracy, 98.99%, was obtained when sparse TQWT with AF was implemented with an arc-cosine ELM classifier.
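As a small example of one of the feature families named above, a plain cross-correlation function over a few lags might look like the sketch below — a generic CCF between two signals, not the paper's exact CCF/PCCF pipeline.

```python
def cross_correlation(x, y, max_lag=3):
    """Plain cross-correlation function (CCF) values at lags 0..max_lag,
    one of the feature families mentioned for cough-signal models.
    Means are removed so the features measure co-variation."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    feats = []
    for lag in range(max_lag + 1):
        s = sum((x[i] - mx) * (y[i + lag] - my) for i in range(n - lag))
        feats.append(s / n)
    return feats
```

In a cough classifier, such lag features (and their partial-correlation counterparts) would be computed over windows of the audio signal and fed to the downstream learner.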
"SCMs: Systematic Conglomerated Models for Audio Cough Signal Classification," S. Prabhakar, Dong-Ok Won. Algorithms, published 2024-07-08, doi:10.3390/a17070302.
In this paper, an algorithm is proposed to solve the non-convex optimization problem using sequential convex programming. An approximation method is used to handle the collision avoidance constraint: an iterative approach estimates the non-convex constraints, replacing them with their linear approximations. Simulations show that this method allows quadcopters to take off from a given initial position and fly to the desired final position within a specified flight time, and guarantees that the quadcopters will not collide with each other in different scenarios.
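The linear replacement of a collision-avoidance constraint can be sketched with the standard SCP linearization of ||pi − pj|| ≥ R about the previous iterate: the constraint becomes a·(pi − pj) ≥ R, where a is the unit vector between the old positions. By the Cauchy–Schwarz inequality this linear constraint implies the original one. The radius and positions below are illustrative.

```python
import math

def linearized_separation(prev_pi, prev_pj, R=1.0):
    """Linear under-approximation of ||pi - pj|| >= R around the previous
    iterate (prev_pi, prev_pj): a . (pi - pj) >= R with a the unit vector
    from prev_pj to prev_pi. Returns a checker for candidate positions."""
    diff = [a - b for a, b in zip(prev_pi, prev_pj)]
    norm = math.sqrt(sum(d * d for d in diff))
    a = [d / norm for d in diff]

    def satisfied(pi, pj):
        return sum(ai * (x - y) for ai, x, y in zip(a, pi, pj)) >= R

    return satisfied

# Two quadcopters previously 2 m apart; require 1 m of separation.
check = linearized_separation([0.0, 0.0], [2.0, 0.0], R=1.0)
```

In the full algorithm, one such linear constraint per vehicle pair and time step is added to each convex subproblem, and the linearization point is refreshed every SCP iteration.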
"Sequential Convex Programming for Nonlinear Optimal Control in UAV Trajectory Planning," Yong Li, Qidan Zhu, A. Elahi. Algorithms, published 2024-07-08, doi:10.3390/a17070304.
In the era of inclusive education, students with attention deficits are integrated into the general classroom. To help students’ focus follow the teacher’s instruction throughout the course and keep up with the teaching pace, this paper proposes a continuous recognition algorithm for capturing teachers’ dynamic gesture signals, aiming to offer instructional attention cues for students with attention deficits. Based on the body landmarks of the teacher’s skeleton extracted by the vision- and machine-learning-based MediaPipe BlazePose, the proposed method uses simple rules to detect the teacher’s hand signals dynamically and provides three kinds of attention cues (Pointing to left, Pointing to right, and Non-pointing) during the class. Experimental results show that the average accuracy, sensitivity, specificity, precision, and F1 score reached 88.31%, 91.03%, 93.99%, 86.32%, and 88.03%, respectively. By analyzing non-verbal behavior, our method achieves competent performance, can replace verbal reminders from the teacher, and can help students with attention deficits in inclusive education.
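A rule of the kind described might look like the toy classifier below, which compares wrist and shoulder x-coordinates from pose landmarks. The threshold and the exact rule are illustrative assumptions, not the paper's rules.

```python
def classify_pointing(left_wrist, right_wrist, left_shoulder, right_shoulder):
    """Toy rule-based attention cue over (x, y) pose landmarks in image
    coordinates, x growing rightward: a hand 'points' when the wrist
    extends well past its shoulder (threshold is illustrative)."""
    reach = 0.3  # how far the wrist must extend past the shoulder
    if right_wrist[0] > right_shoulder[0] + reach:
        return "Pointing to right"
    if left_wrist[0] < left_shoulder[0] - reach:
        return "Pointing to left"
    return "Non-pointing"
```

A continuous recognizer would apply such a per-frame rule to the BlazePose landmark stream and smooth the decisions over time before emitting a cue.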
"Continuous Recognition of Teachers’ Hand Signals for Students with Attention Deficits," Ivane Delos Santos Chen, Chieh-Ming Yang, Shang-Shu Wu, Chih-Kang Yang, Mei-Juan Chen, Chia-Hung Yeh, Yuan-Hong Lin. Algorithms, published 2024-07-07, doi:10.3390/a17070300.
Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to be augmented by FOMO to make these datapath caching algorithms better suited for non-datapath caches. Operating in two states, FOMO is selective—it selectively disables cache insertion and replacement depending on the learned behavior of the workload. FOMO is lightweight and tracks inexpensive metrics in order to identify these workload behaviors effectively. FOMO is evaluated using three different cache replacement policies against the current state-of-the-art non-datapath caching algorithms, using five different storage system workload repositories (totaling 176 workloads) for six different cache size configurations, each sized as a percentage of each workload’s footprint. Our extensive experimental analysis reveals that FOMO can improve upon other non-datapath caching algorithms across a range of production storage workloads, while also reducing the write rate.
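The bimodal idea — an ordinary replacement policy whose insertions can be switched off — can be sketched with LRU as the underlying datapath policy. The miss-history rule below (insert only keys that have missed recently, i.e. show reuse) is a toy stand-in for FOMO's actual state detector, not the published algorithm.

```python
from collections import OrderedDict

class SelectiveLRU:
    """Minimal bimodal non-datapath cache in the spirit of FOMO: an LRU
    cache that skips insertion on first-touch misses and inserts only
    keys that re-miss within a short window (a toy reuse detector)."""

    def __init__(self, capacity, window=8):
        self.capacity = capacity
        self.cache = OrderedDict()          # key -> True, LRU order
        self.recent_misses = OrderedDict()  # bounded miss history
        self.window = window

    def access(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # refresh LRU position
            return True                     # hit
        # Miss: a key seen in the recent-miss history shows reuse,
        # so only then is insertion (and a possible eviction) allowed.
        insert = key in self.recent_misses
        self.recent_misses[key] = True
        if len(self.recent_misses) > self.window:
            self.recent_misses.popitem(last=False)
        if insert:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict LRU victim
            self.cache[key] = True
        return False
```

Skipping first-touch insertions is what saves writes on a flash cache: a scan of never-reused blocks passes through without dirtying the device.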
"To Cache or Not to Cache," Steven Lyons, R. Rangaswami. Algorithms, published 2024-07-07, doi:10.3390/a17070301.
Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: given an n-dimensional probability distribution p and an integer m<n, we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it.
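For small n the flavour of the problem can be explored by brute force. The sketch below restricts to contiguous groupings of the support and compares p against a lifted q that splits each group's mass evenly over the group; this lifting is one illustrative way to relate distributions of different dimensions, not necessarily the paper's exact formulation.

```python
import math
from itertools import combinations

def kl(p, r):
    """Kullback-Leibler divergence D(p || r) over a common support."""
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r) if pi > 0)

def best_contiguous_reduction(p, m):
    """Brute force over contiguous partitions of p into m blocks.
    q aggregates each block's mass; the lifted version spreads each
    block's mass evenly back over the block before measuring KL."""
    n = len(p)
    best_q, best_d = None, float("inf")
    for cuts in combinations(range(1, n), m - 1):
        bounds = (0,) + cuts + (n,)
        blocks = list(zip(bounds, bounds[1:]))
        q = [sum(p[a:b]) for a, b in blocks]
        lifted = []
        for (a, b), mass in zip(blocks, q):
            lifted += [mass / (b - a)] * (b - a)
        d = kl(p, lifted)
        if d < best_d:
            best_q, best_d = q, d
    return best_q, best_d
```

The hardness result concerns the general problem (arbitrary groupings, large n); this exhaustive search is exponential and only meant to make the objective tangible.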
"Hardness and Approximability of Dimension Reduction on the Probability Simplex," R. Bruno. Algorithms, published 2024-07-06, doi:10.3390/a17070296.
We investigate relationships between scheduling problems with the bottleneck objective functions (minimising makespan or maximal lateness) and problems of optimal colourings of the mixed graphs. The investigated scheduling problems have integer durations of the multi-processor tasks (operations), integer release dates and integer due dates of the given jobs. In the studied scheduling problems, it is required to find an optimal schedule for processing the partially ordered operations, given that operation interruptions are allowed and indicated subsets of the unit-time operations must be processed simultaneously. First, we show that the input data for any considered scheduling problem can be completely determined by the corresponding mixed graph. Second, we prove that solvable scheduling problems can be reduced to problems of finding optimal colourings of corresponding mixed graphs. Third, finding an optimal colouring of the mixed graph is equivalent to the considered scheduling problem determined by the same mixed graph. Finally, due to the proven equivalence of the considered optimisation problems, most of the results that were proven for the optimal colourings of mixed graphs generate similar results for considered scheduling problems, and vice versa.
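Under one common definition of mixed graph colouring (undirected edges demand distinct colours; directed arcs demand strictly increasing colours, capturing precedence between operations), checking a candidate colouring is straightforward. The definition used here is an assumption for illustration; the paper's exact constraints may differ.

```python
def is_valid_mixed_colouring(n_colours, edges, arcs, colour):
    """Checks a colouring of a mixed graph: undirected edges {u, v} need
    c(u) != c(v); directed arcs (u, v) need c(u) < c(v), encoding that
    operation u must complete before operation v starts."""
    if any(not 1 <= colour[v] <= n_colours for v in colour):
        return False
    if any(colour[u] == colour[v] for u, v in edges):
        return False
    return all(colour[u] < colour[v] for u, v in arcs)
```

In the scheduling reading, colours play the role of unit time slots: edges model operations that compete for a processor, and arcs model the partial order on operations.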
"Mixed Graph Colouring as Scheduling a Partially Ordered Set of Interruptible Multi-Processor Tasks with Integer Due Dates," Evangelina I. Mihova, Y. Sotskov. Algorithms, published 2024-07-06, doi:10.3390/a17070299.
A metaheuristic algorithm named the Crystal Structure Algorithm (CrSA), inspired by the symmetric arrangement of atoms, molecules, or ions in crystalline minerals, has been used for the accurate modeling of Mono Passivated Emitter and Rear Cell (PERC) WSMD-545 and CS7L-590 MS solar photovoltaic (PV) modules. The suggested algorithm is a concise, parameter-free approach that does not need the identification of any intrinsic parameter during the optimization stage; it is based on generating crystal structures by combining the basis and lattice points. The proposed algorithm is adopted to minimize the sum of the squares of the errors at the maximum power point, as well as at the short-circuit and open-circuit points. Several runs are carried out to examine the V-I characteristics of the PV panels under consideration and the nature of the derived parameters. The parameters generated by the proposed technique offer the lowest error across several executions, indicating its suitability for the present application. To validate the performance of the proposed approach, convergence curves of the Mono PERC WSMD-545 and CS7L-590 MS PV modules obtained using the CrSA are compared with convergence curves obtained using recent optimization algorithms (OAs) from the literature. It has been observed that the proposed approach exhibited the fastest rate of convergence on each of the PV panels.
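The objective described — the sum of squared errors at the maximum-power, short-circuit, and open-circuit points — can be sketched for a single-diode PV model, a standard implicit model for a module's V-I curve. All parameter values, the assumed cell count, and the datasheet-style points below are illustrative assumptions, not values from the paper.

```python
import math

def diode_current_residual(V, I, params):
    """Residual f(V, I) of the implicit single-diode model
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh,
    so f = 0 when (V, I) lies exactly on the modelled curve."""
    Iph, I0, n, Rs, Rsh = params
    Vt = 0.0259 * 72  # ~25.9 mV thermal voltage times an assumed 72 cells
    return (Iph - I0 * math.expm1((V + I * Rs) / (n * Vt))
            - (V + I * Rs) / Rsh - I)

def cost(params, points):
    """Sum of squared residuals at the three characteristic points,
    the kind of objective a metaheuristic like the CrSA minimises."""
    return sum(diode_current_residual(V, I, params) ** 2 for V, I in points)

# Illustrative datasheet-style points: (0, Isc), (Voc, 0), (Vmp, Imp).
points = [(0.0, 13.87), (49.6, 0.0), (41.0, 13.16)]
```

An optimizer would search over the five parameters (Iph, I0, n, Rs, Rsh) to drive this cost toward zero; the tables in such papers report the parameter set with the lowest residual across runs.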
"Crystal Symmetry-Inspired Algorithm for Optimal Design of Contemporary Mono Passivated Emitter and Rear Cell Solar Photovoltaic Modules," Ram Ishwar Vais, K. Sahay, Tirumalasetty Chiranjeevi, Ramesh Devarapalli, Ł. Knypiński. Algorithms, published 2024-07-06, doi:10.3390/a17070297.