
Latest Publications in Technologies

Multistage Malware Detection Method for Backup Systems
Pub Date : 2024-02-05 DOI: 10.3390/technologies12020023
Pavel Novák, V. Oujezský, Patrik Kaura, T. Horvath, M. Holik
This paper proposes an innovative solution to the challenge of detecting latent malware in backup systems. The proposed detection system uses a multifaceted approach that combines similarity analysis with machine learning algorithms to improve malware detection. The results demonstrate the potential of advanced similarity search techniques, powered by the Faiss library, to strengthen malware discovery in system backups and network traffic; implementing these techniques leads to more resilient cybersecurity practices, protecting essential systems from malicious threats hidden within backup archives and network data. The integration of AI methods improves the system’s efficiency and speed, making it more practical for real-world cybersecurity. The paper’s contribution is a novel, comprehensive solution designed to detect latent malware in backups, preventing the backup of compromised systems. The system comprises multiple analytical components, including a system file change detector, an agent that monitors network traffic, and a firewall, all integrated into a central decision-making unit. The current progress of the research and future steps are discussed, highlighting the project’s contributions and potential enhancements to cybersecurity practices.
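As a rough illustration of the similarity-analysis idea (not the authors' implementation — the paper uses Faiss-powered similarity search, while the byte-histogram features, cosine threshold, and function names below are hypothetical), a backup file can be flagged when its feature vector lies close to that of known malware:

```python
import math

def byte_histogram(data: bytes) -> list[float]:
    """256-bin normalized byte-frequency feature vector for a file blob."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data) or 1
    return [c / total for c in counts]

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two feature vectors (0.0 if either is empty)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def flag_suspicious(backup_files, malware_signatures, threshold=0.95):
    """Flag backup files whose feature vector is near any known-malware vector."""
    flagged = []
    for name, blob in backup_files.items():
        feats = byte_histogram(blob)
        score = max(cosine_similarity(feats, sig) for sig in malware_signatures)
        if score >= threshold:
            flagged.append((name, round(score, 3)))
    return flagged
```

In a production system, an approximate nearest-neighbor index such as Faiss would replace the linear scan over signatures.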
Citations: 0
Energy Efficiency in Additive Manufacturing: Condensed Review
Pub Date : 2024-02-05 DOI: 10.3390/technologies12020021
Ismail Fidan, Vivekanand Naikwadi, Suhas Alkunte, Roshan Mishra, Khalid Tantawi
Today, the use of additive manufacturing (AM) is growing significantly in almost every aspect of daily life. Many sectors are adopting and implementing this revolutionary production technology in their domain to increase production volumes, reduce the cost of production, fabricate lightweight and complex parts in a short period of time, and respond to the manufacturing needs of customers. AM technologies clearly consume energy to complete the production tasks of each part. Therefore, it is imperative to understand energy efficiency in order to use these advancing technologies economically and properly. This paper provides a holistic review of this important concept from the perspectives of process, materials science, industry, and initiatives. The goal of this research study is to collect and present the latest knowledge related to the energy consumption of AM technologies from a number of recent technical resources, spanning surveys, observations, experiments, case studies, content analyses, and archival research studies. The study highlights the current trends and technologies associated with energy efficiency and their influence on the AM community.
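As a hedged aside on what "energy consumption per part" typically means in this literature: specific energy consumption (SEC) relates machine power and build time to deposited mass. The function name and the example figures below are illustrative assumptions, not values taken from the review:

```python
def specific_energy(power_kw: float, build_time_h: float, part_mass_kg: float) -> float:
    """Specific energy consumption (SEC) in kWh/kg: energy drawn over part mass."""
    return power_kw * build_time_h / part_mass_kg

# Hypothetical comparison: a polymer extrusion build vs. a metal powder-bed build.
polymer_sec = specific_energy(power_kw=0.12, build_time_h=8.0, part_mass_kg=0.05)
metal_sec = specific_energy(power_kw=4.0, build_time_h=12.0, part_mass_kg=1.2)
```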
Citations: 0
Angle Calculus-Based Thrust Force Determination on the Blades of a 10 kW Wind Turbine
Pub Date : 2024-02-05 DOI: 10.3390/technologies12020022
J. R. Dorrego-Portela, A. E. Ponce-Martínez, Eduardo Pérez-Chaltell, Jaime Peña-Antonio, Carlos Alberto Mateos-Mendoza, J. Robles-Ocampo, P. Y. Sevilla-Camacho, Marcos Aviles, J. Rodríguez-Reséndíz
In this article, the behavior of the thrust force on the blades of a 10 kW wind turbine was obtained by considering the characteristic wind speed of the Isthmus of Tehuantepec. Analyzing mechanical forces is essential to designing the different elements that make up the wind turbine efficiently and safely, because the thrust forces are related to the location point and the blade rotation. For this reason, the thrust force generated in each of the three blades of a low-power wind turbine was analyzed. The angular position (θ) of each blade varied from 0° to 120°, the blades were segmented (r), and different wind speeds were tested: cut-in, design, average, and maximum. The results demonstrate that the thrust force increases proportionally with wind speed and height, but it behaves differently on each blade segment and at each angular position. This method determines the angular position and the exact blade segment where the smallest and largest thrust forces occurred. Blade 1, positioned at an angular position of 90°, is the blade most affected by the thrust force on P15. When the blade rotates 180°, the thrust force decreases by 9.09 N, a 66.74% decrease. In addition, this study allows designers to estimate the blade deflection caused by the thrust force; this information can be used to avoid collision with the tower. The thrust forces caused blade deflections of 10% to 13% relative to the rotor radius used in this study. These results guarantee the operation of the tested generator under its working conditions.
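The height dependence reported above (thrust dropping as a blade swings from the top toward the bottom of its rotation) can be sketched with a power-law wind-shear profile feeding a blade-element thrust term. This is not the authors' angle-calculus formulation; the hub height, shear exponent, thrust coefficient, and segment area below are assumed placeholder values:

```python
import math

RHO = 1.225        # air density, kg/m^3
HUB_HEIGHT = 18.0  # m, assumed for a 10 kW turbine
ALPHA = 0.14       # wind-shear power-law exponent, assumed

def wind_speed(v_ref: float, z: float, z_ref: float = HUB_HEIGHT) -> float:
    """Power-law vertical wind profile: v(z) = v_ref * (z / z_ref) ** alpha."""
    return v_ref * (z / z_ref) ** ALPHA

def segment_thrust(v_ref: float, r: float, theta_deg: float,
                   ct: float = 0.8, area: float = 0.05) -> float:
    """Thrust (N) on one blade segment at radius r and angular position theta.

    theta = 0 deg points straight up, so the segment height, and hence the
    local wind speed, varies as the blade rotates."""
    z = HUB_HEIGHT + r * math.cos(math.radians(theta_deg))
    v = wind_speed(v_ref, z)
    return 0.5 * RHO * v ** 2 * ct * area
```

Sweeping theta_deg from 0° to 360° for each segment radius r reproduces the qualitative pattern: maximum thrust near the top of the rotation, minimum near the bottom.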
Citations: 0
Parametric Metamodeling Based on Optimal Transport Applied to Uncertainty Evaluation
Pub Date : 2024-02-02 DOI: 10.3390/technologies12020020
S. Torregrosa, David Muñoz, Vincent Herbert, F. Chinesta
When training a parametric surrogate to represent a real-world complex system in real time, there is a common assumption that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, the focus is directed exclusively towards optimizing the accuracy of the surrogate’s output. However, real physics is characterized by increased complexity and unpredictability; notably, a certain degree of uncertainty may exist in determining the system’s parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport (OT) to infer, in real time, the impact of the uncertainty of the surrogate’s input on its output precision. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones.
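The Monte Carlo propagation step can be sketched in a few lines: sample the uncertain parameters around their nominal values, push each sample through the (already trained) surrogate, and summarize the spread of the outputs. The Gaussian input model and the function names here are assumptions for illustration, not the paper's specification:

```python
import random
import statistics

def propagate_uncertainty(surrogate, mean_params, stdev_params,
                          n_samples=10_000, seed=0):
    """Monte Carlo propagation: sample uncertain inputs independently as
    Gaussians, evaluate the surrogate on each sample, and return the
    mean and standard deviation of the outputs."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        sample = [rng.gauss(m, s) for m, s in zip(mean_params, stdev_params)]
        outputs.append(surrogate(sample))
    return statistics.mean(outputs), statistics.stdev(outputs)
```

A plain summary like this only yields moments offline; the paper's OT-based regression instead targets real-time inference of the output fields themselves.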
Citations: 0
An Optimum Load Forecasting Strategy (OLFS) for Smart Grids Based on Artificial Intelligence
Pub Date : 2024-02-01 DOI: 10.3390/technologies12020019
A. H. Rabie, Ahmed I. Saleh, Said H. Abd Elkhalik, Ali E. Takieldeen
Recently, the application of Artificial Intelligence (AI) in many areas of life has raised the efficiency of systems and converted them into smart ones, especially in the field of energy. Integrating AI with power systems allows electrical grids to become smart enough to predict future load, a capability known as Intelligent Load Forecasting (ILF). Hence, suitable decisions for power system planning and operation procedures can be taken accordingly. Moreover, ILF can play a vital role in electrical demand response, which guarantees a reliable transition of power systems. This paper introduces an Optimum Load Forecasting Strategy (OLFS) for predicting future load in smart electrical grids based on AI techniques. The proposed OLFS consists of two sequential phases: a Data Preprocessing Phase (DPP) and a Load Forecasting Phase (LFP). In the former, the input electrical load dataset is prepared for forecasting through two essential tasks, namely feature selection and outlier rejection. Feature selection is carried out using Advanced Leopard Seal Optimization (ALSO), a new nature-inspired optimization technique, while outlier rejection is accomplished through the Interquartile Range (IQR) as a measure of statistical dispersion. Actual load forecasting then takes place in the LFP using a new predictor called the Weighted K-Nearest Neighbor (WKNN) algorithm. The proposed OLFS has been tested through extensive experiments. Results show that OLFS outperforms recent load forecasting techniques, achieving the maximum prediction accuracy with the minimum root mean square error.
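To make two of the building blocks concrete, here is a minimal sketch of IQR-based outlier rejection and a distance-weighted k-NN forecast. The 1.5×IQR fence and inverse-distance weighting are common conventions assumed for illustration; the paper's ALSO feature selector is not reproduced here:

```python
import statistics

def iqr_filter(series):
    """Drop points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(series, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in series if lo <= x <= hi]

def wknn_forecast(history, query, k=3):
    """Weighted k-NN: the k nearest past observations vote with weight 1/distance.

    history: list of (feature_vector, load) pairs; query: feature_vector."""
    dists = []
    for feats, load in history:
        d = sum((a - b) ** 2 for a, b in zip(feats, query)) ** 0.5
        dists.append((d, load))
    dists.sort(key=lambda t: t[0])
    num = den = 0.0
    for d, load in dists[:k]:
        w = 1.0 / (d + 1e-9)  # epsilon guards against division by zero
        num += w * load
        den += w
    return num / den
```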
Citations: 0
Comprehensive Study of Compression and Texture Integration for Digital Imaging and Communications in Medicine Data Analysis
Pub Date : 2024-01-24 DOI: 10.3390/technologies12020017
A. Shakya, Anurag Vidyarthi
In response to the COVID-19 pandemic and its strain on healthcare resources, this study presents a comprehensive review of techniques that integrate image compression and statistical texture analysis to optimize the storage of Digital Imaging and Communications in Medicine (DICOM) files. In evaluating four predominant image compression algorithms, i.e., the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the fractal compression algorithm (FCA), and the vector quantization algorithm (VQA), this study focuses on their ability to compress data while preserving essential texture features such as contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM). A pivotal observation concerns the direction-independent Grey Level Co-occurrence Matrix (GLCM) in DICOM analysis, which reveals intriguing variations between two intermediate scans measured with texture characteristics. Performance-wise, the DCT, DWT, FCA, and VQA algorithms achieved minimum compression ratios (CRs) of 27.87, 37.91, 33.26, and 27.39, respectively, with maximum CRs of 34.48, 68.96, 60.60, and 38.74. This study also undertook a statistical analysis of distinct chest CT scans from COVID-19 patients, highlighting evolving texture patterns. Finally, this work underscores the potential of coupling image compression with texture feature quantification for monitoring changes in human chest conditions, offering a promising avenue for efficient storage and diagnostic assessment of critical medical imaging.
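As a minimal sketch of two of the GLCM texture features named above (contrast and ASM), computed here for a single pixel offset on a pre-quantized image — a direction-independent variant would average the matrices over several offsets, and the small grey-level count is an assumption for brevity:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized Grey Level Co-occurrence Matrix for one pixel offset (dx, dy).

    image: 2-D list of grey levels already quantized to range(levels)."""
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """GLCM contrast: sum over (i - j)^2 * p(i, j)."""
    n = len(p)
    return sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))

def asm(p):
    """Angular second moment (energy): sum over p(i, j)^2."""
    return sum(v * v for row in p for v in row)
```

A perfectly uniform image yields zero contrast and maximal ASM, which is why these features are sensitive to the texture changes compression can introduce.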
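Not from the paper itself, but as an illustration of the GLCM texture features the abstract names (contrast, correlation, ASM, IDM), a minimal NumPy sketch of a symmetric, normalised co-occurrence matrix and its derived statistics could look like the following; the `glcm` helper, the 8-level quantization, and the toy image are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Symmetric, normalised Grey Level Co-occurrence Matrix for one displacement."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = image[y, x], image[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1  # symmetry makes the matrix direction-independent for +/- offsets
    return m / m.sum()

def texture_features(p):
    """Contrast, ASM, IDM, and correlation of a normalised GLCM p."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    asm = np.sum(p ** 2)                    # angular second moment
    idm = np.sum(p / (1.0 + (i - j) ** 2))  # inverse difference moment
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    corr = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return {"contrast": contrast, "ASM": asm, "IDM": idm, "correlation": corr}

# Toy "scan": quantize random 8-bit intensities down to 8 grey levels.
rng = np.random.default_rng(0)
scan = rng.integers(0, 256, size=(64, 64))
quantized = (scan // 32).astype(int)
print(texture_features(glcm(quantized)))
```

Comparing these four statistics before and after lossy compression is one simple way to check, as the study does, whether a given CR still preserves the texture content of a scan.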
Citations: 0
A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision
Pub Date : 2024-01-23 DOI: 10.3390/technologies12020015
Nikoleta Manakitsa, George S. Maraslidis, L. Moysis, G. Fragulis
Machine vision, an interdisciplinary field that aims to replicate human visual perception in computers, has seen rapid progress and significant contributions. This paper traces the origins of machine vision, from early image processing algorithms to its convergence with computer science, mathematics, and robotics, resulting in a distinct branch of artificial intelligence. The integration of machine learning techniques, particularly deep learning, has driven its growth and adoption in everyday devices. This study focuses on the objectives of computer vision systems: replicating human visual capabilities, including recognition, comprehension, and interpretation. Notably, image classification, object detection, and image segmentation are crucial tasks requiring robust mathematical foundations. Despite these advancements, challenges persist, such as clarifying terminology related to artificial intelligence, machine learning, and deep learning; precise definitions and interpretations are vital for establishing a solid research foundation. The evolution of machine vision reflects an ambitious journey to emulate human visual perception, and interdisciplinary collaboration, together with the integration of deep learning techniques, has propelled remarkable advances in emulating human behavior and perception. Through this research, the field of machine vision continues to shape the future of computer systems and artificial intelligence applications.
Citations: 0
Machine Learning Approaches to Predict Major Adverse Cardiovascular Events in Atrial Fibrillation
Pub Date : 2024-01-23 DOI: 10.3390/technologies12020013
Pedro Moltó-Balado, Sílvia Reverté-Villarroya, Victor Alonso-Barberán, Cinta Monclús-Arasa, M. T. Balado-Albiol, Josep Clua-Queralt, J. Clua-Espuny
The increasing prevalence of atrial fibrillation (AF) and its association with Major Adverse Cardiovascular Events (MACE) present challenges in early identification and treatment. Although existing risk factors, biomarkers, genetic variants, and imaging parameters predict MACE, emerging factors may be more decisive. Artificial intelligence and machine learning (ML) techniques offer a promising avenue for more effective prediction of AF evolution. Five ML models were developed to obtain predictors of MACE in AF patients. Two-thirds of the data were used for training, employing diverse approaches and optimizing to minimize prediction errors, while the remaining third was reserved for testing and validation. AdaBoost emerged as the top-performing model (accuracy: 0.9999; recall: 1; F1 score: 0.9997). Noteworthy features influencing predictions included the Charlson Comorbidity Index (CCI), diabetes mellitus, cancer, the Wells scale, and CHA2DS2-VASc, with specific associations identified. Elevated MACE risk was observed, with a CCI score exceeding 2.67 ± 1.31 (p < 0.001), a CHA2DS2-VASc score of 4.62 ± 1.02 (p < 0.001), and an intermediate-risk Wells scale classification. Overall, the AdaBoost ML model offers an alternative predictive approach that facilitates the early identification of MACE risk in the assessment of patients with AF.
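The clinical dataset behind the abstract above is not available here, but the pipeline it describes (a two-thirds/one-third split, an AdaBoost classifier, and accuracy/recall/F1 evaluation) can be sketched with scikit-learn. The synthetic features below merely stand in for the named predictors (CCI, diabetes, cancer, Wells scale, CHA2DS2-VASc); the data, seeds, and hyperparameters are illustrative assumptions, so the scores will not match the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the five clinical predictors named in the abstract.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=5,
                           n_redundant=0, random_state=42)

# Two-thirds for training, one-third held out, mirroring the paper's split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3,
                                          random_state=42, stratify=y)

# Boosted ensemble of shallow trees; fit on the training split only.
model = AdaBoostClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy: {accuracy_score(y_te, pred):.4f}")
print(f"recall:   {recall_score(y_te, pred):.4f}")
print(f"F1 score: {f1_score(y_te, pred):.4f}")
```

Reporting all three metrics on the held-out third, as the study does, guards against a model that scores well on accuracy alone while missing positive (MACE) cases.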
Citations: 0