
Algorithms: Latest Publications

A Sparsity-Invariant Model via Unifying Depth Prediction and Completion
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-06 DOI: 10.3390/a17070298
Shuling Wang, Fengze Jiang, Xiaojin Gong
The development of a sparse-invariant depth completion model capable of handling varying levels of input depth sparsity is highly desirable in real-world applications. However, existing sparse-invariant models tend to degrade when the input depth points are extremely sparse. In this paper, we propose a new model that combines the advantageous designs of depth completion and monocular depth estimation tasks to achieve sparse invariance. Specifically, we construct a dual-branch architecture with one branch dedicated to depth prediction and the other to depth completion. Additionally, we integrate the multi-scale local planar module in the decoders of both branches. Experimental results on the NYU Depth V2 benchmark and the OPPO prototype dataset equipped with the Spot-iToF316 sensor demonstrate that our model achieves reliable results even in cases with irregularly distributed, limited or absent depth information.
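A toy numerical illustration of why a dual-branch design can stay robust across sparsity levels: fuse a monocular-prediction branch with a completion branch, weighted by the density of the sparse input depth. The fusion rule and names below are hypothetical illustrations, not the paper's actual architecture.

```python
def fuse_branches(pred, comp, mask):
    """Blend per-pixel depths from a prediction branch (needs no sparse
    input) and a completion branch, weighted by how dense the sparse
    input depth is. Hypothetical illustration only."""
    density = sum(mask) / len(mask)  # fraction of pixels with input depth
    # with no input depth at all, fall back entirely to monocular prediction
    return [density * c + (1.0 - density) * p for p, c in zip(pred, comp)]
```

With an empty depth input (`mask` all zeros) the output degenerates to the prediction branch, mirroring the claim that the model still produces reliable results when depth information is absent.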
Citations: 0
Logical Execution Time and Time-Division Multiple Access in Multicore Embedded Systems: A Case Study
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-05 DOI: 10.3390/a17070294
Carlos-Antonio Mosqueda-Arvizu, J. Romero-González, Diana-Margarita Córdova-Esparza, Juan R. Terven, Ricardo Chaparro-Sánchez, J. Rodríguez-Reséndíz
The automotive industry has recently adopted multicore processors and microcontrollers to meet the requirements of new features, such as autonomous driving, and comply with the latest safety standards. However, inter-core communication poses a challenge in ensuring real-time requirements such as time determinism and low latencies. Concurrent access to shared buffers makes predicting the flow of data difficult, leading to decreased algorithm performance. This study explores the integration of Logical Execution Time (LET) and Time-Division Multiple Access (TDMA) models in multicore embedded systems to address the challenges in inter-core communication by synchronizing read/write operations across different cores, significantly reducing latency variability and improving system predictability and consistency. Experimental results demonstrate that this integrated approach eliminates data loss and maintains fixed operation rates, achieving a consistent latency of 11 ms. The LET-TDMA method reduces latency variability to approximately 1 ms, maintaining a maximum delay of 1.002 ms and a minimum delay of 1.001 ms, compared to the variability in the LET-only method, which ranged from 3.2846 ms to 8.9257 ms for different configurations.
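The core LET idea (outputs written during a slot become visible to readers only at the TDMA slot boundary) can be sketched in a few lines. The class and function names below are illustrative, not taken from the paper's implementation.

```python
class Core:
    """Toy core whose output is double-buffered: writes land in `pending`
    and become visible in `published` only at the TDMA slot boundary."""
    def __init__(self):
        self.pending = None
        self.published = None

def run_rounds(writer, values):
    """One TDMA slot per value. Because publication happens only at the
    boundary, end-to-end latency is a constant one slot, no matter when
    inside the slot the write actually occurred."""
    seen, latencies = [], []
    for v in values:
        writer.pending = v                 # write at some arbitrary point in the slot
        writer.published = writer.pending  # slot boundary: publish atomically
        seen.append(writer.published)      # reader samples at its next slot start
        latencies.append(1)                # latency in slots: deterministic
    return seen, latencies
```

The constant per-slot latency is the mechanism behind the fixed operation rates reported above; the millisecond-level figures themselves come from the paper's hardware setup, not from this sketch.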
Citations: 0
VMP-ER: An Efficient Virtual Machine Placement Algorithm for Energy and Resources Optimization in Cloud Data Center
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-05 DOI: 10.3390/a17070295
Hasanein D. Rjeib, Gabor Kecskemeti
Cloud service providers deliver computing services on demand using the Infrastructure as a Service (IaaS) model. In a cloud data center, several virtual machines (VMs) can be hosted on a single physical machine (PM) with the help of virtualization. Virtual machine placement (VMP) involves assigning VMs across various physical machines, a crucial process that affects energy draw and resource usage in the cloud data center. Nonetheless, finding an effective placement is challenging owing to factors such as hardware heterogeneity and the scalability of cloud data centers. This paper proposes an efficient algorithm named VMP-ER aimed at optimizing power consumption and reducing resource wastage. Our algorithm achieves this by decreasing the number of running physical machines, and it gives priority to energy-efficient servers. Additionally, it improves resource utilization across physical machines, thus minimizing wastage and ensuring balanced resource allocation.
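A greedy sketch of the stated goals (fewer running PMs, energy-efficient servers first) might look as follows. This is an assumption-laden stand-in for illustration, not the published VMP-ER algorithm.

```python
def place_vms(vm_demands, pms):
    """Place VMs (given as CPU demands) onto PMs given as tuples of
    (capacity, power_per_unit), preferring energy-efficient machines and
    packing large VMs first so fewer PMs end up running. Illustrative only."""
    order = sorted(range(len(pms)), key=lambda i: pms[i][1])  # most efficient first
    free = {i: pms[i][0] for i in order}
    placement = {}
    for demand in sorted(vm_demands, reverse=True):           # big VMs first
        for i in order:
            if free[i] >= demand:
                free[i] -= demand
                placement.setdefault(i, []).append(demand)
                break
        else:
            raise ValueError(f"no PM can host a VM of demand {demand}")
    return placement
```

For example, with `pms = [(10, 2.0), (10, 1.0)]` and demands `[4, 4, 2]`, everything lands on the more efficient machine and only one PM runs.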
Citations: 0
A Histogram Publishing Method under Differential Privacy That Involves Balancing Small-Bin Availability First
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-04 DOI: 10.3390/a17070293
Jianzhang Chen, Shuo Zhou, Jie Qiu, Yixin Xu, Bozhe Zeng, Wanchuan Fang, Xiangying Chen, Yipeng Huang, Zhengquan Xu, Youqin Chen
Differential privacy, a cornerstone of privacy-preserving techniques, plays an indispensable role in ensuring the secure handling and sharing of sensitive data analysis across domains such as census, healthcare, and social networks. Histograms, serving as a visually compelling tool for presenting analytical outcomes, are widely employed in these sectors. Currently, numerous algorithms for publishing histograms under differential privacy have been developed, striving to balance privacy protection with the provision of useful data. Nonetheless, the pivotal challenge of effectively enhancing the precision of small bins (intervals that are narrowly defined or contain relatively few data points) within histograms has yet to receive adequate attention and in-depth investigation. In standard DP histogram publishing, adding noise without regard for bin size can leave small bins disproportionately influenced by noise, potentially severely impairing the overall accuracy of the histogram. In response to this challenge, this paper introduces the SReB_GCA sanitization algorithm, designed to enhance the accuracy of small bins in DP histograms. The SReB_GCA approach sorts the bins from smallest to largest and applies a greedy grouping strategy, with a predefined lower bound on the mean relative error required for a bin to be included in a group. Our theoretical analysis reveals that sorting bins in ascending order prior to grouping effectively prioritizes the accuracy of smaller bins. SReB_GCA ensures strict ϵ-DP compliance and strikes a careful balance between reconstruction error and noise error, thereby not only improving the accuracy of small bins but also approximately optimizing the mean relative error of the entire histogram. To validate the efficiency of the proposed SReB_GCA method, we conducted extensive experiments on four diverse datasets, including two real-life datasets and two synthetic ones. The experimental results, quantified by the Kullback–Leibler divergence (KLD), show that the SReB_GCA algorithm achieves a substantial performance enhancement over the baseline method (DP_BASE) and several other established approaches for differential-privacy histogram publication.
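The sort-then-greedily-group step can be sketched as below. The grouping criterion and noise placement here are simplified assumptions for illustration; the published SReB_GCA criterion (a lower bound on the mean relative error) is more refined, and `err_bound` is a hypothetical parameter.

```python
import math
import random

def laplace(scale):
    """Draw Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_histogram_grouped(counts, epsilon, err_bound=0.5):
    """Sort bins ascending, greedily group bins whose relative deviation
    from the running group mean stays under err_bound, then release one
    noisy mean per group. Simplified sketch of the SReB_GCA idea."""
    order = sorted(range(len(counts)), key=lambda i: counts[i])
    noisy = [0.0] * len(counts)
    group = []

    def flush():
        if not group:
            return
        mean = sum(counts[i] for i in group) / len(group)
        released = mean + laplace(1.0 / epsilon) / len(group)
        for i in group:
            noisy[i] = released

    for i in order:
        if group:
            mean = sum(counts[j] for j in group) / len(group)
            if abs(counts[i] - mean) / max(counts[i], 1) > err_bound:
                flush()
                group.clear()
        group.append(i)
    flush()
    return noisy
```

Averaging the noise over a group is what lifts small-bin accuracy in this sketch: the reconstruction error from sharing a group mean is traded against a noise error shrunk by the group size.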
Citations: 0
Central Kurdish Text-to-Speech Synthesis with Novel End-to-End Transformer Training
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-03 DOI: 10.3390/a17070292
Hawraz A. Ahmad, Rebwar Khalid Hamad
Recent advancements in text-to-speech (TTS) models have aimed to streamline the two-stage process into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to enhance text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent text-to-speech advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitch and rhythm. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with a one-stage system and other two-stage systems, as assessed through a subjective human evaluation.
Citations: 0
Prime Time Tactics—Sieve Tweaks and Boosters
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-03 DOI: 10.3390/a17070291
Mircea Ghidarcea, Decebal Popescu
In a landscape where interest in prime sieving has waned and practitioners are few, we are still hoping for a domain renaissance, fueled by a resurgence of interest and a fresh wave of innovation. Building upon years of extensive research and experimentation, this article aims to contribute by presenting a heterogeneous compilation of generic tweaks and boosters aimed at revitalizing prime sieving methodologies. Drawing from a wealth of resurfaced knowledge and refined sieving algorithms, techniques, and optimizations, we unveil a diverse array of strategies designed to elevate the efficiency, accuracy, and scalability of prime sieving algorithms; these tweaks and boosters represent a synthesis of old wisdom and new discoveries, offering practical guidance for researchers and practitioners alike.
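As a concrete example of the kind of generic tweak the article surveys, here is a textbook segmented sieve (cache-sized blocks, base primes reused per block). This is standard material, not a method specific to the paper.

```python
import math

def simple_sieve(n):
    """Plain sieve of Eratosthenes up to n."""
    if n < 2:
        return []
    flags = bytearray([1]) * (n + 1)
    flags[0] = flags[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def segmented_sieve(limit, segment_size=1 << 15):
    """Sieve [2, limit] in cache-friendly segments, crossing off multiples
    of the base primes (primes up to sqrt(limit)) inside each block."""
    if limit < 2:
        return []
    base = simple_sieve(math.isqrt(limit))
    primes = []
    for low in range(2, limit + 1, segment_size):
        high = min(low + segment_size - 1, limit)
        block = bytearray([1]) * (high - low + 1)
        for p in base:
            # first multiple of p in [low, high] that is not p itself
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high + 1, p):
                block[m - low] = 0
        primes.extend(low + i for i, f in enumerate(block) if f)
    return primes
```

Odds-only bit packing, wheel factorization, and bucket sieving are further boosters in the same spirit.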
Citations: 0
Federated Learning-Based Security Attack Detection for Multi-Controller Software-Defined Networks
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-02 DOI: 10.3390/a17070290
A. Alkhamisi, Iyad Katib, Seyed M. Buhari
Multi-controller Software-Defined Networking (MC-SDN) is a promising architecture for evolving, complex, and expansive large-scale modern network environments. Despite the rich operational flexibility of MC-SDN, it is imperative to protect the network deployment against potential vulnerabilities that lead to misuse and malicious activities on data planes. Security holes in the MC-SDN significantly impact network survivability, leaving the data plane vulnerable to potential security threats and unintended consequences. Accordingly, this work designs a Federated learning-based Security (FedSec) strategy that detects MC-SDN attacks. FedSec ensures packet routing services among the nodes by maintaining a flow table that is frequently updated according to the global model knowledge. By executing the FedSec algorithm only on the network-centric nodes selected based on importance measurements, FedSec reduces the system complexity and enhances attack detection and classification accuracy. Finally, the experimental results illustrate the significance of the proposed FedSec strategy across various metrics.
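The aggregation step underlying any FedAvg-style scheme, plus importance-based node selection, can be sketched as follows. These helpers are illustrative stand-ins: the actual FedSec model, its flow-table maintenance, and its importance measurements are not specified in the abstract.

```python
def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client parameter vectors: the global
    model update in plain FedAvg, which federated detection schemes like
    FedSec build on."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def select_nodes(importance, k):
    """Keep only the k highest-importance (network-centric) nodes,
    mirroring the idea of running the algorithm on selected nodes to
    reduce system complexity. The importance metric is an assumption."""
    return sorted(range(len(importance)), key=lambda i: -importance[i])[:k]
```

Training only on the selected nodes shrinks the number of local updates per round while the size-weighted average still reflects each participant's data volume.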
Citations: 0
Enhancing Program Synthesis with Large Language Models Using Many-Objective Grammar-Guided Genetic Programming
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.3390/a17070287
Ning Tao, Anthony Ventresque, Vivek Nallur, Takfarinas Saber
The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods—each with its strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate the correct code due to ambiguity in task specifications, complex programming syntax, and lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code with iterative LLM prompting. Grammar-guided genetic programming (G3P, i.e., one of the top GP methods) has been shown capable of evolving programs that fit a defined Backus–Naur-form (BNF) grammar based on a set of input/output tests that help guide the search process while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still faces issues generating code for complex tasks. A recent study attempting to combine both approaches (G3P and LLMs) by seeding an LLM-generated program into the initial population of the G3P has shown promising results. However, the approach rapidly loses the seeded information over the evolutionary process, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) provide the LLM-generated code as a seed to the evolutionary process following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leverage many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct fitting code. Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks.
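One way to realize "many-objective similarity measures towards the LLM-generated code" is to score each evolved candidate against the seed program on several cheap axes at once. The two measures below (character-level ratio and token-set overlap) are illustrative assumptions, not the objectives actually used in MaOG3P.

```python
import difflib

def similarity_objectives(candidate: str, llm_seed: str):
    """Return a tuple of similarity scores (each in [0, 1]) between an
    evolved program and the LLM-generated seed, usable as extra objectives
    alongside test-case fitness in a many-objective GP setup."""
    char_sim = difflib.SequenceMatcher(None, candidate, llm_seed).ratio()
    cand_tokens, seed_tokens = set(candidate.split()), set(llm_seed.split())
    token_sim = len(cand_tokens & seed_tokens) / max(len(cand_tokens | seed_tokens), 1)
    return char_sim, token_sim
```

Because the seed is likely close to correct code, selection pressure toward it keeps useful seeded structure alive across generations instead of losing it, which is the failure mode of plain seeding noted in the abstract.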
Citations: 0
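The guidance idea in the abstract above — scoring candidate programs both on failed tests and on their distance to the LLM-generated seed — can be sketched as follows. This is a toy illustration only: the token-level normalized Levenshtein metric, the example programs, and the test counts are assumptions, not the paper's actual similarity measures.

```python
# Minimal sketch (not the MaOG3P implementation) of using similarity to an
# LLM-generated seed program as an extra objective alongside test failures.

def levenshtein(a, b):
    """Edit distance between two token sequences (dynamic programming)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def objectives(candidate_tokens, seed_tokens, passed, total):
    """Return a tuple to minimize: (test failures, normalized distance to seed)."""
    failures = total - passed
    dist = levenshtein(candidate_tokens, seed_tokens) / max(
        len(candidate_tokens), len(seed_tokens), 1)
    return failures, dist

# Hypothetical seed from an LLM and a mutated candidate from the population:
seed = "def add ( a , b ) : return a + b".split()
cand = "def add ( a , b ) : return a - b".split()
print(objectives(cand, seed, passed=3, total=10))  # prints (7, 0.08333333333333333)
```

In a many-objective setup, such tuples would feed a Pareto-based selection scheme so that candidates close to the seed are retained even while they still fail tests.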
Fuzzy Fractional Brownian Motion: Review and Extension
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.3390/a17070289
Georgy Urumov, Panagiotis Chountas, Thierry Chaussalet
In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature, these parameters possess a degree of fuzziness or uncertainty. This allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. In this paper, we define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, demonstrates advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and the belief degree, this approach provides a more flexible framework for practitioners to make their investment decisions.
{"title":"Fuzzy Fractional Brownian Motion: Review and Extension","authors":"Georgy Urumov, Panagiotis Chountas, Thierry Chaussalet","doi":"10.3390/a17070289","DOIUrl":"https://doi.org/10.3390/a17070289","url":null,"abstract":"In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature novel, these parameters possess a degree of fuzziness or uncertainty. This allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. In this paper, we define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, demonstrates advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. 
By incorporating fuzzy numbers and the belief degree, this approach provides a more flexible framework for practitioners to make their investment decisions.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141702857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
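The long-range dependence and self-similarity properties mentioned in the abstract follow from the standard fractional Brownian motion covariance function. A small numerical check of both properties (textbook formulas, not code from the paper) is:

```python
# Standard fBm covariance: Cov(B_H(s), B_H(t)) = 0.5*(s^{2H} + t^{2H} - |s-t|^{2H}).
# This snippet verifies two properties the abstract mentions: self-similarity,
# and the sign of increment correlations (long-range dependence for H > 1/2).

def fbm_cov(s, t, H):
    return 0.5 * (s**(2*H) + t**(2*H) - abs(s - t)**(2*H))

def increment_cov(k, H):
    """Covariance of the unit increments B_H(k+1)-B_H(k) and B_H(1)-B_H(0)."""
    return 0.5 * ((k + 1)**(2*H) - 2 * k**(2*H) + abs(k - 1)**(2*H))

H, a = 0.7, 2.0
# Self-similarity: Cov(B_H(a*s), B_H(a*t)) = a^{2H} * Cov(B_H(s), B_H(t))
lhs = fbm_cov(a * 1.0, a * 3.0, H)
rhs = a**(2*H) * fbm_cov(1.0, 3.0, H)
print(abs(lhs - rhs) < 1e-12)                                # True

# Long-range dependence: increment covariances stay positive for H > 1/2 ...
print(all(increment_cov(k, H) > 0 for k in range(1, 50)))    # True
# ... and turn negative for H < 1/2
print(all(increment_cov(k, 0.3) < 0 for k in range(1, 50)))  # True
```

For H = 1/2 the increment covariance vanishes, recovering the independent increments of ordinary Brownian motion.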
Optimal Design of I-PD and PI-D Industrial Controllers Based on Artificial Intelligence Algorithm
IF 1.8 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.3390/a17070288
Olga Shiryayeva, B. Suleimenov, Ye. S. Kulakova
This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional Derivative) and PI-D (Proportional-Integral Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE justifies the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). The problem of synthesis of the multi-input multi-output (MIMO) control system is solved, considering the interconnections due to the decoupling procedure.
{"title":"Optimal Design of I-PD and PI-D Industrial Controllers Based on Artificial Intelligence Algorithm","authors":"Olga Shiryayeva, B. Suleimenov, Ye. S. Kulakova","doi":"10.3390/a17070288","DOIUrl":"https://doi.org/10.3390/a17070288","url":null,"abstract":"This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional Derivative) and PI-D (Proportional-Integral Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE justifies the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). 
The problem of synthesis of the multi-input multi-output (MIMO) control system is solved, considering the interconnections due to the decoupling procedure.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141706765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
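The structural difference between the I-PD and PI-D forms — whether the proportional term acts on the measurement or on the error, with the derivative on the measurement in both — can be illustrated with a toy step-response simulation scored by the ISE index the abstract uses. The first-order plant and the gains below are invented for illustration and are unrelated to the paper's distillation-column system or AIS-tuned values.

```python
# Illustrative discrete-time comparison of I-PD vs. PI-D controller structures
# on a made-up first-order plant, scored by ISE = integral of squared error.

def simulate(structure, Kp=2.0, Ki=1.5, Kd=0.5, tau=1.0,
             dt=0.01, steps=2000, setpoint=1.0):
    y, integ, prev_y, ise = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += Ki * e * dt            # integral always acts on the error
        dy = (y - prev_y) / dt          # derivative on the measurement (both forms)
        if structure == "I-PD":         # proportional also acts on the measurement
            u = -Kp * y + integ - Kd * dy
        elif structure == "PI-D":       # proportional acts on the error
            u = Kp * e + integ - Kd * dy
        else:
            raise ValueError(structure)
        prev_y = y
        y += dt * (-y + u) / tau        # first-order plant: tau * y' = -y + u
        ise += e * e * dt
    return ise

print(round(simulate("I-PD"), 3), round(simulate("PI-D"), 3))
```

Because I-PD removes the proportional "kick" on setpoint changes, its step response is slower and its ISE larger; the trade-off is a smoother control signal, which is why both forms appear in industrial controllers.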
Journal
Algorithms