The development of a sparsity-invariant depth completion model capable of handling varying levels of input depth sparsity is highly desirable in real-world applications. However, existing sparsity-invariant models tend to degrade when the input depth points are extremely sparse. In this paper, we propose a new model that combines the advantageous designs of depth completion and monocular depth estimation to achieve sparsity invariance. Specifically, we construct a dual-branch architecture with one branch dedicated to depth prediction and the other to depth completion. Additionally, we integrate a multi-scale local planar module into the decoders of both branches. Experimental results on the NYU Depth V2 benchmark and the OPPO prototype dataset, captured with the Spot-iToF316 sensor, demonstrate that our model achieves reliable results even when depth information is irregularly distributed, limited, or absent.
Title: A Sparsity-Invariant Model via Unifying Depth Prediction and Completion
Authors: Shuling Wang, Fengze Jiang, Xiaojin Gong
Algorithms, published 2024-07-06. DOI: 10.3390/a17070298
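A dual-branch design of this kind ultimately has to reconcile a dense monocular prediction with a sparsity-guided completion map. As a purely illustrative sketch (not the paper's architecture), a confidence-weighted pixel-wise fusion of the two branch outputs might look like:

```python
# Hypothetical pixel-wise fusion of two depth maps; `confidence` weights the
# completion branch and falls back to the prediction branch where sparse
# depth input was absent. All names and values are illustrative.

def fuse_depth(prediction, completion, confidence):
    """Blend depth maps: high confidence trusts the completion branch."""
    return [[w * c + (1.0 - w) * p
             for p, c, w in zip(p_row, c_row, w_row)]
            for p_row, c_row, w_row in zip(prediction, completion, confidence)]

prediction = [[2.0, 2.0], [2.0, 2.0]]   # dense monocular estimate
completion = [[1.5, 0.0], [0.0, 1.8]]   # reliable only near sparse points
confidence = [[1.0, 0.0], [0.0, 1.0]]   # zero where no depth points exist

fused = fuse_depth(prediction, completion, confidence)
print(fused)  # -> [[1.5, 2.0], [2.0, 1.8]]
```

Where confidence is zero (no nearby depth points), the output degrades gracefully to the prediction branch, which is the intuition behind sparsity invariance.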
Carlos-Antonio Mosqueda-Arvizu, J. Romero-González, Diana-Margarita Córdova-Esparza, Juan R. Terven, Ricardo Chaparro-Sánchez, J. Rodríguez-Reséndíz
The automotive industry has recently adopted multicore processors and microcontrollers to support new features, such as autonomous driving, and to comply with the latest safety standards. However, inter-core communication makes it challenging to meet real-time requirements such as time determinism and low latency. Concurrent access to shared buffers makes the flow of data difficult to predict, degrading algorithm performance. This study explores the integration of the Logical Execution Time (LET) and Time-Division Multiple Access (TDMA) models in multicore embedded systems to address these challenges by synchronizing read/write operations across cores, significantly reducing latency variability and improving system predictability and consistency. Experimental results demonstrate that the integrated approach eliminates data loss and maintains fixed operation rates, achieving a consistent latency of 11 ms. The LET-TDMA method reduces latency variability to approximately 1 ms, with a maximum delay of 1.002 ms and a minimum delay of 1.001 ms, compared to the LET-only method, whose variability ranged from 3.2846 ms to 8.9257 ms across configurations.
Title: Logical Execution Time and Time-Division Multiple Access in Multicore Embedded Systems: A Case Study
Algorithms, published 2024-07-05. DOI: 10.3390/a17070294
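The deterministic latency the study reports comes from TDMA's fixed write slots. A toy sketch of slot-based arbitration, with an invented slot length and schedule, illustrates why the worst-case wait is bounded:

```python
# Toy TDMA arbitration: each core owns a fixed slot in a repeating schedule,
# so the wait for the next write opportunity is bounded and deterministic.
# Slot length and schedule are invented for illustration.

SLOT_MS = 1              # duration of one TDMA slot in ms
SCHEDULE = [0, 1, 2, 3]  # core id owning each slot, repeating forever

def wait_until_slot(core, now_ms):
    """Milliseconds `core` must wait before its next write slot opens."""
    cycle = len(SCHEDULE) * SLOT_MS
    t = now_ms % cycle
    for offset in range(cycle):
        slot = ((t + offset) // SLOT_MS) % len(SCHEDULE)
        if SCHEDULE[slot] == core:
            return offset
    raise ValueError("core not present in schedule")

print([wait_until_slot(2, t) for t in range(4)])  # -> [2, 1, 0, 3]
```

The worst-case wait is one full cycle regardless of what the other cores are doing, which is the property that removes the latency variability of uncoordinated shared-buffer access.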
Cloud service providers deliver computing services on demand using the Infrastructure as a Service (IaaS) model. In a cloud data center, virtualization allows several virtual machines (VMs) to be hosted on a single physical machine (PM). Virtual machine placement (VMP), the assignment of VMs across physical machines, is a crucial process that affects energy draw and resource usage in the data center. Nonetheless, finding an effective placement is challenging owing to factors like hardware heterogeneity and the scalability of cloud data centers. This paper proposes an efficient algorithm named VMP-ER aimed at optimizing power consumption and reducing resource wastage. The algorithm achieves this by decreasing the number of running physical machines and giving priority to energy-efficient servers. Additionally, it improves resource utilization across physical machines, minimizing wastage and ensuring balanced resource allocation.
Title: VMP-ER: An Efficient Virtual Machine Placement Algorithm for Energy and Resources Optimization in Cloud Data Center
Authors: Hasanein D. Rjeib, Gabor Kecskemeti
Algorithms, published 2024-07-05. DOI: 10.3390/a17070295
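The abstract describes a greedy strategy: favor energy-efficient servers and power on as few machines as possible. A simplified single-resource sketch of that idea (capacities and efficiency figures invented; not the published VMP-ER algorithm) could be:

```python
# Greedy VM placement sketch: sort PMs by energy efficiency and pack VMs
# onto the most efficient machines first, powering on a new PM only when
# needed. One CPU dimension only; all numbers are illustrative.

def place_vms(vms, pms):
    """vms: list of cpu demands; pms: list of (capacity, watts_per_unit).
    Returns {pm_index: [vm indices]}, using only the PMs it had to power on."""
    order = sorted(range(len(pms)), key=lambda i: pms[i][1])  # efficient first
    free = {i: pms[i][0] for i in order}
    placement = {}
    for v, demand in sorted(enumerate(vms), key=lambda x: -x[1]):  # big VMs first
        for i in order:
            if free[i] >= demand:
                free[i] -= demand
                placement.setdefault(i, []).append(v)
                break
        else:
            raise RuntimeError("insufficient capacity")
    return placement

vms = [2, 3, 1, 4]                     # CPU demands
pms = [(8, 2.0), (8, 1.0), (8, 3.0)]   # PM 1 is the most efficient
result = place_vms(vms, pms)
```

Here the least efficient PM (index 2) never powers on: the placement concentrates load on efficient machines, which is the energy-saving lever the paper optimizes.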
Differential privacy, a cornerstone of privacy-preserving techniques, plays an indispensable role in ensuring the secure handling and sharing of sensitive data across domains such as census, healthcare, and social networks. Histograms, serving as a visually compelling tool for presenting analytical outcomes, are widely employed in these sectors. Numerous algorithms for publishing histograms under differential privacy (DP) have been developed, striving to balance privacy protection with the provision of useful data. Nonetheless, the pivotal challenge of effectively enhancing the precision of small bins (intervals that are narrowly defined or contain relatively few data points) within histograms has yet to receive adequate attention and in-depth investigation. In standard DP histogram publishing, adding noise without regard for bin size can leave small bins disproportionately influenced by noise, potentially severely impairing the overall accuracy of the histogram. In response to this challenge, this paper introduces the SReB_GCA sanitization algorithm, designed to enhance the accuracy of small bins in DP histograms. SReB_GCA sorts the bins from smallest to largest and applies a greedy grouping strategy, with a predefined lower bound on the mean relative error required for a bin to be included in a group. Our theoretical analysis reveals that sorting bins in ascending order prior to grouping effectively prioritizes the accuracy of smaller bins. SReB_GCA ensures strict ϵ-DP compliance and strikes a careful balance between reconstruction error and noise error, thereby not only improving the accuracy of small bins but also approximately optimizing the mean relative error of the entire histogram.
To validate the efficiency of our proposed SReB_GCA method, we conducted extensive experiments using four diverse datasets, including two real-life datasets and two synthetic ones. The experimental results, quantified by the Kullback–Leibler Divergence (KLD), show that the SReB_GCA algorithm achieves substantial performance enhancement compared to the baseline method (DP_BASE) and several other established approaches for differential privacy histogram publication.
Title: A Histogram Publishing Method under Differential Privacy That Involves Balancing Small-Bin Availability First
Authors: Jianzhang Chen, Shuo Zhou, Jie Qiu, Yixin Xu, Bozhe Zeng, Wanchuan Fang, Xiangying Chen, Yipeng Huang, Zhengquan Xu, Youqin Chen
Algorithms, published 2024-07-04. DOI: 10.3390/a17070293
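The two mechanisms the abstract combines, Laplace noise for ϵ-DP and grouping of ascending-sorted bins so that small bins share one noisy total, can be sketched as follows; the fixed group size here is an illustrative stand-in for the paper's mean-relative-error grouping criterion:

```python
import math
import random

# Sketch: (1) Laplace mechanism for eps-DP counts, (2) sort bins ascending
# and group them so each group gets one noise draw, amortized over its bins.
# Grouping trades reconstruction error (bins replaced by the group mean)
# against noise error, which is the balance the abstract describes.

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def publish(bins, eps, group_size=3):
    order = sorted(range(len(bins)), key=lambda i: bins[i])  # small bins first
    out = [0.0] * len(bins)
    for g in range(0, len(order), group_size):
        group = order[g:g + group_size]
        noisy_total = sum(bins[i] for i in group) + laplace_noise(1.0 / eps)
        for i in group:                 # reconstruction: group mean per bin
            out[i] = noisy_total / len(group)
    return out

bins = [1, 2, 3, 100, 200, 300]
released = publish(bins, eps=1000.0)
```

With a small group the three tiny bins share a single noise draw instead of each absorbing one, which is why small-bin accuracy improves relative to naive per-bin noising.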
Recent advancements in text-to-speech (TTS) models have aimed to streamline the traditional two-stage pipeline into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to improve text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent TTS advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder in the latent space. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method enables real-time generation of natural Kurdish speech audio with flexible pitch and rhythm. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with a one-stage system and other two-stage systems, as assessed through a subjective human evaluation.
Title: Central Kurdish Text-to-Speech Synthesis with Novel End-to-End Transformer Training
Authors: Hawraz A. Ahmad, Rebwar Khalid Hamad
Algorithms, published 2024-07-03. DOI: 10.3390/a17070292
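The alignment of a prior with a posterior distribution in VAE-style training is typically driven by a KL-divergence term. For diagonal Gaussians this term has a closed form, sketched here as generic background rather than the paper's exact loss:

```python
import math

# KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed
# over dimensions: the standard distribution-alignment term in VAE training.

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    kl = 0.0
    for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p):
        kl += 0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
    return kl

# Identical distributions have zero divergence:
print(kl_diag_gaussians([0.0], [1.0], [0.0], [1.0]))  # -> 0.0
```

Driving this quantity toward zero pulls the text encoder's posterior onto the waveform encoder's prior, which is the "aligning latent distributions" step the abstract refers to.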
In a landscape where interest in prime sieving has waned and practitioners are few, we are still hoping for a domain renaissance, fueled by a resurgence of interest and a fresh wave of innovation. Building upon years of extensive research and experimentation, this article aims to contribute by presenting a heterogeneous compilation of generic tweaks and boosters aimed at revitalizing prime sieving methodologies. Drawing from a wealth of resurfaced knowledge and refined sieving algorithms, techniques, and optimizations, we unveil a diverse array of strategies designed to elevate the efficiency, accuracy, and scalability of prime sieving algorithms; these tweaks and boosters represent a synthesis of old wisdom and new discoveries, offering practical guidance for researchers and practitioners alike.
Title: Prime Time Tactics—Sieve Tweaks and Boosters
Authors: Mircea Ghidarcea, Decebal Popescu
Algorithms, published 2024-07-03. DOI: 10.3390/a17070291
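By way of a concrete example of the genre of tweak the survey compiles: a classic booster is to sieve over odd numbers only, halving memory and marking work relative to a naive Eratosthenes sieve.

```python
# Odd-only sieve of Eratosthenes: flags cover 3, 5, 7, ... so even numbers
# cost nothing; 2 is appended at the end. Marking starts at p*p, another
# standard tweak, since smaller composites were marked by smaller primes.

def odd_sieve(n):
    """Primes up to n using a flag array over odd numbers only."""
    if n < 2:
        return []
    size = (n - 1) // 2                 # flags for 3, 5, 7, ...
    is_prime = [True] * size
    for i in range(size):
        p = 2 * i + 3
        if p * p > n:
            break
        if is_prime[i]:
            for j in range((p * p - 3) // 2, size, p):
                is_prime[j] = False
    return [2] + [2 * i + 3 for i in range(size) if is_prime[i]]

print(odd_sieve(30))  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The same idea generalizes to wheel factorization (skipping multiples of 2, 3, and 5), trading a more complex index mapping for a further constant-factor speedup.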
Multi-controller Software-Defined Networking (MC-SDN) is a promising architecture for managing the complexity and scale of evolving modern network environments. Despite the rich operational flexibility of MC-SDN, it is imperative to protect the network deployment against potential vulnerabilities that lead to misuse and malicious activities on data planes. Security holes in the MC-SDN significantly impact network survivability, leaving the data plane vulnerable to potential security threats and unintended consequences. Accordingly, this work designs a Federated learning-based Security (FedSec) strategy that detects attacks on the MC-SDN. FedSec ensures packet routing services among the nodes by maintaining a flow table frequently updated according to the global model knowledge. By executing the FedSec algorithm only on network-centric nodes selected based on importance measurements, FedSec reduces system complexity and enhances attack detection and classification accuracy. Finally, the experimental results illustrate the significance of the proposed FedSec strategy across various metrics.
Title: Federated Learning-Based Security Attack Detection for Multi-Controller Software-Defined Networks
Authors: A. Alkhamisi, Iyad Katib, Seyed M. Buhari
Algorithms, published 2024-07-02. DOI: 10.3390/a17070290
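FedSec builds a global model from locally trained ones; the canonical aggregation step for such schemes is federated averaging. A generic sketch (not the paper's exact algorithm; the weights below are illustrative stand-ins for its importance measurements):

```python
# Federated-averaging step: combine per-node parameter vectors into one
# global model, weighting each node's contribution. Node data never leaves
# the node; only parameters are shared, which is the privacy argument for
# federated detection schemes like the one described above.

def fed_avg(models, weights):
    """Weighted average of per-node parameter vectors into a global model."""
    total = sum(weights)
    return [sum(w * m[k] for m, w in zip(models, weights)) / total
            for k in range(len(models[0]))]

node_models = [[0.2, 1.0], [0.4, 3.0]]  # parameters trained on two nodes
node_weights = [1.0, 3.0]               # e.g. node importance / data volume
global_model = fed_avg(node_models, node_weights)
print(global_model)  # ≈ [0.35, 2.5]
```

Running aggregation only over a subset of important nodes, as the abstract describes, simply restricts `models` and `weights` to that subset.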
Ning Tao, Anthony Ventresque, Vivek Nallur, Takfarinas Saber
The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods, each with its strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate correct code due to ambiguity in task specifications, complex programming syntax, and lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code through iterative LLM prompting. Grammar-guided genetic programming (G3P), one of the leading GP methods, has been shown capable of evolving programs that fit a defined Backus–Naur form (BNF) grammar, based on a set of input/output tests that guide the search while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still struggles to generate code for complex tasks. A recent study attempting to combine the two approaches by seeding an LLM-generated program into the initial population of G3P has shown promising results. However, that approach rapidly loses the seeded information over the course of evolution, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) provide the LLM-generated code as a seed to the evolutionary process, following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leverage many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct code.
Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks.
Title: Enhancing Program Synthesis with Large Language Models Using Many-Objective Grammar-Guided Genetic Programming
Algorithms, published 2024-07-01. DOI: 10.3390/a17070287
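The grammar-mapping phase relies on the standard G3P idea that an integer genotype deterministically selects BNF productions, so every decoded program is grammar-conforming by construction. A toy grammar and mapper, illustrative only:

```python
# Grammatical-evolution-style mapping: each integer "codon" picks a
# production for the leftmost non-terminal, modulo the number of choices.
# Any program decoded this way stays inside the grammar by construction,
# which is the safety property the abstract emphasizes. Toy grammar only.

GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>": [["+"], ["*"]],
}

def map_genotype(codons, start="<expr>", max_steps=100):
    seq = [start]
    i = 0
    for _ in range(max_steps):
        nt = next((s for s in seq if s in GRAMMAR), None)
        if nt is None:                       # fully expanded: emit program
            return "".join(seq)
        prods = GRAMMAR[nt]
        choice = prods[codons[i % len(codons)] % len(prods)]  # wrap codons
        i += 1
        pos = seq.index(nt)
        seq[pos:pos + 1] = choice            # expand leftmost non-terminal
    raise RuntimeError("mapping did not terminate")

print(map_genotype([0, 1, 0, 1, 2]))  # -> "x+x"
```

Seeding an LLM program then amounts to finding a codon sequence that decodes to (an approximation of) that program, after which normal evolutionary operators apply.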
In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature, these parameters possess a degree of fuzziness or uncertainty. Treating them as fuzzy allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. We define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, the paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, exhibits advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and belief degrees, this approach gives practitioners a more flexible framework for making investment decisions.
{"title":"Fuzzy Fractional Brownian Motion: Review and Extension","authors":"Georgy Urumov, Panagiotis Chountas, Thierry Chaussalet","doi":"10.3390/a17070289","DOIUrl":"https://doi.org/10.3390/a17070289","url":null,"abstract":"In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the recent literature, these parameters possess a degree of fuzziness or uncertainty. This allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. We define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, exhibits advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and belief degrees, this approach gives practitioners a more flexible framework for making investment decisions.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141702857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
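The pricing idea described in the abstract above can be made concrete with a small Monte Carlo experiment. The sketch below is illustrative only, not the authors' implementation: it simulates fractional Brownian motion by Cholesky factorization of its covariance, adds Merton-style compound Poisson jumps, and treats volatility as a triangular fuzzy number whose alpha-cut produces a call-price interval rather than a single price. All function names and parameter values are assumptions for demonstration, and the drift term is deliberately simplified.

```python
import numpy as np

def simulate_fbm(n_steps, T, H, rng):
    """Sample a fractional Brownian motion path on (0, T] via Cholesky
    factorization of the fBm covariance with Hurst exponent H."""
    t = np.linspace(T / n_steps, T, n_steps)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)

def fuzzy_price_interval(S0, K, r, T, H, sigma_fuzzy, lam, jump_mu, jump_sigma,
                         alpha, n_paths=2000, n_steps=64, seed=0):
    """Monte Carlo call-price interval when volatility is a triangular fuzzy
    number (a, b, c), evaluated at membership level alpha (an alpha-cut)."""
    a, b, c = sigma_fuzzy
    lo_sigma = a + alpha * (b - a)   # left endpoint of the alpha-cut
    hi_sigma = c - alpha * (c - b)   # right endpoint of the alpha-cut
    prices = []
    for sigma in (lo_sigma, hi_sigma):
        rng = np.random.default_rng(seed)  # common random numbers for both ends
        payoffs = np.empty(n_paths)
        for i in range(n_paths):
            bh = simulate_fbm(n_steps, T, H, rng)          # fBm driver
            n_jumps = rng.poisson(lam * T)                  # Merton-style jumps
            jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
            ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * bh[-1] + jumps)
            payoffs[i] = max(ST - K, 0.0)
        prices.append(np.exp(-r * T) * payoffs.mean())
    return min(prices), max(prices)
```

The interval width reflects the decision-maker's belief degree: alpha near 1 collapses toward the crisp price at the modal volatility, while alpha near 0 spans the full support of the fuzzy number.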
This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional-Derivative) and PI-D (Proportional-Integral-Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE (integral of squared error) confirms the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). The synthesis problem for the multi-input multi-output (MIMO) control system is solved, with loop interconnections accounted for through a decoupling procedure.
{"title":"Optimal Design of I-PD and PI-D Industrial Controllers Based on Artificial Intelligence Algorithm","authors":"Olga Shiryayeva, B. Suleimenov, Ye. S. Kulakova","doi":"10.3390/a17070288","DOIUrl":"https://doi.org/10.3390/a17070288","url":null,"abstract":"This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional-Derivative) and PI-D (Proportional-Integral-Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE (integral of squared error) confirms the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). The synthesis problem for the multi-input multi-output (MIMO) control system is solved, with loop interconnections accounted for through a decoupling procedure.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141706765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
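To make the quoted ISE criterion concrete, the following sketch simulates an I-PD loop around a simple first-order plant and returns the integral of squared error. This is an illustration under assumed plant and gain values, not the paper's AIS-tuned distillation-column system: in the I-PD structure only the integral term acts on the error, while the proportional and derivative terms act on the measured output, which avoids the setpoint kick of a textbook PID.

```python
def simulate_i_pd(Kp, Ki, Kd, T=20.0, dt=0.01, tau=2.0, gain=1.0, setpoint=1.0):
    """Step response of an I-PD loop around the first-order plant
    G(s) = gain / (tau*s + 1), integrated with explicit Euler.
    Returns ISE, the integral of squared error over [0, T]."""
    n = int(T / dt)
    y = 0.0        # plant output
    y_prev = 0.0   # previous output, for the derivative term
    integ = 0.0    # integral of the error
    ise = 0.0
    for _ in range(n):
        e = setpoint - y
        integ += e * dt
        dy = (y - y_prev) / dt
        # I-PD law: integral on the error, P and D on the measurement
        u = Ki * integ - Kp * y - Kd * dy
        y_prev = y
        y += dt * (gain * u - y) / tau   # Euler step of the plant dynamics
        ise += e * e * dt                # accumulate squared error
    return ise
```

Running this for an AIS-proposed gain set and a conventionally tuned one, and comparing the two returned ISE values, is exactly the kind of comparison the abstract reports with ISE1 = 1.865 and ISE2 = 1.56.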