Prince Mahmud, Anisur Rahman, Kamrul Hasan Talukder
Pattern matching algorithms have gained considerable importance in computer science, primarily because they are used in domains such as computational biology, video retrieval, intrusion detection systems, and fraud detection. Finding one or more occurrences of a pattern in a given text is known as pattern matching. Two key metrics for judging how well exact pattern matching algorithms perform are the total number of attempts and the number of character comparisons made during the matching process. The primary focus of our proposed method is to reduce both of these wherever possible. Despite their speed, hash-based pattern matching algorithms may suffer from hash collisions. This research improves the Efficient Hashing Method (EHM) algorithm. Despite its effectiveness, the EHM algorithm spends a lot of time in the preprocessing phase and still generates some hash collisions. A novel hashing method is proposed that reduces both the preprocessing time and the hash collisions of the EHM algorithm. We devised the Hashing Approach for Pattern Matching (HAPM) algorithm by combining the best parts of the EHM and Quick Search (QS) algorithms and adding a mechanism to avoid hash collisions. The preprocessing step of this algorithm combines the bad character table from the QS algorithm, the hashing strategy from the EHM algorithm, and the collision-reducing mechanism. To analyze the performance of our HAPM algorithm, we used three types of datasets: E. coli, DNA sequences, and protein sequences. We compared our proposed method against six algorithms from the literature. The Hash-q with Unique FNG (HqUF) algorithm was compared only on the E. coli and DNA datasets because it creates unique bits for DNA sequences. Our proposed HAPM algorithm also overcomes the problems of the HqUF algorithm. The new method outperforms earlier ones in average runtime, number of attempts, and character comparisons for both long and short text patterns, although it performed worse on some short patterns.
{"title":"An Improved Hashing Approach for Biological Sequence to Solve Exact Pattern Matching Problems","authors":"Prince Mahmud, Anisur Rahman, Kamrul Hasan Talukder","doi":"10.1155/2023/3278505","DOIUrl":"https://doi.org/10.1155/2023/3278505","url":null,"abstract":"Pattern matching algorithms have gained a lot of importance in computer science, primarily because they are used in various domains such as computational biology, video retrieval, intrusion detection systems, and fraud detection. Finding one or more patterns in a given text is known as pattern matching. Two important things that are used to judge how well exact pattern matching algorithms work are the total number of attempts and the character comparisons that are made during the matching process. The primary focus of our proposed method is reducing the size of both components wherever possible. Despite sprinting, hash-based pattern matching algorithms may have hash collisions. The Efficient Hashing Method (EHM) algorithm is improved in this research. Despite the EHM algorithm’s effectiveness, it takes a lot of time in the preprocessing phase, and some hash collisions are generated. A novel hashing method has been proposed, which has reduced the preprocessing time and hash collision of the EHM algorithm. We devised the Hashing Approach for Pattern Matching (HAPM) algorithm by taking the best parts of the EHM and Quick Search (QS) algorithms and adding a way to avoid hash collisions. The preprocessing step of this algorithm combines the bad character table from the QS algorithm, the hashing strategy from the EHM algorithm, and the collision-reducing mechanism. To analyze the performance of our HAPM algorithm, we have used three types of datasets: E. coli, DNA sequences, and protein sequences. We looked at six algorithms discussed in the literature and compared our proposed method. The Hash-q with Unique FNG (HqUF) algorithm was only compared with E. coli and DNA datasets because it creates unique bits for DNA sequences. Our proposed HAPM algorithm also overcomes the problems of the HqUF algorithm. The new method beats older ones regarding average runtime, number of attempts, and character comparisons for long and short text patterns, though it did worse on some short patterns.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139257629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spherical fuzzy soft set (SFSS) is a generalized soft set model that is more sensible, practical, and exact. Since SFSSs are a very natural generalization, introducing uncertainty measures for them is important. In this paper, the concepts of entropy, similarity, and distance measures are defined for SFSSs, and a characterization of spherical fuzzy soft entropy is proposed. Further, the relationships between entropy and similarity measures, as well as between entropy and distance measures, are discussed in detail. As an application, an algorithm based on an improved technique for order preference by similarity to an ideal solution (TOPSIS) and the proposed entropy measure of SFSSs is put forward to solve multiple attribute group decision-making problems. Finally, an illustrative example is used to demonstrate the effectiveness of the recommended algorithm.
{"title":"TOPSIS Method Based on Entropy Measure for Solving Multiple-Attribute Group Decision-Making Problems with Spherical Fuzzy Soft Information","authors":"Perveen P. A. Fathima, Sunil Jacob John, T. Baiju","doi":"10.1155/2023/7927541","DOIUrl":"https://doi.org/10.1155/2023/7927541","url":null,"abstract":"A spherical fuzzy soft set (SFSS) is a generalized soft set model, which is more sensible, practical, and exact. Being a very natural generalization, introducing uncertainty measures of SFSSs seems to be very important. In this paper, the concept of entropy, similarity, and distance measures are defined for the SFSSs and also, a characterization of spherical fuzzy soft entropy is proposed. Further, the relationship between entropy and similarity measures as well as entropy and distance measures are discussed in detail. As an application, an algorithm is proposed based on the improved technique for order preference by similarity to an ideal solution (TOPSIS) and the proposed entropy measure of SFSSs, to solve the multiple attribute group decision-making problems. Finally, an illustrative example is used to prove the effectiveness of the recommended algorithm.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2023-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139260978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New mathematical theories are being increasingly valued for their versatility in building intelligent systems that support decision-making and diagnosis in different real-world situations. This is especially relevant in the health sciences, where these theories have great potential for designing effective solutions that improve people's quality of life. In recent years, several prediction studies have been performed on indicators of vocal dysfunction. However, the rapid increase in new prediction studies resulting from advancing medical technology has created the need for reliable methods to extract clinically meaningful knowledge, since complex and nonlinear interactions between these markers naturally exist. There is a growing need to focus the analysis not only on knowledge extraction but also on data transformation and treatment to enhance the quality of healthcare delivery. Mathematical tools such as fuzzy set theory and soft set theory have been successfully applied for data analysis in many real-life problems where vagueness and uncertainty are present in the data. These theories help improve data interpretability and deal with the inherent uncertainty of real-world data, facilitating the decision-making process based on the available information. In this paper, we use soft set theory and fuzzy set theory to develop a prediction system based on knowledge from phonoaudiology. We use information such as patient age, fundamental frequency, and perturbation index to estimate the risk of voice loss in patients. Our goal is to help the speech-language pathologist determine whether the patient requires intervention in the presence of an at-risk or altered voice result, taking into account that excessive and inappropriate vocal behavior can result in organic manifestations.
{"title":"Fuzzy Set and Soft Set Theories as Tools for Vocal Risk Diagnosis","authors":"José Sanabria, Marinela Álvarez, O. Ferrer","doi":"10.1155/2023/5525978","DOIUrl":"https://doi.org/10.1155/2023/5525978","url":null,"abstract":"New mathematical theories are being increasingly valued due to their versatility in the application of intelligent systems that allow decision-making and diagnosis in different real-world situations. This is especially relevant in the field of health sciences, where these theories have great potential to design effective solutions that improve people’s quality of life. In recent years, several prediction studies have been performed as indicators of vocal dysfunction. However, the rapid increase in new prediction studies as a result of advancing medical technology has dictated the need to develop reliable methods for the extraction of clinically meaningful knowledge, where complex and nonlinear interactions between these markers naturally exist. There is a growing need to focus the analysis not only on knowledge extraction but also on data transformation and treatment to enhance the quality of healthcare delivery. Mathematical tools such as fuzzy set theory and soft set theory have been successfully applied for data analysis in many real-life problems where there is presence of vagueness and uncertainty in the data. These theories contribute to improving data interpretability and dealing with the inherent uncertainty of real-world data, facilitating the decision-making process based on the available information. In this paper, we use soft set theory and fuzzy set theory to develop a prediction system based on knowledge from phonoaudiology. We use information such as patient age, fundamental frequency, and perturbation index to estimate the risk of voice loss in patients. Our goal is to help the speech-language pathologist in determining whether or not the patient requires intervention in the presence of a voice at risk or an altered voice result, taking into account that excessive and inappropriate voice behavior can result in organic manifestations.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139275166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moiz Qureshi, Arsalan Khan, Muhammad Daniyal, Kassim Tawiah, Zahid Mehmood
Background. In economic theory, a steady consumer price index (CPI) and its associated low inflation rate (IR) are much preferred to volatile ones. The CPI is considered a major variable in measuring a country's IR. These indices track price changes and carry major significance for monetary policy decisions. In this study, different conventional and machine learning methodologies have been applied to model and forecast the CPI of Pakistan. Methods. Pakistan's yearly CPI data from 1960 to 2021 were modelled using seasonal autoregressive integrated moving average (SARIMA), neural network autoregressive (NNAR), and multilayer perceptron (MLP) models. Several forms of the models were compared using the root mean square error (RMSE), mean square error (MSE), and mean absolute percentage error (MAPE) as the key performance indicators (KPIs). Results. The 20-hidden-layered MLP model emerged as the best-performing model for CPI forecasting based on the KPIs. Forecasted values of Pakistan's CPI from 2022 to 2031 show a steep increase, which is unwelcome for consumers and for economic management. Conclusion. If not addressed, the rising CPI trend will erode purchasing power and drive commodity prices higher. It is recommended that the government put vibrant policies in place to address this alarming situation.
{"title":"A Comparative Analysis of Traditional SARIMA and Machine Learning Models for CPI Data Modelling in Pakistan","authors":"Moiz Qureshi, Arsalan Khan, Muhammad Daniyal, Kassim Tawiah, Zahid Mehmood","doi":"10.1155/2023/3236617","DOIUrl":"https://doi.org/10.1155/2023/3236617","url":null,"abstract":"Background. In economic theory, a steady consumer price index (CPI) and its associated low inflation rate (IR) are very much preferred to a volatile one. CPI is considered a major variable in measuring the IR of a country. These indices are those of price changes and have major significance in monetary policy decisions. In this study, different conventional and machine learning methodologies have been applied to model and forecast the CPI of Pakistan. Methods. Pakistan’s yearly CPI data from 1960 to 2021 were modelled using seasonal autoregressive moving average (SARIMA), neural network autoregressive (NNAR), and multilayer perceptron (MLP) models. Several forms of the models were compared by employing the root mean square error (RMSE), mean square error (MSE), and mean absolute percentage error (MAPE) as the key performance indicators (KPIs). Results. The 20-hidden-layered MLP model appeared as the best-performing model for CPI forecasting based on the KPIs. Forecasted values of Pakistan’s CPI from 2022 to 2031 showed an astronomical increase in value which is unpleasant to consumers and economic management. Conclusion. The increasing CPI trend observed if not addressed will trigger a rising purchasing power, thereby causing higher commodity prices. It is recommended that the government put vibrant policies in place to address this alarming situation.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135432915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The stochastic skiving stock problem (SSP), a relatively new combinatorial optimization problem, is considered in this paper. The conventional SSP seeks the optimum structure that skives small pieces of different sizes side by side to form as many large items (products) as possible that meet a desired width. This study examines a multiproduct case of the SSP under uncertain demand and waste rate, including products of different widths. This stochastic version of the SSP considers a random demand for each product and a random waste rate during production. A two-stage stochastic programming approach with a recourse action is used to study this stochastic NP-hard problem on a large scale. The problem is solved in two phases. In the first phase, the dragonfly algorithm constructs minimal patterns that serve as input for the next phase. The second phase performs sample-average approximation, solving the stochastic production problem. Results indicate that the two-phase heuristic approach is highly efficient in computational run time and provides robust solutions with an optimality gap of 0.3% in the worst-case scenario. In addition, we compare the performance of the dragonfly algorithm (DA) with particle swarm optimization (PSO) for pattern generation. Benchmarks indicate that the DA produces more robust minimal pattern sets as the tightness of the problem increases.
{"title":"A Two-Phase Pattern Generation and Production Planning Procedure for the Stochastic Skiving Process","authors":"Tolga Kudret Karaca, Funda Samanlioglu, Ayca Altay","doi":"10.1155/2023/9918022","DOIUrl":"https://doi.org/10.1155/2023/9918022","url":null,"abstract":"The stochastic skiving stock problem (SSP), a relatively new combinatorial optimization problem, is considered in this paper. The conventional SSP seeks to determine the optimum structure that skives small pieces of different sizes side by side to form as many large items (products) as possible that meet a desired width. This study studies a multiproduct case for the SSP under uncertain demand and waste rate, including products of different widths. This stochastic version of the SSP considers a random demand for each product and a random waste rate during production. A two-stage stochastic programming approach with a recourse action is implemented to study this stochastic <math xmlns=\"http://www.w3.org/1998/Math/MathML\" id=\"M1\"> <mi mathvariant=\"script\">N</mi> <mi mathvariant=\"script\">P</mi> </math> -hard problem on a large scale. Furthermore, the problem is solved in two phases. In the first phase, the dragonfly algorithm constructs minimal patterns that serve as an input for the next phase. The second phase performs sample-average approximation, solving the stochastic production problem. Results indicate that the two-phase heuristic approach is highly efficient regarding computational run time and provides robust solutions with an optimality gap of 0.3% for the worst-case scenario. In addition, we also compare the performance of the dragonfly algorithm (DA) to the particle swarm optimization (PSO) for pattern generation. Benchmarks indicate that the DA produces more robust minimal pattern sets as the tightness of the problem increases.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135679177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohammad A. Shbool, Omar S. Arabeyyat, Ammar Al-Bazi, Abeer Al-Hyari, Arwa Salem, Thana’ Abu-Hmaid, Malak Ali
As the COVID-19 pandemic has afflicted the globe, health systems worldwide have been significantly affected. The pandemic has impacted many sectors in the Kingdom of Jordan, including health. Emergency departments (EDs) are among the most demanded hospital resources under normal conditions and become critical during crises, which place heavy pressure on health systems. Managing health systems efficiently and achieving the best planning and allocation of ED resources therefore becomes crucial to improving their capacity to absorb a crisis's impact. Knowing the critical factors that affect the prediction of patient length of stay is essential to reducing the risks of prolonged waiting and crowding inside EDs, because attention can then be focused on these factors and the effect of each can be analyzed. This research aims to determine the critical factors, i.e., the predictor variables, that predict the outcome: the length of stay (LOS). Patients' length of stay in EDs is categorized by waiting time duration as low, medium, or high using supervised machine learning (ML) approaches, and unsupervised algorithms have also been applied to group patients' length of stay in local EDs in the Kingdom of Jordan. The Arab Medical Centre Hospital is selected as a case study to assess the performance of the proposed ML model. Data spanning 22 months, covering the periods before and after COVID-19, are used to train the proposed feedforward network. The proposed model is compared with other ML approaches to demonstrate its superiority. Comparative and correlation analyses are also conducted on the considered attributes (inputs) to help classify the LOS in the ED. For this specific problem, the best-performing algorithms are tree-based models such as the decision stump, REP tree, and random forest, together with the multilayer perceptron (with a batch size of 50 and a learning rate of 0.001). Results showed better performance in terms of accuracy and ease of implementation.
{"title":"Machine Learning Approaches to Predict Patient’s Length of Stay in Emergency Department","authors":"Mohammad A. Shbool, Omar S. Arabeyyat, Ammar Al-Bazi, Abeer Al-Hyari, Arwa Salem, Thana’ Abu-Hmaid, Malak Ali","doi":"10.1155/2023/8063846","DOIUrl":"https://doi.org/10.1155/2023/8063846","url":null,"abstract":"As the COVID-19 pandemic has afflicted the globe, health systems worldwide have also been significantly affected. This pandemic has impacted many sectors, including health in the Kingdom of Jordan. Crises that put heavy pressure on the health systems’ shoulders include the emergency departments (ED), the most demanded hospital resources during normal conditions, and critical during crises. However, managing the health systems efficiently and achieving the best planning and allocation of their EDs’ resources becomes crucial to improve their capabilities to accommodate the crisis’s impact. Knowing critical factors affecting the patient length of stay prediction is critical to reducing the risks of prolonged waiting and clustering inside EDs. That is, by focusing on these factors and analyzing the effect of each. This research aims to determine the critical factors that predict the outcome: the length of stay, i.e., the predictor variables. Therefore, patients’ length of stay in EDs across waiting time duration is categorized as (low, medium, and high) using supervised machine learning (ML) approaches. Unsupervised algorithms have been applied to classify the patient’s length of stay in local EDs in the Kingdom of Jordan. The Arab Medical Centre Hospital is selected as a case study to justify the performance of the proposed ML model. Data that spans a time interval of 22 months, covering the period before and after COVID-19, is used to train the proposed feedforward network. The proposed model is compared with other ML approaches to justify its superiority. Also, comparative and correlation analyses are conducted on the considered attributes (inputs) to help classify the LOS and the patient’s length of stay in the ED. The best algorithms to be used are the trees such as the decision stump, REB tree, and Random Forest and the multilayer perceptron (with batch sizes of 50 and 0.001 learning rate) for this specific problem. Results showed better performance in terms of accuracy and easiness of implementation.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136233976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maryam Munawar, Iram Noreen, Raed S. Alharthi, Nadeem Sarwar
In today’s digital landscape, video and image data have emerged as pivotal and widely adopted means of communication. They serve not only as a ubiquitous mode of conveying information but also as indispensable evidential and substantiating elements across diverse domains, encompassing law enforcement, forensic investigations, media, and numerous others. This study employs a systematic literature review (SLR) methodology to investigate the existing body of knowledge. A review and analysis of 90 primary research studies was conducted, unveiling a range of research methodologies instrumental in detecting forged videos. The study’s findings cover several such methodologies, including deep neural networks, convolutional neural networks, Deepfake analysis, watermarking networks, and clustering, among others. This array of techniques highlights the breadth of the field and emphasizes the need to combat the evolving challenges posed by forged video content. The study shows that videos are susceptible to an array of manipulations, with key issues including frame insertion, deletion, and duplication due to their dynamic nature. The main open challenges in the domain are copy-move forgery, object-based forgery, and frame-based forgery. This study serves as a comprehensive repository of the latest advancements and techniques, structured and summarized to benefit researchers and practitioners in the field, and it elucidates the complex challenges inherent to video forensics.
{"title":"Forged Video Detection Using Deep Learning: A SLR","authors":"Maryam Munawar, Iram Noreen, Raed S. Alharthi, Nadeem Sarwar","doi":"10.1155/2023/6661192","DOIUrl":"https://doi.org/10.1155/2023/6661192","url":null,"abstract":"In today’s digital landscape, video and image data have emerged as pivotal and widely adopted means of communication. They serve not only as a ubiquitous mode of conveying information but also as indispensable evidential and substantiating elements across diverse domains, encompassing law enforcement, forensic investigations, media, and numerous others. This study employs a systematic literature review (SLR) methodology to meticulously investigate the existing body of knowledge. An exhaustive review and analysis of precisely 90 primary research studies were conducted, unveiling a range of research methodologies instrumental in detecting forged videos. The study’s findings shed light on several research methodologies integral to the detection of forged videos, including deep neural networks, convolutional neural networks, Deepfake analysis, watermarking networks, and clustering, amongst others. This array of techniques highlights the field and emphasizes the need to combat the evolving challenges posed by forged video content. The study shows that videos are susceptible to an array of manipulations, with key issues including frame insertion, deletion, and duplication due to their dynamic nature. The main limitations of the domain are copy-move forgery, object-based forgery, and frame-based forgery. This study serves as a comprehensive repository of the latest advancements and techniques, structured, and summarized to benefit researchers and practitioners in the field. It elucidates the complex challenges inherent to video forensics.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135169448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mhammed Benhayoun, Mouhcine Razi, Anas Mansouri, Ali Ahaitouf
Ultra-reliable low-latency communications (URLLC) are designed for applications such as self-driving cars and telesurgery that require responses within milliseconds and are very sensitive to transmission errors. To match the computational complexity of LDPC decoding algorithms to URLLC applications on IoT devices with very limited computational resources, this paper presents a new parallel, low-latency software implementation of an LDPC decoder. First, a decoding algorithm optimization and a compact data structure are proposed. Next, a parallel software implementation is carried out on ARM multicore platforms to evaluate the latency of the proposed optimization. The results highlight a 50% reduction in the memory size requirement and a threefold speedup in processing time compared with previous software decoder implementations. The decoding latency reached on the parallel processing platform is 150 μs for 288 bits, with a bit error ratio of 3.4 × 10⁻⁹.
{"title":"Embedded Parallel Implementation of LDPC Decoder for Ultra-Reliable Low-Latency Communications","authors":"Mhammed Benhayoun, Mouhcine Razi, Anas Mansouri, Ali Ahaitouf","doi":"10.1155/2023/5573438","DOIUrl":"https://doi.org/10.1155/2023/5573438","url":null,"abstract":"Ultra-reliable low-latency communications, URLLC, are designed for applications such as self-driving cars and telesurgery requiring a response in milliseconds and are very sensitive to transmission errors. To match the computational complexity of LDPC decoding algorithms to URLLC applications on IoT devices having very limited computational resources, this paper presents a new parallel and low-latency software implementation of the LDPC decoder. First, a decoding algorithm optimization and a compact data structure are proposed. Next, a parallel software implementation is performed on ARM multicore platforms in order to evaluate the latency of the proposed optimization. The synthesis results highlight a reduction in the memory size requirement by 50% and a three-time speedup in terms of processing time when compared to previous software decoder implementations. The reached decoding latency on the parallel processing platform is 150 μs for 288 bits with a bit error ratio of 3.410–9.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135512015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emmanuel Gbenga Dada, David Opeoluwa Oyewola, Stephen Bassi Joseph, Onyeka Emebo, Olugbenga Oluseun Oluwagbemi
Facial expressions play a significant role in nonverbal communication because they help better represent the inner emotions of individuals. Emotions can depict the state of health and internal wellbeing of individuals. Facial expression detection has been a hot research topic in the last couple of years. The motivation for applying the convolutional neural network-10 (CNN-10) model to facial expression recognition stems from its ability to detect spatial features, manage translation invariance, learn expressive feature representations, gather global context, and achieve scalability, adaptability, and interoperability with transfer learning methods. This model offers a powerful instrument for reliably detecting and comprehending facial expressions, supporting use in emotion recognition, human-computer interaction, cognitive computing, and other areas. Earlier studies have developed different deep learning architectures to address the challenge of facial expression recognition. Many of these studies perform well on datasets of images taken under controlled conditions but fall short on more difficult datasets with greater image diversity and incomplete faces. This paper applies CNN-10 and vision transformer (ViT) models to facial emotion classification. The performance of the proposed models was compared with that of VGG19 and InceptionV3. The CNN-10 outperformed the other models with a 99.9% accuracy score on the CK+ dataset, 84.3% on FER-2013, and 95.4% on JAFFE.
{"title":"Facial Emotion Recognition and Classification Using the Convolutional Neural Network-10 (CNN-10)","authors":"Emmanuel Gbenga Dada, David Opeoluwa Oyewola, Stephen Bassi Joseph, Onyeka Emebo, Olugbenga Oluseun Oluwagbemi","doi":"10.1155/2023/2457898","DOIUrl":"https://doi.org/10.1155/2023/2457898","url":null,"abstract":"The importance of facial expressions in nonverbal communication is significant because they help better represent the inner emotions of individuals. Emotions can depict the state of health and internal wellbeing of individuals. Facial expression detection has been a hot research topic in the last couple of years. The motivation for applying the convolutional neural network-10 (CNN-10) model for facial expression recognition stems from its ability to detect spatial features, manage translation invariance, understand expressive feature representations, gather global context, and achieve scalability, adaptability, and interoperability with transfer learning methods. This model offers a powerful instrument for reliably detecting and comprehending facial expressions, supporting usage in recognition of emotions, interaction between humans and computers, cognitive computing, and other areas. Earlier studies have developed different deep learning architectures to offer solutions to the challenge of facial expression recognition. Many of these studies have good performance on datasets of images taken under controlled conditions, but they fall short on more difficult datasets with more image diversity and incomplete faces. This paper applied CNN-10 and ViT models for facial emotion classification. The performance of the proposed models was compared with that of VGG19 and INCEPTIONV3. The CNN-10 outperformed the other models on the CK + dataset with a 99.9% accuracy score, FER-2013 with an accuracy of 84.3%, and JAFFE with an accuracy of 95.4%.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135857347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The solid waste collection problem refers to truck route optimisation for collecting waste from containers across various locations. Recent concerns exist over the impact of solid waste management on the environment; hence, it is necessary to find feasible routes while minimising operational costs and fuel consumption. In this paper, to reduce fuel consumption, the number of trucks used is included in the objective function along with the waste load and the travelling time. With current computational capabilities, finding an optimal solution is challenging. Thus, this study investigates the effect of well-known metaheuristic methods on this problem's objective function and computational times. The routing solver in Google OR-Tools is used with three well-known metaheuristic methods for neighbourhood exploration, namely guided local search (GLS), tabu search (TS), and simulated annealing (SA), and two initialisation strategies, Clarke and Wright's algorithm and the nearest neighbour algorithm. Results showed that optimal solutions are found in faster computational times than when using an IP solver alone, especially for large instances. Local search methods, notably GLS, significantly improved the route construction process. The nearest neighbour algorithm often outperformed Clarke and Wright's algorithm. The findings here can be applied to improve operations in Saudi Arabia's waste management sector.
{"title":"Local Search-Based Metaheuristic Methods for the Solid Waste Collection Problem","authors":"Haneen Algethami","doi":"10.1155/2023/5398400","DOIUrl":"https://doi.org/10.1155/2023/5398400","url":null,"abstract":"The solid waste collection problem refers to truck route optimisation to collect waste from containers across various locations. Recent concerns exist over the impact of solid waste management on the environment. Hence, it is necessary to find feasible routes while minimising operational costs and fuel consumption. In this paper, in order to reduce fuel consumption, the number of trucks used is considered in the objective function along with the waste load and the travelling time. With the current computational capabilities, finding an optimal solution is challenging. Thus, this study aims to investigate the effect of well-known metaheuristic methods on this problem’s objective function and computational times. The routing solver in the Google OR-tools solver is utilised with three well-known metaheuristic methods for neighbourhood exploration: a guided local search (GLS), a tabu search (TS), and simulated annealing (SA), with two initialisation strategies, Clarke and Wright’s algorithm and the nearest neighbour algorithm. Results showed that optimal solutions are found in faster computational times than using only an IP solver, especially for large instances. Local search methods, notably GLS, have significantly improved the route construction process. The nearest neighbour algorithm has often outperformed the Clarke and Wright's methods. The findings here can be applied to improve operations in Saudi Arabia’s waste management sector.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135346340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}