TOPSIS Method Based on Entropy Measure for Solving Multiple-Attribute Group Decision-Making Problems with Spherical Fuzzy Soft Information
Perveen P. A. Fathima, Sunil Jacob John, T. Baiju
Applied Computational Intelligence and Soft Computing, 2023-11-18. DOI: 10.1155/2023/7927541
A spherical fuzzy soft set (SFSS) is a generalized soft set model that is more reasonable, practical, and precise. As SFSSs are a very natural generalization, introducing uncertainty measures for them is an important step. In this paper, entropy, similarity, and distance measures are defined for SFSSs, and a characterization of spherical fuzzy soft entropy is proposed. The relationships between entropy and similarity measures, and between entropy and distance measures, are then discussed in detail. As an application, an algorithm based on the improved technique for order preference by similarity to an ideal solution (TOPSIS) and the proposed entropy measure of SFSSs is developed to solve multiple-attribute group decision-making problems. Finally, an illustrative example demonstrates the effectiveness of the recommended algorithm.
{"title":"TOPSIS Method Based on Entropy Measure for Solving Multiple-Attribute Group Decision-Making Problems with Spherical Fuzzy Soft Information","authors":"Perveen P. A. Fathima, Sunil Jacob John, T. Baiju","doi":"10.1155/2023/7927541","DOIUrl":"https://doi.org/10.1155/2023/7927541","url":null,"abstract":"A spherical fuzzy soft set (SFSS) is a generalized soft set model, which is more sensible, practical, and exact. Being a very natural generalization, introducing uncertainty measures of SFSSs seems to be very important. In this paper, the concept of entropy, similarity, and distance measures are defined for the SFSSs and also, a characterization of spherical fuzzy soft entropy is proposed. Further, the relationship between entropy and similarity measures as well as entropy and distance measures are discussed in detail. As an application, an algorithm is proposed based on the improved technique for order preference by similarity to an ideal solution (TOPSIS) and the proposed entropy measure of SFSSs, to solve the multiple attribute group decision-making problems. Finally, an illustrative example is used to prove the effectiveness of the recommended algorithm.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"26 2","pages":""},"PeriodicalIF":2.9,"publicationDate":"2023-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139260978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy Set and Soft Set Theories as Tools for Vocal Risk Diagnosis
José Sanabria, Marinela Álvarez, O. Ferrer
Applied Computational Intelligence and Soft Computing, 2023-11-15. DOI: 10.1155/2023/5525978
New mathematical theories are increasingly valued for their versatility in intelligent systems that support decision-making and diagnosis in real-world situations. This is especially relevant in the health sciences, where such theories have great potential for designing effective solutions that improve people's quality of life. In recent years, several studies have examined predictive indicators of vocal dysfunction. However, the rapid growth of such studies, driven by advancing medical technology, has created a need for reliable methods of extracting clinically meaningful knowledge, since complex and nonlinear interactions between these markers naturally exist. There is also a growing need to focus the analysis not only on knowledge extraction but on data transformation and treatment, to enhance the quality of healthcare delivery. Mathematical tools such as fuzzy set theory and soft set theory have been successfully applied to data analysis in many real-life problems where the data are vague and uncertain. These theories improve data interpretability and handle the inherent uncertainty of real-world data, facilitating decision-making based on the available information. In this paper, we use soft set theory and fuzzy set theory to develop a prediction system based on knowledge from phonoaudiology. We use information such as patient age, fundamental frequency, and perturbation index to estimate the risk of voice loss in patients. Our goal is to help the speech-language pathologist determine whether a patient requires intervention in the presence of an at-risk or altered voice, bearing in mind that excessive and inappropriate vocal behavior can result in organic manifestations.
{"title":"Fuzzy Set and Soft Set Theories as Tools for Vocal Risk Diagnosis","authors":"José Sanabria, Marinela Álvarez, O. Ferrer","doi":"10.1155/2023/5525978","DOIUrl":"https://doi.org/10.1155/2023/5525978","url":null,"abstract":"New mathematical theories are being increasingly valued due to their versatility in the application of intelligent systems that allow decision-making and diagnosis in different real-world situations. This is especially relevant in the field of health sciences, where these theories have great potential to design effective solutions that improve people’s quality of life. In recent years, several prediction studies have been performed as indicators of vocal dysfunction. However, the rapid increase in new prediction studies as a result of advancing medical technology has dictated the need to develop reliable methods for the extraction of clinically meaningful knowledge, where complex and nonlinear interactions between these markers naturally exist. There is a growing need to focus the analysis not only on knowledge extraction but also on data transformation and treatment to enhance the quality of healthcare delivery. Mathematical tools such as fuzzy set theory and soft set theory have been successfully applied for data analysis in many real-life problems where there is presence of vagueness and uncertainty in the data. These theories contribute to improving data interpretability and dealing with the inherent uncertainty of real-world data, facilitating the decision-making process based on the available information. In this paper, we use soft set theory and fuzzy set theory to develop a prediction system based on knowledge from phonoaudiology. We use information such as patient age, fundamental frequency, and perturbation index to estimate the risk of voice loss in patients. Our goal is to help the speech-language pathologist in determining whether or not the patient requires intervention in the presence of a voice at risk or an altered voice result, taking into account that excessive and inappropriate voice behavior can result in organic manifestations.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"BME-28 4","pages":""},"PeriodicalIF":2.9,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139275166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparative Analysis of Traditional SARIMA and Machine Learning Models for CPI Data Modelling in Pakistan
Moiz Qureshi, Arsalan Khan, Muhammad Daniyal, Kassim Tawiah, Zahid Mehmood
Applied Computational Intelligence and Soft Computing, 2023-11-07. DOI: 10.1155/2023/3236617
Background. In economic theory, a steady consumer price index (CPI) and its associated low inflation rate (IR) are much preferred to volatile ones. The CPI is a major variable in measuring a country's IR. These indices track price changes and carry major significance in monetary policy decisions. In this study, conventional and machine learning methodologies are applied to model and forecast the CPI of Pakistan. Methods. Pakistan's yearly CPI data from 1960 to 2021 were modelled using seasonal autoregressive integrated moving average (SARIMA), neural network autoregressive (NNAR), and multilayer perceptron (MLP) models. Several forms of the models were compared using the root mean square error (RMSE), mean square error (MSE), and mean absolute percentage error (MAPE) as key performance indicators (KPIs). Results. The 20-hidden-layer MLP emerged as the best-performing model for CPI forecasting on these KPIs. Forecasts of Pakistan's CPI from 2022 to 2031 show a steep increase, an unwelcome prospect for consumers and economic management. Conclusion. If not addressed, the rising CPI trend will erode purchasing power as commodity prices climb. It is recommended that the government put vigorous policies in place to address this alarming situation.
{"title":"A Comparative Analysis of Traditional SARIMA and Machine Learning Models for CPI Data Modelling in Pakistan","authors":"Moiz Qureshi, Arsalan Khan, Muhammad Daniyal, Kassim Tawiah, Zahid Mehmood","doi":"10.1155/2023/3236617","DOIUrl":"https://doi.org/10.1155/2023/3236617","url":null,"abstract":"Background. In economic theory, a steady consumer price index (CPI) and its associated low inflation rate (IR) are very much preferred to a volatile one. CPI is considered a major variable in measuring the IR of a country. These indices are those of price changes and have major significance in monetary policy decisions. In this study, different conventional and machine learning methodologies have been applied to model and forecast the CPI of Pakistan. Methods. Pakistan’s yearly CPI data from 1960 to 2021 were modelled using seasonal autoregressive moving average (SARIMA), neural network autoregressive (NNAR), and multilayer perceptron (MLP) models. Several forms of the models were compared by employing the root mean square error (RMSE), mean square error (MSE), and mean absolute percentage error (MAPE) as the key performance indicators (KPIs). Results. The 20-hidden-layered MLP model appeared as the best-performing model for CPI forecasting based on the KPIs. Forecasted values of Pakistan’s CPI from 2022 to 2031 showed an astronomical increase in value which is unpleasant to consumers and economic management. Conclusion. The increasing CPI trend observed if not addressed will trigger a rising purchasing power, thereby causing higher commodity prices. It is recommended that the government put vibrant policies in place to address this alarming situation.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"25 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135432915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Two-Phase Pattern Generation and Production Planning Procedure for the Stochastic Skiving Process
Tolga Kudret Karaca, Funda Samanlioglu, Ayca Altay
Applied Computational Intelligence and Soft Computing, 2023-11-06. DOI: 10.1155/2023/9918022
The stochastic skiving stock problem (SSP), a relatively new combinatorial optimization problem, is considered in this paper. The conventional SSP seeks the optimal structure that skives small pieces of different sizes side by side to form as many large items (products) as possible that meet a desired width. This study examines a multiproduct case of the SSP under uncertain demand and waste rate, with products of different widths. This stochastic version of the SSP assumes a random demand for each product and a random waste rate during production. A two-stage stochastic programming approach with a recourse action is implemented to study this stochastic NP-hard problem at a large scale. The problem is solved in two phases. In the first phase, the dragonfly algorithm constructs minimal patterns that serve as input for the next phase. The second phase performs sample-average approximation, solving the stochastic production problem. Results indicate that the two-phase heuristic approach is highly efficient in computational run time and provides robust solutions, with an optimality gap of 0.3% in the worst-case scenario. In addition, the dragonfly algorithm (DA) is compared with particle swarm optimization (PSO) for pattern generation. Benchmarks indicate that the DA produces more robust minimal pattern sets as the tightness of the problem increases.
Machine Learning Approaches to Predict Patient's Length of Stay in Emergency Department
Mohammad A. Shbool, Omar S. Arabeyyat, Ammar Al-Bazi, Abeer Al-Hyari, Arwa Salem, Thana' Abu-Hmaid, Malak Ali
Applied Computational Intelligence and Soft Computing, 2023-10-27. DOI: 10.1155/2023/8063846
The COVID-19 pandemic has afflicted the globe, and health systems worldwide have been significantly affected. The pandemic has impacted many sectors, including health, in the Kingdom of Jordan. Emergency departments (EDs) are among the most demanded hospital resources under normal conditions and become critical during crises, which place heavy pressure on health systems. Managing health systems efficiently and achieving the best planning and allocation of ED resources is therefore crucial to accommodating a crisis's impact. Knowing the critical factors that affect the prediction of patient length of stay (LOS) is key to reducing the risks of prolonged waiting and crowding inside EDs, by focusing on these factors and analyzing the effect of each. This research aims to determine the critical factors (the predictor variables) that predict the outcome, the length of stay. Patients' LOS in EDs is categorized by waiting-time duration as low, medium, or high using supervised machine learning (ML) approaches, and unsupervised algorithms are applied to classify patients' LOS in local EDs in the Kingdom of Jordan. The Arab Medical Centre Hospital is selected as a case study to assess the performance of the proposed ML model. Data spanning 22 months, covering the periods before and after COVID-19, are used to train the proposed feedforward network. The proposed model is compared with other ML approaches, and comparative and correlation analyses are conducted on the considered attributes (inputs) to help classify patients' LOS in the ED. For this specific problem, the best-performing algorithms are tree-based methods such as the decision stump, REP tree, and random forest, and the multilayer perceptron (with a batch size of 50 and a learning rate of 0.001). Results showed better performance in terms of accuracy and ease of implementation.
{"title":"Machine Learning Approaches to Predict Patient’s Length of Stay in Emergency Department","authors":"Mohammad A. Shbool, Omar S. Arabeyyat, Ammar Al-Bazi, Abeer Al-Hyari, Arwa Salem, Thana’ Abu-Hmaid, Malak Ali","doi":"10.1155/2023/8063846","DOIUrl":"https://doi.org/10.1155/2023/8063846","url":null,"abstract":"As the COVID-19 pandemic has afflicted the globe, health systems worldwide have also been significantly affected. This pandemic has impacted many sectors, including health in the Kingdom of Jordan. Crises that put heavy pressure on the health systems’ shoulders include the emergency departments (ED), the most demanded hospital resources during normal conditions, and critical during crises. However, managing the health systems efficiently and achieving the best planning and allocation of their EDs’ resources becomes crucial to improve their capabilities to accommodate the crisis’s impact. Knowing critical factors affecting the patient length of stay prediction is critical to reducing the risks of prolonged waiting and clustering inside EDs. That is, by focusing on these factors and analyzing the effect of each. This research aims to determine the critical factors that predict the outcome: the length of stay, i.e., the predictor variables. Therefore, patients’ length of stay in EDs across waiting time duration is categorized as (low, medium, and high) using supervised machine learning (ML) approaches. Unsupervised algorithms have been applied to classify the patient’s length of stay in local EDs in the Kingdom of Jordan. The Arab Medical Centre Hospital is selected as a case study to justify the performance of the proposed ML model. Data that spans a time interval of 22 months, covering the period before and after COVID-19, is used to train the proposed feedforward network. The proposed model is compared with other ML approaches to justify its superiority. Also, comparative and correlation analyses are conducted on the considered attributes (inputs) to help classify the LOS and the patient’s length of stay in the ED. The best algorithms to be used are the trees such as the decision stump, REB tree, and Random Forest and the multilayer perceptron (with batch sizes of 50 and 0.001 learning rate) for this specific problem. Results showed better performance in terms of accuracy and easiness of implementation.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"49 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136233976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forged Video Detection Using Deep Learning: A SLR
Maryam Munawar, Iram Noreen, Raed S. Alharthi, Nadeem Sarwar
Applied Computational Intelligence and Soft Computing, 2023-10-25. DOI: 10.1155/2023/6661192
In today's digital landscape, video and image data have emerged as pivotal and widely adopted means of communication. They serve not only as a ubiquitous mode of conveying information but also as indispensable evidence across diverse domains, including law enforcement, forensic investigation, and the media. This study employs a systematic literature review (SLR) methodology to investigate the existing body of knowledge. An exhaustive review and analysis of 90 primary research studies revealed a range of methodologies instrumental in detecting forged videos, including deep neural networks, convolutional neural networks, Deepfake analysis, watermarking networks, and clustering, among others. This array of techniques maps the field and underscores the need to combat the evolving challenges posed by forged video content. The study shows that videos, owing to their dynamic nature, are susceptible to an array of manipulations, with key issues including frame insertion, deletion, and duplication. The main open problems in the domain are copy-move forgery, object-based forgery, and frame-based forgery. This study serves as a comprehensive repository of the latest advancements and techniques, structured and summarized to benefit researchers and practitioners, and it elucidates the complex challenges inherent to video forensics.
{"title":"Forged Video Detection Using Deep Learning: A SLR","authors":"Maryam Munawar, Iram Noreen, Raed S. Alharthi, Nadeem Sarwar","doi":"10.1155/2023/6661192","DOIUrl":"https://doi.org/10.1155/2023/6661192","url":null,"abstract":"In today’s digital landscape, video and image data have emerged as pivotal and widely adopted means of communication. They serve not only as a ubiquitous mode of conveying information but also as indispensable evidential and substantiating elements across diverse domains, encompassing law enforcement, forensic investigations, media, and numerous others. This study employs a systematic literature review (SLR) methodology to meticulously investigate the existing body of knowledge. An exhaustive review and analysis of precisely 90 primary research studies were conducted, unveiling a range of research methodologies instrumental in detecting forged videos. The study’s findings shed light on several research methodologies integral to the detection of forged videos, including deep neural networks, convolutional neural networks, Deepfake analysis, watermarking networks, and clustering, amongst others. This array of techniques highlights the field and emphasizes the need to combat the evolving challenges posed by forged video content. The study shows that videos are susceptible to an array of manipulations, with key issues including frame insertion, deletion, and duplication due to their dynamic nature. The main limitations of the domain are copy-move forgery, object-based forgery, and frame-based forgery. This study serves as a comprehensive repository of the latest advancements and techniques, structured, and summarized to benefit researchers and practitioners in the field. It elucidates the complex challenges inherent to video forensics.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"23 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135169448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded Parallel Implementation of LDPC Decoder for Ultra-Reliable Low-Latency Communications
Mhammed Benhayoun, Mouhcine Razi, Anas Mansouri, Ali Ahaitouf
Applied Computational Intelligence and Soft Computing, 2023-10-21. DOI: 10.1155/2023/5573438
Ultra-reliable low-latency communications (URLLC) are designed for applications such as self-driving cars and telesurgery, which require responses within milliseconds and are very sensitive to transmission errors. To match the computational complexity of LDPC decoding algorithms to URLLC applications on IoT devices with very limited computational resources, this paper presents a new parallel, low-latency software implementation of an LDPC decoder. First, a decoding algorithm optimization and a compact data structure are proposed. Next, a parallel software implementation on ARM multicore platforms evaluates the latency of the proposed optimization. The results show a 50% reduction in memory requirements and a threefold speedup in processing time compared to previous software decoder implementations. The decoding latency achieved on the parallel processing platform is 150 μs for 288 bits, with a bit error ratio of 3.4 × 10⁻⁹.
{"title":"Embedded Parallel Implementation of LDPC Decoder for Ultra-Reliable Low-Latency Communications","authors":"Mhammed Benhayoun, Mouhcine Razi, Anas Mansouri, Ali Ahaitouf","doi":"10.1155/2023/5573438","DOIUrl":"https://doi.org/10.1155/2023/5573438","url":null,"abstract":"Ultra-reliable low-latency communications, URLLC, are designed for applications such as self-driving cars and telesurgery requiring a response in milliseconds and are very sensitive to transmission errors. To match the computational complexity of LDPC decoding algorithms to URLLC applications on IoT devices having very limited computational resources, this paper presents a new parallel and low-latency software implementation of the LDPC decoder. First, a decoding algorithm optimization and a compact data structure are proposed. Next, a parallel software implementation is performed on ARM multicore platforms in order to evaluate the latency of the proposed optimization. The synthesis results highlight a reduction in the memory size requirement by 50% and a three-time speedup in terms of processing time when compared to previous software decoder implementations. The reached decoding latency on the parallel processing platform is 150 μs for 288 bits with a bit error ratio of 3.410–9.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"14 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135512015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial Emotion Recognition and Classification Using the Convolutional Neural Network-10 (CNN-10)
Emmanuel Gbenga Dada, David Opeoluwa Oyewola, Stephen Bassi Joseph, Onyeka Emebo, Olugbenga Oluseun Oluwagbemi
Applied Computational Intelligence and Soft Computing, 2023-10-13. DOI: 10.1155/2023/2457898
Facial expressions are significant in nonverbal communication because they help represent individuals' inner emotions, and emotions can reflect a person's state of health and internal wellbeing. Facial expression detection has been a hot research topic over the last couple of years. The motivation for applying the convolutional neural network-10 (CNN-10) model to facial expression recognition stems from its ability to detect spatial features, handle translation invariance, learn expressive feature representations, capture global context, and achieve scalability, adaptability, and interoperability with transfer learning methods. The model offers a powerful instrument for reliably detecting and understanding facial expressions, supporting applications in emotion recognition, human-computer interaction, cognitive computing, and other areas. Earlier studies have developed various deep learning architectures for facial expression recognition. Many perform well on datasets of images taken under controlled conditions but fall short on more difficult datasets with greater image diversity and incomplete faces. This paper applies the CNN-10 and vision transformer (ViT) models to facial emotion classification and compares their performance with that of VGG19 and InceptionV3. The CNN-10 outperformed the other models, scoring 99.9% accuracy on the CK+ dataset, 84.3% on FER-2013, and 95.4% on JAFFE.
{"title":"Facial Emotion Recognition and Classification Using the Convolutional Neural Network-10 (CNN-10)","authors":"Emmanuel Gbenga Dada, David Opeoluwa Oyewola, Stephen Bassi Joseph, Onyeka Emebo, Olugbenga Oluseun Oluwagbemi","doi":"10.1155/2023/2457898","DOIUrl":"https://doi.org/10.1155/2023/2457898","url":null,"abstract":"The importance of facial expressions in nonverbal communication is significant because they help better represent the inner emotions of individuals. Emotions can depict the state of health and internal wellbeing of individuals. Facial expression detection has been a hot research topic in the last couple of years. The motivation for applying the convolutional neural network-10 (CNN-10) model for facial expression recognition stems from its ability to detect spatial features, manage translation invariance, understand expressive feature representations, gather global context, and achieve scalability, adaptability, and interoperability with transfer learning methods. This model offers a powerful instrument for reliably detecting and comprehending facial expressions, supporting usage in recognition of emotions, interaction between humans and computers, cognitive computing, and other areas. Earlier studies have developed different deep learning architectures to offer solutions to the challenge of facial expression recognition. Many of these studies have good performance on datasets of images taken under controlled conditions, but they fall short on more difficult datasets with more image diversity and incomplete faces. This paper applied CNN-10 and ViT models for facial emotion classification. The performance of the proposed models was compared with that of VGG19 and INCEPTIONV3. The CNN-10 outperformed the other models on the CK + dataset with a 99.9% accuracy score, FER-2013 with an accuracy of 84.3%, and JAFFE with an accuracy of 95.4%.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135857347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local Search-Based Metaheuristic Methods for the Solid Waste Collection Problem
Haneen Algethami
Applied Computational Intelligence and Soft Computing, 2023-10-06. DOI: 10.1155/2023/5398400
The solid waste collection problem concerns optimizing truck routes to collect waste from containers across various locations. Given recent concerns over the environmental impact of solid waste management, it is necessary to find feasible routes that minimize operational costs and fuel consumption. In this paper, to reduce fuel consumption, the number of trucks used is included in the objective function along with the waste load and the travelling time. With current computational capabilities, finding an optimal solution is challenging, so this study investigates the effect of well-known metaheuristic methods on the problem's objective value and computational time. The routing solver in Google OR-Tools is used with three well-known metaheuristics for neighbourhood exploration: guided local search (GLS), tabu search (TS), and simulated annealing (SA), combined with two initialisation strategies, Clarke and Wright's algorithm and the nearest neighbour algorithm. Results showed that optimal solutions are found in less computational time than with an IP solver alone, especially for large instances. Local search methods, notably GLS, significantly improved the route construction process, and the nearest neighbour algorithm often outperformed Clarke and Wright's method. The findings can be applied to improve operations in Saudi Arabia's waste management sector.
{"title":"Local Search-Based Metaheuristic Methods for the Solid Waste Collection Problem","authors":"Haneen Algethami","doi":"10.1155/2023/5398400","DOIUrl":"https://doi.org/10.1155/2023/5398400","url":null,"abstract":"The solid waste collection problem refers to truck route optimisation to collect waste from containers across various locations. Recent concerns exist over the impact of solid waste management on the environment. Hence, it is necessary to find feasible routes while minimising operational costs and fuel consumption. In this paper, in order to reduce fuel consumption, the number of trucks used is considered in the objective function along with the waste load and the travelling time. With the current computational capabilities, finding an optimal solution is challenging. Thus, this study aims to investigate the effect of well-known metaheuristic methods on this problem’s objective function and computational times. The routing solver in the Google OR-tools solver is utilised with three well-known metaheuristic methods for neighbourhood exploration: a guided local search (GLS), a tabu search (TS), and simulated annealing (SA), with two initialisation strategies, Clarke and Wright’s algorithm and the nearest neighbour algorithm. Results showed that optimal solutions are found in faster computational times than using only an IP solver, especially for large instances. Local search methods, notably GLS, have significantly improved the route construction process. The nearest neighbour algorithm has often outperformed the Clarke and Wright's methods. The findings here can be applied to improve operations in Saudi Arabia’s waste management sector.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135346340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent Framework Based on Deep Learning for SMS and e-mail Spam Detection
Umair Maqsood, Saif Ur Rehman, Tariq Ali, Khalid Mahmood, Tahani Alsaedi, Mahwish Kundi
Applied Computational Intelligence and Soft Computing, 2023-09-20. DOI: 10.1155/2023/6648970
The use of short message service (SMS) and e-mail has grown enormously over the last few decades: 80% of people do not read their e-mails, while 98% of cell phone users read their SMS daily. However, these communication media are unsafe and exposed to malicious attacks known as spam. E-mails that pretend to be from a trusted company and solicit "financial or personal information" are phishing e-mails; they contain links that may download malicious software onto users' computers when clicked. Many techniques and models have been developed to detect such SMS and e-mails automatically, but none has achieved 100% accuracy, and in previous machine learning (ML) studies, spam detection on small datasets has resulted in lower accuracy. To counter this problem, in this paper multiple ML classifiers and a deep learning (DL) classifier were applied to an SMS and e-mail dataset for spam detection with higher accuracy. Experiments on the real dataset showed that the proposed system performed better and more accurately than previously existing models; in particular, the support vector machine (SVM) classifier outperformed all others, suggesting that SVM is the optimal choice for this classification task.
{"title":"An Intelligent Framework Based on Deep Learning for SMS and e-mail Spam Detection","authors":"Umair Maqsood, Saif Ur Rehman, Tariq Ali, Khalid Mahmood, Tahani Alsaedi, Mahwish Kundi","doi":"10.1155/2023/6648970","DOIUrl":"https://doi.org/10.1155/2023/6648970","url":null,"abstract":"The use of short message service (SMS) and e-mail have increased too much over the last decades. 80% of people do not read e-mails while 98% of cell phone users daily read their SMS. However, these communication media are unsafe and can produce malicious attacks called spam. The e-mails that pretend to be from a trusted company to provide “financial or personal information” are phishing e-mails. These e-mails contain some links; users might download malicious software on their computers when they click on them. Most techniques and models are developed to automatically detect these “SMS and e-mails” but none of them achieved 100% accuracy. In previous studies using machine learning (ML), spam detection using a small dataset has resulted in lower accuracy. To counter this problem, in this paper, multiple classifiers of ML and a classifier of deep learning (DL) were applied to the SMS and e-mail dataset for spam detection with higher accuracy. After conducting experiments on the real dataset, the researchers concluded that the proposed system performed better and more accurately than previously existing models. Specifically, the support vector machine (SVM) classifier outperformed all others. These results suggest that SVM is the optimal choice for classification purposes.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136264272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}