Pub Date : 2025-08-04 | DOI: 10.1016/j.iswa.2025.200564
Temitope Olubanjo Kehinde, Oluyinka J. Adedokun, Morenikeji Kabirat Kareem, Joseph Akpan, Oludolapo A. Olanrewaju
Accurate forecasting of high-volatility stock markets is critical for investors and policymakers, yet existing models struggle with computational inefficiency and noise sensitivity. This study introduces STL-ELM, a novel hybrid model combining Seasonal-Trend decomposition using LOESS (STL) with an Extreme Learning Machine (ELM) to deliver high accuracy at low computational cost. By decomposing stock data into trend, seasonal, and residual components, STL-ELM isolates multiscale features, while ELM's lightweight architecture ensures rapid training and robust generalization, outperforming advanced techniques such as LSTM, GRU, and transformer variants in both prediction and trading simulations. With faster runtimes and minimal memory usage, STL-ELM is well suited to real-time trading and high-frequency financial forecasting, offering institutional investors, traders, and financial analysts a competitive edge in volatile markets. The same pairing of STL's multiscale decomposition with ELM's rapid learning also makes the model adaptable to other financial domains, including commodities, foreign exchange, and cryptocurrencies, by efficiently capturing domain-specific volatility patterns. This work sets a new standard for predictive accuracy in stock market modelling and offers a practical tool for navigating the complexities of modern financial markets.
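The decompose-then-predict idea in this abstract can be sketched in a few dozen lines. The snippet below is an illustrative stand-in, not the authors' implementation: it substitutes a moving-average/seasonal-mean decomposition for LOESS-based STL, trains a minimal ELM (fixed random hidden layer, closed-form least-squares readout) on lagged values of each component, and sums the component predictions. The synthetic series and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "price" series: linear trend + weekly seasonality + noise.
n, period, lags = 300, 7, 14
t = np.arange(n)
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.3, n)

# Crude additive decomposition (a stand-in for LOESS-based STL).
trend = np.convolve(series, np.ones(period) / period, mode="same")
detrended = series - trend
season = np.array([detrended[i::period].mean() for i in range(period)])
seasonal = np.tile(season, n // period + 1)[:n]
resid = series - trend - seasonal

def elm_fit(X, y, hidden=64, seed=1):
    """ELM: fixed random hidden layer, output weights solved in closed form."""
    r = np.random.default_rng(seed)
    W, b = r.normal(size=(X.shape[1], hidden)), r.normal(size=hidden)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def lagged(x):
    """Lag-vector design matrix, standardized column-wise for the tanh layer."""
    X = np.stack([x[i:i + lags] for i in range(len(x) - lags)])
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)
    return X, x[lags:]

# One-step-ahead prediction per component, then recombination.
preds = np.zeros(n - lags)
for comp in (trend, seasonal, resid):
    X, y = lagged(comp)
    W, b, beta = elm_fit(X, y)
    preds += elm_predict(X, W, b, beta)

rmse = float(np.sqrt(np.mean((preds - series[lags:]) ** 2)))
```

Because the output weights are obtained by a single pseudo-inverse rather than iterative gradient descent, training cost stays low, which is the speed argument the abstract makes.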
Title: STL-ELM: A computationally efficient hybrid approach for predicting high volatility stock market
Intelligent Systems with Applications, Volume 27, Article 200564
Pub Date : 2025-07-31 | DOI: 10.1016/j.iswa.2025.200563
Santi Sukkasem, Watchareewan Jitsakul, Phayung Meesad
Accurate classification of durian ripeness is essential for quality control and minimizing post-harvest losses. Manual inspection remains subjective and inconsistent, prompting the need for automated methods. We present a multi-modal approach that integrates Convolutional Neural Networks (CNNs) for image-based classification and Recurrent Neural Networks (RNNs) for automatic textual descriptions. Trained on 16,000 annotated images across four ripeness stages, the model achieved high classification accuracy (MobileNetV2: 95.50%) and superior captioning performance (ResNet101 + Bi-GRU: BLEU 0.9974, METEOR 0.9949, ROUGE 0.9164). While weighted summation fusion demonstrated superior performance, concatenation was ultimately chosen for its simplicity and real-world deployment feasibility. Statistical validation using one-way ANOVA (p < 0.05) confirmed the significance of the findings. These results highlight the potential of the proposed multi-modal approach as a practical and interpretable framework for automated durian ripeness assessment.
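The fusion trade-off mentioned above (weighted summation vs. concatenation) can be shown directly. The feature dimensions and mixing weight below are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-modality embeddings (dimensions chosen for illustration).
img_feat = rng.normal(size=128)   # e.g. pooled CNN image features
txt_feat = rng.normal(size=128)   # e.g. final RNN hidden state

# Concatenation: keeps both modalities intact and needs no extra
# hyperparameter; the downstream layer simply sees 256 dimensions.
fused_concat = np.concatenate([img_feat, txt_feat])

# Weighted summation: requires a shared dimensionality and a mixing weight,
# which adds a tunable parameter but keeps the fused vector compact.
w = 0.6
fused_sum = w * img_feat + (1.0 - w) * txt_feat
```

The shapes make the design choice concrete: concatenation doubles the downstream input size, while summation preserves it at the cost of a projection to a common dimension and a weight to tune.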
Title: Multi-modal expert system for automated durian ripeness classification using deep learning
Intelligent Systems with Applications, Volume 27, Article 200563
Pub Date : 2025-07-31 | DOI: 10.1016/j.iswa.2025.200562
Serena Crisci, Valentina De Simone, Andrea Diana, Ferdinando Zullo
The increasing availability of visual data in fields such as archaeology has highlighted the need for automated image analysis tools. Ancient rock engravings, such as those in the Neolithic Domus de Janas tombs of Sardinia, are crucial cultural artifacts. However, their study is hindered by environmental degradation and the limitations of traditional analysis methods. This paper introduces a novel approach that employs a preprocessing method to isolate glyphs from their backgrounds, reducing the impact of wear and distortions caused by environmental factors such as lighting. Convolutional neural networks are then used to enhance the classification of glyphs in the preprocessed archaeological images. The refined data are processed using AlexNet, GoogLeNet, and EfficientNet neural networks, each trained to classify glyphs into distinct categories and to detect their geometric features. This method offers a more efficient and accurate way to analyze and preserve these cultural artifacts.
Title: Neural network for archaeological glyph detection
Intelligent Systems with Applications, Volume 27, Article 200562
Pub Date : 2025-07-30 | DOI: 10.1016/j.iswa.2025.200560
Wonjik Kim, Gaku Kutsuzawa, Michiyo Maruyama
Wearable devices enable the continuous acquisition of physiological signals, offering the potential for real-time emotion monitoring in daily life. However, emotion recognition remains challenging due to individual differences, label ambiguity, and limited annotated data. This study proposes a lightweight, cluster-guided attention model for binary emotion recognition (positive vs. negative) and forecasting (up to two hours ahead) from wearable signals such as heart rate and step count. To improve generalization, we leverage unsupervised clustering in the latent space and integrate cross-species pretraining using structured behavioral and physiological data from mice. Our framework reduces annotation burden through an emoji-based self-report interface and performs both within- and across-subject validation. Experimental results on human wearable data demonstrate that our method outperforms classical and lightweight deep learning baselines in both accuracy and macro-F1 score, achieving approximately 74.4% accuracy (macro-F1: 71.5%) for current emotion recognition, 72.9% accuracy (macro-F1: 70.7%) for 1-h forecasting, and 65.5% accuracy (macro-F1: 63.0%) for 2-h forecasting. Moreover, mouse-based pretraining yields consistent performance gains, especially at longer-horizon prediction tasks. These findings suggest that biologically informed attention mechanisms and cross-domain knowledge transfer can significantly enhance emotion modeling from low-resource wearable data.
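The abstract does not spell out the cluster-guided attention mechanism. One plausible reading, sketched below purely under that assumption, is to cluster latent embeddings with k-means and derive attention weights from each sample's distance to the cluster prototypes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent embeddings for windows of wearable signals; the 8-dim latent
# space and the two clusters are assumptions made for illustration.
Z = np.vstack([rng.normal(0.0, 0.5, (50, 8)), rng.normal(3.0, 0.5, (50, 8))])

def kmeans(Z, k=2, iters=20, seed=0):
    """Minimal k-means; keeps a center unchanged if its cluster empties."""
    r = np.random.default_rng(seed)
    centers = Z[r.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([Z[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

centers = kmeans(Z)

def cluster_attention(z, centers, temp=1.0):
    """Attention over cluster prototypes: closer centers get larger weights."""
    logits = -np.linalg.norm(centers - z, axis=1) / temp
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ centers   # cluster-guided summary vector for the sample

summary = cluster_attention(Z[0], centers)
```

Because the prototypes are learned without labels, a scheme like this matches the abstract's goal of reducing dependence on annotated data, but the actual model architecture may differ.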
Title: Emotion recognition and forecasting from wearable data via cluster-guided attention with cross-species pretraining
Intelligent Systems with Applications, Volume 27, Article 200560
Pub Date : 2025-07-29 | DOI: 10.1016/j.iswa.2025.200521
Hussam Fakhouri, Amjad Hudaib, Faten Hamad, Sandi Fakhouri, Niveen Halalsheh, Mohannad S. Alkhalaileh
This paper introduces a novel dynamic optimization strategy called the Vehicle Route Optimizer (VRO), specifically designed to enhance the efficiency and sustainability of smart cities. Inspired by the dynamics and interactions observed in vehicle behavior and traffic systems, VRO effectively balances exploration and exploitation phases to discover optimal solutions. The algorithm has been rigorously tested on the IEEE CEC2022 benchmark suites, demonstrating superior performance compared to 18 other optimizers. In smart cities, efficient waste management and routing are critical for reducing operational costs and minimizing environmental impact. VRO has therefore been applied to the Waste Collection and Routing Optimization Problem (WCROP) by integrating bin allocation and routing components into a single-objective optimization framework. For this problem, VRO was evaluated using synthetic instances derived from PVRP-IF cases. The results show that VRO outperforms traditional hierarchical and heuristic methods in terms of total cost, computational efficiency, and solution feasibility.
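A minimal way to see how bin allocation and routing can share one objective, as described above, is to score a binary allocation mask by a greedy nearest-neighbour route cost plus a penalty for unserved bins. Everything here (coordinates, penalty weight, greedy surrogate) is an illustrative assumption, not the VRO formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
depot = np.zeros(2)
bins = rng.uniform(0, 10, size=(12, 2))   # candidate bin locations (toy data)

def route_cost(points, start):
    """Greedy nearest-neighbour tour cost: a cheap surrogate for the routing
    component that a metaheuristic's fitness function could call."""
    unvisited = list(range(len(points)))
    pos, cost = start, 0.0
    while unvisited:
        d = [float(np.linalg.norm(points[i] - pos)) for i in unvisited]
        k = int(np.argmin(d))
        cost += d[k]
        pos = points[unvisited.pop(k)]
    return cost + float(np.linalg.norm(pos - start))  # close the tour at the depot

# A candidate solution couples a binary bin-allocation mask with its induced
# route cost; the penalty weight for unallocated bins is a hypothetical choice.
mask = rng.random(12) > 0.5
fitness = route_cost(bins[mask], depot) + 2.0 * float((~mask).sum())
```

An optimizer such as VRO would then search over masks (and, in the full problem, visit orders) to minimize this single scalar.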
Title: Vehicle route optimizer for waste collection and routing optimization problem
Intelligent Systems with Applications, Volume 27, Article 200521
Pub Date : 2025-07-28 | DOI: 10.1016/j.iswa.2025.200565
Rima Tri Wahyuningrum, Achmad Bauravindah, Indah Agustien Siradjuddin, Budi Dwi Satoto, Amillia Kartika Sari, Anggraini Dwi Sensusiati
The coronavirus disease 2019 (COVID-19) pandemic has underscored the need for efficient diagnostic methods owing to the limitations in sensitivity and time constraints associated with molecular tests such as reverse transcription PCR (RT-PCR). This research aims to improve the efficiency of diagnosing COVID-19 and other lung diseases such as pneumonia, tuberculosis, bronchitis, emphysema, and asthma. As an alternative diagnostic, we considered an approach based on enhanced computed tomography (CT) scan images using deep learning (DL). Specifically, we propose a preprocessing segmentation method that enhances the accuracy of DL-based classification using the UNet++ architecture, an encoder-decoder approach in DL. In this architecture, the encoder reduces the image resolution to extract informative feature maps, while the decoder restores the resolution to the original size. UNet++ is available in four levels: UNet++ L1, L2, L3, and L4, and its performance is compared to that of several other models, including SegNet, FCANet, and DeepLabV3+. Testing on two datasets, RSPHC (Indonesia) and Kaggle, was conducted to determine the best-performing model. The evaluation criteria were the Dice coefficient and IoU metrics, computational time, and resource requirements (measured by trainable parameters). The UNet++ L4 model achieved a Dice coefficient of 0.994, an IoU of 0.989, a computational time of 0.925 s, and 9.16 million trainable parameters on the RSPHC dataset; on the Kaggle dataset it achieved a Dice coefficient of 0.961, an IoU of 0.930, and a computational time of 1.189 s with the same 9.16 million trainable parameters. The UNet++ L4 model therefore offers accurate segmentation, computational efficiency, and affordable resource requirements. Thus, this research improves lung disease diagnosis through enhanced CT scan images using DL.
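The two overlap metrics used in the evaluation above have standard definitions, sketched here for binary masks (the toy masks are made-up examples, not data from the study):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2*|A & B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A & B| / |A | B|; related to Dice by IoU = D / (2 - D)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy lung-mask example: the prediction is shifted by two pixels.
target = np.zeros((32, 32), dtype=bool); target[8:24, 8:24] = True
pred = np.zeros((32, 32), dtype=bool);   pred[10:26, 8:24] = True
d, j = dice(pred, target), iou(pred, target)
```

The identity IoU = Dice / (2 - Dice) explains why the paper's IoU values are consistently a little below its Dice values.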
Title: Enhancing lung disease diagnosis with deep-learning-based CT scan image segmentation
Intelligent Systems with Applications, Volume 27, Article 200565
Pub Date : 2025-07-26 | DOI: 10.1016/j.iswa.2025.200546
Dwi Astharini, Muhamad Asvial, Dadang Gunawan
Power allocation is one of the critical aspects of non-orthogonal multiple access (NOMA) systems. When NOMA is implemented with higher-order modulation in visible light communication (VLC), the resulting constraints become more restrictive. In this paper, the power ratio is optimized for NOMA VLC with M-ary pulse amplitude modulation (MPAM). The requirements for user accessibility are derived from the NOMA VLC model and applied to throughput maximization. The power ratio for each user is then obtained using the Karush–Kuhn–Tucker (KKT) optimality conditions, yielding a suboptimal low-complexity solution for the QoS case and an optimal solution for the best-effort scenario. Simulations compare the performance of four- and eight-PAM.
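The paper's KKT derivation is not reproduced in this abstract; as a numerical stand-in, the sketch below grid-searches the two-user downlink power split, maximizing the sum rate under the weak user's QoS floor and the usual NOMA power-order convention. All link parameters are assumed values, and Shannon rates are used purely for illustration.

```python
import numpy as np

# Assumed link parameters for illustration (not values from the paper).
P, N0 = 1.0, 0.1              # total power budget, noise power
g_strong, g_weak = 2.0, 0.4   # channel gains of the strong and weak user
r_min = 0.5                   # QoS rate floor for the weak user (bits/s/Hz)

def rates(a):
    """a = power fraction for the strong user; the weak user gets 1 - a.
    The strong user cancels the weak user's signal via SIC; the weak user
    treats the strong user's signal as interference."""
    r_s = np.log2(1 + a * P * g_strong / N0)
    r_w = np.log2(1 + (1 - a) * P * g_weak / (a * P * g_weak + N0))
    return r_s, r_w

# Grid search over the power split, honouring the NOMA power-order
# convention a <= 0.5 and the weak user's QoS constraint.
best_a, best_sum = None, -np.inf
for a in np.linspace(0.01, 0.50, 50):
    r_s, r_w = rates(a)
    if r_w >= r_min and r_s + r_w > best_sum:
        best_a, best_sum = a, r_s + r_w
```

A closed-form KKT solution, as in the paper, replaces this grid search with direct conditions on the Lagrange multipliers; the brute-force version is only meant to make the constrained trade-off visible.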
Title: Power optimization of higher order modulated downlink non-orthogonal multiple access visible light communication
Intelligent Systems with Applications, Volume 27, Article 200546
Pub Date : 2025-07-18 | DOI: 10.1016/j.iswa.2025.200559
Ali Alsalama, Saad Harous, Ashraf Elnagar
This survey provides an in-depth review of recent advancements in forensic anthropology through the application of imaging and modeling techniques for paranasal sinus structures. The focus is on exploring various studies that leverage the paranasal sinuses for the identification of individuals and demographic analysis, including age and gender estimation, especially when traditional methods such as fingerprint analysis, dental records, or DNA profiling are not feasible. Additionally, the survey aims to serve as a foundation for future work in similar analyses and segmentation tasks. These methods are especially useful in forensic contexts, such as those involving skeletonized remains where other anatomical structures are absent. The paper discusses several case studies, including the segmentation of paranasal sinuses as well as their classification for establishing biological profiles in diverse populations. The effectiveness of these 3D modeling approaches in predicting demographic characteristics such as sex, age, and ethnicity is also highlighted. Special emphasis is placed on the robustness and reliability of sinus morphology as both a forensic identifier and a tool for demographic inference.
Title: Paranasal sinus analysis based on deep learning and machine learning techniques: A comprehensive survey
Intelligent Systems with Applications, Volume 27, Article 200559
Pub Date : 2025-07-16 | DOI: 10.1016/j.iswa.2025.200558
Ali Rodan, Sharif Naser Makhadmeh, Yousef Sanjalawe, Rizik M.H. Al-Sayyed, Mohammed Azmi Al-Betar
The Stellar Oscillation Optimizer (SOO) takes its core inspiration from the study of stellar pulsations, a domain often referred to as asteroseismology, and is formulated as an optimization algorithm for continuous domains. In this paper, a binary version of the Stellar Oscillation Optimizer (BSOO) is proposed for Feature Selection (FS) problems. BSOO introduces binary adaptations, including threshold-based encoding, controlled oscillatory movements, and a top-solution influence mechanism. To evaluate BSOO, sixteen FS datasets with different numbers of features, samples, and class labels are used, along with seven performance measures: fitness value, number of selected features, accuracy, sensitivity, specificity, precision, and F-measure. An intensive comparative evaluation against 18 state-of-the-art optimization algorithms on the same datasets has been conducted. The results show that BSOO competes well with the other FS-based methods, outperforming several of them and producing the best overall results on some datasets across different measures. Furthermore, the convergence behavior of BSOO during the search is investigated and visualized. Interestingly, BSOO provides a suitable trade-off between wide-range global exploration and nearby local exploitation during the optimization process, as confirmed by Wilcoxon rank-sum test results. In conclusion, this paper provides the FS research community with a new alternative that works well on many FS instances and finds optimal solutions. The source code of BSOO is publicly available for both MATLAB at: https://www.mathworks.com/matlabcentral/fileexchange/180096-bsoo-binary-stellar-oscillation-optimizer and Python at: https://github.com/AliRodan/BSOO-Binary-Stellar-Oscillation-Optimizer.
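BSOO's threshold-based binary encoding and the usual feature-selection fitness (classification error plus a feature-count penalty) can be sketched as follows. The synthetic data, the nearest-centroid classifier, and the random search standing in for the optimizer's oscillatory moves are all illustrative assumptions, not the paper's operators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: only features 0 and 1 separate the two classes.
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * y
X[:, 1] -= 3.0 * y

def to_mask(position, thresh=0.5):
    """Threshold-based binary encoding: continuous position -> feature mask."""
    return 1.0 / (1.0 + np.exp(-position)) > thresh

def centroid_accuracy(X, y, mask):
    """Accuracy of a simple nearest-centroid classifier on selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def fitness(position, alpha=0.99):
    """Standard FS fitness: weighted error plus feature-count penalty (lower is better)."""
    mask = to_mask(position)
    err = 1.0 - centroid_accuracy(X, y, mask)
    return alpha * err + (1 - alpha) * mask.sum() / d

# Random search stands in for BSOO's oscillatory moves (illustration only).
best_pos, best_fit = None, np.inf
for _ in range(300):
    pos = rng.normal(size=d)
    f = fitness(pos)
    if f < best_fit:
        best_pos, best_fit = pos, f

best_mask = to_mask(best_pos)
```

A good solution should keep at least one of the two informative features while pruning the noise dimensions, which is exactly the error-versus-subset-size balance the fitness encodes.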
{"title":"A novel binary Stellar Oscillation Optimizer for feature selection optimization problems","authors":"Ali Rodan , Sharif Naser Makhadmeh , Yousef Sanjalawe , Rizik M.H. Al-Sayyed , Mohammed Azmi Al-Betar","doi":"10.1016/j.iswa.2025.200558","DOIUrl":"10.1016/j.iswa.2025.200558","url":null,"abstract":"<div><div>Stellar Oscillation Optimizer (SOO) takes its core inspiration from the study of stellar pulsations, a domain often referred to as asteroseismology which is formulated as an optimization algorithm for continuous domain. In this paper, the Binary version of Stellar Oscillation Optimizer (BSOO) is proposed for Feature Selection (FS) problems. BSOO introduces binary adaptations, including threshold-based encoding, controlled oscillatory movements, and a top-solution influence mechanism. In order to evaluate the BSOO, sixteen FS datasets are used with different numbers of features, samples, and class labels. Seven performance measures are also used, which are: fitness value, number of selected features, accuracy, sensitivity, specificity, Precision, and F-measure. An intensive comparative evaluation against 18 state-of-the-art optimization algorithms using the same datasets has been conducted. The results show that the proposed BSOO version is able to compete well with the other FS-based methods where it is able to overcome several methods and produce the best overall results for some datasets on different measurements. Furthermore, the convergence behavior to show the optimization behavior of BSOO during the search is investigated and visualized. Interestingly, the BSOO is able to provide a suitable trade-off between the global wide-range exploration and local nearby exploitation during the optimization process. This is proved using the statistical Wilcoxon Rank-Sum Test Results. In conclusion, this paper provides a new alternative solution for FS research community that is able to work well for many FS instances and find the optimal solution. 
The source code of BSOO is publicly available for both MATLAB at: <span><span>https://www.mathworks.com/matlabcentral/fileexchange/180096-bsoo-binary-stellar-oscillation-optimizer</span><svg><path></path></svg></span> and PYTHON at: <span><span>https://github.com/AliRodan/BSOO-Binary-Stellar-Oscillation-Optimizer</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"27 ","pages":"Article 200558"},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144694572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
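The threshold-based encoding described in the abstract can be illustrated with a minimal sketch: a continuous position vector is mapped to a 0/1 feature mask, and a fitness value combines the classifier's error with the fraction of features selected. This is not the published BSOO code; the function names, the 0.5 threshold, and the 0.99 weighting are illustrative assumptions.

```python
import numpy as np

def binarize(position, threshold=0.5):
    """Map a continuous position vector to a binary feature mask.

    Components above the threshold select the corresponding feature (1),
    the rest deselect it (0) -- a common binarization scheme for FS.
    """
    return (np.asarray(position) > threshold).astype(int)

def fs_fitness(mask, error_rate, alpha=0.99):
    """Weighted FS objective: classifier error plus selected-feature ratio.

    alpha trades accuracy against feature-subset size; smaller is better.
    """
    ratio = mask.sum() / len(mask)
    return alpha * error_rate + (1 - alpha) * ratio

rng = np.random.default_rng(0)
pos = rng.random(10)            # a continuous SOO position for 10 features
mask = binarize(pos)
print(mask, fs_fitness(mask, error_rate=0.12))
```

A wrapper classifier (e.g. k-NN with cross-validation) would supply `error_rate` for each candidate mask during the search.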
Pub Date : 2025-07-15DOI: 10.1016/j.iswa.2025.200557
Marwa F. Mohamed , Ahmed Hamed
High-dimensional optimization remains a key challenge in computational intelligence, especially under resource constraints. Evolutionary algorithms, which mimic the change in heritable characteristics of biological populations, have been proposed to address this challenge. These algorithms apply selection pressure to favor better solutions over generations, while stochastic variations occasionally introduce suboptimal candidates to preserve population diversity. However, they often struggle to balance exploration and exploitation, leading to suboptimal solutions, premature convergence, and significant computational demands, making them unsuitable for resource-constrained environments. This paper introduces Monkeypox Optimization (MO), a novel evolutionary algorithm inspired by the infection and replication lifecycle of the monkeypox virus. MO mimics the virus's rapid spread by employing virus-to-cell infection, where the virus persistently seeks out vulnerable cells to penetrate, representing global exploration of the search space. Once inside, cell-to-cell transmission enables fast local propagation, modeling the refinement of high-potential solutions through accelerated replication. To conserve resources, MO continuously deletes the least effective virion copies, maintaining a compact and memory-efficient population. This biologically grounded design not only accelerates convergence but also aligns MO with TinyML principles, making it ideally suited for low-power, resource-constrained IoT environments. MO is benchmarked against 21 recent algorithms across 90 functions from CEC-2017, CEC-2019, and CEC-2020, and validated on three engineering design problems. Results show MO achieves up to 13% lower energy consumption and 34% shorter execution time compared to state-of-the-art competitors, while maintaining robust accuracy. A theoretical analysis reveals MO's time complexity is O(mn + RTn), confirming its scalability. Statistical validation via Friedman and Fisher tests further supports MO's performance gains.
{"title":"Monkeypox optimizer: A TinyML bio-inspired evolutionary optimization algorithm and its engineering applications","authors":"Marwa F. Mohamed , Ahmed Hamed","doi":"10.1016/j.iswa.2025.200557","DOIUrl":"10.1016/j.iswa.2025.200557","url":null,"abstract":"<div><div>High-dimensional optimization remains a key challenge in computational intelligence, especially under resource constraints. Evolutionary algorithms, which mimic the change in heritable characteristics of biological populations, have been proposed to address this. These algorithms apply selection pressure to favor better solutions over generations, and stochastic variations may occasionally introduce suboptimal candidates to preserve population diversity. However, they often struggle to balance exploration and exploitation, leading to suboptimal solutions, premature convergence, and significant computational demands, making them unsuitable for resource-constrained environments. This paper introduces Monkeypox Optimization (MO), a novel evolutionary algorithm inspired by the infection and replication lifecycle of the monkeypox virus. MO mimics the virus’s rapid spread by employing virus-to-cell infection, where the virus persistently seeks out vulnerable cells to penetrate—representing global exploration of the search space. Once inside, cell-to-cell transmission enables fast local propagation, modeling the refinement of high-potential solutions through accelerated replication. To conserve resources, MO continuously deletes the least effective virion copies, maintaining a compact and memory-efficient population. This biologically grounded design not only accelerates convergence but also aligns MO with TinyML principles, making it ideally suited for low-power, resource-constrained IoT environments. MO is benchmarked against 21 recent algorithms across 90 functions from CEC-2017, CEC-2019, and CEC-2020, and validated on three engineering design problems. 
Results show MO achieves up to 13% lower energy consumption and 34% shorter execution time compared to state-of-the-art competitors, while maintaining robust accuracy. A theoretical analysis reveals MO’s time complexity is <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>m</mi><mi>n</mi><mo>+</mo><mi>R</mi><mi>T</mi><mi>n</mi><mo>)</mo></mrow></mrow></math></span>, confirming its scalability. Statistical validation via Friedman and Fisher tests further supports MO’s performance gains.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"27 ","pages":"Article 200557"},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
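The three mechanisms the abstract describes — global exploration via virus-to-cell infection, local refinement via cell-to-cell transmission, and deletion of the least effective virion copies — can be sketched as a compact elitist loop. This is an illustrative reconstruction under stated assumptions, not the published MO algorithm: the population split, the 0.1 mutation scale, the search bounds, and the sphere objective are all assumed for demonstration.

```python
import numpy as np

def sphere(x):
    """Toy objective (minimum 0 at the origin) used only for illustration."""
    return float(np.sum(x ** 2))

def monkeypox_opt_sketch(obj, dim=5, pop=21, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    virions = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for _ in range(iters):
        fit = np.array([obj(v) for v in virions])
        virions = virions[np.argsort(fit)]              # best virions first
        k = pop // 3
        survivors = virions[:k]                         # delete the least effective copies
        # cell-to-cell transmission: local replication around surviving virions
        local = survivors + rng.normal(0.0, 0.1, size=survivors.shape)
        # virus-to-cell infection: fresh random candidates explore globally
        fresh = rng.uniform(-5.0, 5.0, size=(pop - 2 * k, dim))
        virions = np.vstack([survivors, local, fresh])  # compact, fixed-size population
    fit = np.array([obj(v) for v in virions])
    return virions[np.argmin(fit)], float(fit.min())

best, val = monkeypox_opt_sketch(sphere)
print(best, val)
```

Because the survivors are carried over unchanged each generation, the best fitness is monotonically non-increasing, which mirrors the convergence-plus-compactness behavior the paper attributes to MO.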