Real-time eating monitoring: A cyber-physical systems approach
Pub Date: 2026-01-28 | DOI: 10.1016/j.array.2026.100696 | Array, Volume 29, Article 100696
Angel Biskupovic, Miguel A. González, Fernando Huanca, Mario Torres, Maria Rodriguez-Fernandez, Felipe Núñez
Numerous health conditions, such as obesity, diabetes, and cardiovascular diseases, require strict adherence to nutritional guidelines and accurate reporting of eating behaviors, making effective eating monitoring essential. A common approach to eating monitoring is to maintain a food diary, in which subjects manually self-report eating events, a process inherently prone to imprecision. Recent technological advances have enabled passive, automatic eating detection systems that typically rely on data from wearable devices to identify eating events. There is a vast literature on machine learning methods for this purpose, often applied with great success. However, most existing studies focus only on the eating detection mechanism and fail to offer an integrated solution with practical use cases. To address this gap, we present a cyber-physical systems approach to eating monitoring that integrates an eating event detection module with a cloud-based, service-oriented backbone on which numerous services are deployed, yielding an integrated solution for real-time eating monitoring.
{"title":"Real-time eating monitoring: A cyber-physical systems approach","authors":"Angel Biskupovic , Miguel A. González , Fernando Huanca , Mario Torres , Maria Rodriguez-Fernandez , Felipe Núñez","doi":"10.1016/j.array.2026.100696","DOIUrl":"10.1016/j.array.2026.100696","url":null,"abstract":"<div><div>Numerous health conditions, such as obesity, diabetes, and cardiovascular diseases, require strict adherence to nutritional guidelines and accurate reporting of eating behaviors, making effective eating monitoring essential. A common approach to eating monitoring involves maintaining a food diary, where subjects manually self-report eating events, a process inherently prone to imprecision. Recent technological advances have enabled the development of passive, automatic eating detection systems, typically relying on data from wearable devices to identify eating events. In this context, the literature is vast on efforts that use machine learning methods for this purpose, with great success. However, most existing studies focus only on eating detection mechanisms and fail to offer an integrated solution with practical use cases. To address this gap, in this work, we present a cyber–physical systems approach to eating monitoring that integrates an eating event detection module with a cloud-based service-oriented backbone where numerous services are deployed, yielding an integrated solution for real-time eating monitoring.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100696"},"PeriodicalIF":4.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tabular and graph-based representations for noise and missing data in robust machine learning
Pub Date: 2026-01-27 | DOI: 10.1016/j.array.2026.100697 | Array, Volume 29, Article 100697
Golam Imran, Md Parvez Hossain, Mahmudul Hasan, Md Tarek Hasan, Ohidujjaman
The performance of machine learning models in industrial settings is often limited by noise and missing values in real-world data. Tabular data representations, commonly used in traditional machine learning, may not effectively capture complex relationships or maintain reliability under such data degradation. This study comparatively evaluates the robustness of tabular and graph-based data representations when models face data corruption. Using a real-world steel industry energy consumption dataset, we assess six models (Random Forest, XGBoost, Multi-Layer Perceptron (MLP), Graph Convolutional Network (GCN), GraphSAGE, and Graph Attention Network (GAT)) across clean, noisy, missing-data, and combined noise-and-missing-data scenarios. A novel transformation technique converts tabular data into graph structures to facilitate relational learning in the graph-based models. Graph-based models demonstrated 30.8% greater robustness than tabular models, as measured by their lower average drop in classification accuracy across the missing, noisy, and combined corruption scenarios. These findings pave the way for deploying more resilient artificial intelligence (AI) systems in complex industrial environments and emphasize the critical role of relational data representations in robust machine learning. For validation, we repeated the study on the Concrete Compressive Strength Dataset from the UCI Machine Learning Repository and observed comparable results.
{"title":"Tabular and graph-based representations for noise and missing data in robust machine learning","authors":"Golam Imran , Md Parvez Hossain , Mahmudul Hasan , Md Tarek Hasan , Ohidujjaman","doi":"10.1016/j.array.2026.100697","DOIUrl":"10.1016/j.array.2026.100697","url":null,"abstract":"<div><div>The performance of machine learning models in industrial settings is often limited by noise and missing values in real-world data. Tabular data representations, commonly used in traditional machine learning, may not effectively capture complex relationships or maintain reliability under such data degradation. This study comparatively evaluates the robustness of tabular and graph-based data representations for machine learning models when faced with data corruption. Using a real-world steel industry energy consumption dataset, we assess six models: Random Forest, XGBoost, Multi-Layer Perceptron (MLP), Graph Convolutional Network, SAGE, and Graph Attention Network, across clean, noisy, missing, and combined noise and missing data scenarios. A novel transformation technique converts tabular data into graph structures to facilitate relational learning in graph-based models. Graph-based models demonstrated 30.8% greater robustness than tabular models, as measured by their lower average drop in classification accuracy across missing, noisy, and combined data corruption scenarios. These findings pave the way for deploying more resilient artificial intelligence (AI) systems in complex industrial environments, emphasizing the critical role of relational data representations in robust machine learning. For validation, we applied another study with the UCI Machine Learning Repository: the Concrete Compressive Strength Dataset, and found comparable resonance in this regard.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100697"},"PeriodicalIF":4.5,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing projected quantum kernels for the classification of IoT data
Pub Date: 2026-01-27 | DOI: 10.1016/j.array.2026.100695 | Array, Volume 29, Article 100695
Francesco D’Amore, Luca Mariani, Carlo Mastroianni, Francesco Plastina, Luca Salatino, Jacopo Settino, Andrea Vinci
The use of quantum computing for machine learning is among the most promising applications of quantum technologies. Quantum models inspired by classical algorithms are being developed to explore possible advantages over classical approaches. A primary challenge in developing and testing Quantum Machine Learning (QML) algorithms is the scarcity of datasets designed specifically for a quantum approach. Existing datasets, often borrowed from classical machine learning, need modifications to be compatible with current quantum hardware. In this work, we use a dataset generated by Internet-of-Things (IoT) devices in a format directly compatible with the proposed quantum data-processing approach, eliminating the need for feature reduction. Among quantum-inspired machine learning algorithms, the Projected Quantum Kernel (PQK) stands out for its elegant solution of projecting data encoded in the Hilbert space back into a classical space. For a prediction task concerning office room occupancy, we compare the PQK with the standard Quantum Kernel (QK) and their classical counterparts to investigate how different feature maps affect the encoding of IoT data. Our findings show that the PQK achieves effectiveness comparable to classical methods when the proposed shallow circuit is used for quantum encoding.
{"title":"Assessing projected quantum kernels for the classification of IoT data","authors":"Francesco D’Amore , Luca Mariani , Carlo Mastroianni , Francesco Plastina , Luca Salatino , Jacopo Settino , Andrea Vinci","doi":"10.1016/j.array.2026.100695","DOIUrl":"10.1016/j.array.2026.100695","url":null,"abstract":"<div><div>The use of quantum computing for machine learning is among the most promising applications of quantum technologies. Quantum models inspired by classical algorithms are developed to explore some possible advantages over classical approaches. A primary challenge in the development and testing of Quantum Machine Learning (QML) algorithms is the scarcity of datasets designed specifically for a quantum approach. Existing datasets, often borrowed from classical machine learning, need modifications to be compatible with current quantum hardware. In this work, we utilize a dataset generated by Internet-of-Things (IoT) devices in a format directly compatible with the proposed quantum data process, eliminating the need for feature reduction. Among quantum-inspired machine learning algorithms, the Projected Quantum Kernel (PQK) stands out for its elegant solution of projecting the data encoded in the Hilbert space into a classical space. For a prediction task concerning office room occupancy, we compare PQK with the standard Quantum Kernel (QK) and their classical counterparts to investigate how different feature maps affect the encoding of IoT data. Our findings show that the PQK demonstrates comparable effectiveness to classical methods when the proposed shallow circuit is used for quantum encoding.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100695"},"PeriodicalIF":4.5,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of cloud platform alert monitoring and automatic analysis system based on random forest algorithm
Pub Date: 2026-01-24 | DOI: 10.1016/j.array.2026.100694 | Array, Volume 29, Article 100694
Bokai Li, Mingkang Guo, Yongli Jia, Tianzi Zeng, Xiaojing Liu
To address the issue of alert information overload in cloud platform monitoring, where unnecessary or duplicate alerts hinder the rapid identification of problem sources by operation and maintenance (O&M) personnel, this paper proposes an automatic analysis system for cloud platform alert monitoring based on the random forest (RF) algorithm. In the system architecture, the infrastructure layer creates multiple virtual machines through the CloudStack cloud platform and uses the C8051F0403 chip as an information collector to acquire abnormal data. The core service layer, built around an ARM7TDMI microprocessor, defines the hardware structure of the monitoring terminal and integrates GSM-based SMS transmission and reception to track abnormal operational states. The user interface layer supplies alert information to the system. The alert client is designed around the random forest algorithm, which can process large volumes of alert log samples from the cloud platform while avoiding overfitting. By constructing multiple decision trees, the algorithm improves the accuracy of classification and regression tasks, effectively identifying and filtering out unnecessary or duplicate alerts and thereby enabling automated analysis of abnormal alert monitoring. Experimental results demonstrate that the system achieves effective noise reduction in alert data, maintains a low false alert rate, and supports root-cause analysis of alerts. The system can significantly mitigate alert overload, ensuring that the alert information received by O&M personnel is more accurate and reliable and facilitating quicker problem localization and resolution.
{"title":"Design of cloud platform alert monitoring and automatic analysis system based on random forest algorithm","authors":"Bokai Li , Mingkang Guo , Yongli Jia , Tianzi Zeng , Xiaojing Liu","doi":"10.1016/j.array.2026.100694","DOIUrl":"10.1016/j.array.2026.100694","url":null,"abstract":"<div><div>To address the issue of alert information overload in cloud platform monitoring, where unnecessary or duplicate alerts hinder the rapid identification of problem sources by operation and maintenance personnel, an automatic analysis system for cloud platform alert monitoring based on the random forest (RF) algorithm has been proposed. In the system architecture, the infrastructure layer creates multiple virtual machines through the CloudStack cloud platform, utilizing the C8051F0403 model chip as an information collector to acquire abnormal data. The core service layer, centered around the ARM7TDMI core microprocessor, designs the hardware structure of the monitoring terminal, integrating global GSM-based SMS transmission and reception to track abnormal operational states. The user interface layer supplies alert information to the system. The alert client is functionally designed by incorporating the random forest algorithm, which is capable of processing a large volume of alert log samples from the cloud platform system while avoiding overfitting. By constructing multiple decision trees, the algorithm enhances the accuracy of classification and regression tasks, effectively identifying and filtering out unnecessary or duplicate alert information, thereby enabling automated analysis of abnormal alert monitoring. Experimental results demonstrate that the system achieves effective noise reduction in alert data, maintains a low false alert rate in alert monitoring, and supports root-cause analysis of alerts. The application of this system can significantly mitigate alert overload, ensuring that the alert information received by operation and maintenance (O&M) personnel is more accurate and reliable, thereby facilitating quicker problem localization and effective resolution.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100694"},"PeriodicalIF":4.5,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of Convolutional Neural Networks on edge devices for Computer Vision tasks
Pub Date: 2026-01-23 | DOI: 10.1016/j.array.2026.100692 | Array, Volume 29, Article 100692
Andrea Bricola, Nicoletta Noceti, Daniele D’Agostino
Computer vision is currently applied in a growing number of technological systems and devices. In many cases, security and privacy constraints, or the need for real-time decision-making, require these tasks to be executed at the edge, where the images are acquired. When high performance targets must be met, Convolutional Neural Networks (CNNs) remain the gold standard: compared with more recent and more complex architectures, their simpler structure allows easier implementation and broader compatibility across hardware platforms. This paper presents a comparative analysis of the performance of several state-of-the-art CNNs on two edge computing architectures, the Jetson Nano and the OAK-D-CM4. We also considered the Coral Edge TPU, even though it appears to have been discontinued. The objective is to evaluate the achievable performance and identify the limitations inherent in the available software libraries and hardware. Particular attention is given to the trade-off between high accuracy and fast inference. To this end, we discuss two use cases targeting classical Computer Vision tasks, namely object detection and face recognition.
{"title":"Performance analysis of Convolutional Neural Networks on edge devices for Computer Vision tasks","authors":"Andrea Bricola, Nicoletta Noceti, Daniele D’Agostino","doi":"10.1016/j.array.2026.100692","DOIUrl":"10.1016/j.array.2026.100692","url":null,"abstract":"<div><div>Computer vision is currently applied in an increasing number of technological systems and devices. In many cases, security and privacy constraints, or the need for real-time decision-making, require these tasks to be executed at the edge, where images are acquired. When high performance targets must be met, Convolutional Neural Networks (CNNs) remain the gold standard since, if compared to more recent and complex architectures, they provide a simpler structure that allows for easier implementation and compatibility with different hardware platforms. This paper presents a comparative analysis of the performance of several state-of-the-art CNNs on two edge computing architectures, specifically Jetson Nano and OAK-D-CM4. We considered also the Coral Edge TPU, even if it seems discontinued. The objective is to evaluate the achievable performance and identify the limitations inherent in the available software libraries and hardware. Particular attention is given to the trade-off between high accuracy and fast inference. To this end, two use cases targeting classical Computer Vision tasks, i.e. object detection and face recognition, will be discussed.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100692"},"PeriodicalIF":4.5,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HLF-FSL: A decentralized federated split learning solution for IoT on Hyperledger Fabric
Pub Date: 2026-01-23 | DOI: 10.1016/j.array.2026.100685 | Array, Volume 29, Article 100685
Carlos Beis-Penedo, Rebeca P. Díaz-Redondo, Ana Fernández-Vilas, Manuel Fernández-Veiga, Francisco Troncoso-Pastoriza
Collaborative machine learning in sensitive domains demands scalable, privacy-aware and access-controlled solutions for enterprise-grade deployment. Conventional federated learning (FL) relies on a central server, introducing single points of failure and privacy risks, while split learning (SL) partitions models for privacy but scales poorly because of sequential training. We present HLF-FSL, a decentralized architecture that combines federated split learning (FSL) with the permissioned blockchain Hyperledger Fabric (HLF). Chaincode orchestrates split-model execution and peer-to-peer aggregation without a central coordinator, leveraging HLF’s transient fields and Private Data Collections (PDCs) to keep raw data and model activations off-chain and access-controlled. On CIFAR-10, MNIST and ImageNet-Mini, HLF-FSL matches the accuracy of a standard server-coordinated FSL baseline while reducing per-epoch training time versus Ethereum-based baselines. Performance and scalability tests quantify the Fabric coordination overhead via a component-level breakdown of SDK-facing latencies and communication volumes; empirically, this overhead increases wall-clock epoch time while preserving the same accuracy-vs-epoch behavior as a FedSplit Learning baseline.
{"title":"HLF-FSL: A decentralized federated split learning solution for IoT on hyperledger fabric","authors":"Carlos Beis-Penedo , Rebeca P. Díaz-Redondo , Ana Fernández-Vilas , Manuel Fernández-Veiga , Francisco Troncoso-Pastoriza","doi":"10.1016/j.array.2026.100685","DOIUrl":"10.1016/j.array.2026.100685","url":null,"abstract":"<div><div>Collaborative machine learning in sensitive domains demands scalable, <em>privacy-aware and access-controlled</em> solutions for enterprise-grade deployment. Conventional federated learning (FL) relies on a central server, introducing single points of failure and privacy risks, while split learning (SL) partitions models for privacy but scales poorly because of sequential training. We present HLF-FSL, a decentralized architecture that combines federated split learning (FSL) with the permissioned blockchain Hyperledger Fabric (HLF). Chaincode orchestrates split-model execution and peer-to-peer aggregation without a central coordinator, leveraging HLF’s transient fields and Private Data Collections (PDCs) to keep raw data and model activations off-chain and access-controlled. On CIFAR-10, MNIST and ImageNet-Mini, HLF-FSL matches the accuracy of a standard server-coordinated FSL baseline while reducing per-epoch training time versus Ethereum-based baselines. Performance and scalability tests quantify the Fabric coordination overhead via a component-level breakdown of SDK-facing latencies and communication volumes; empirically, this overhead increases wall-clock epoch time while preserving the same accuracy-vs-epoch behavior as a FedSplit Learning baseline.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100685"},"PeriodicalIF":4.5,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilingual multimodal cyberbullying detection through adaptive and hierarchical fusion
Pub Date: 2026-01-22 | DOI: 10.1016/j.array.2026.100689 | Array, Volume 29, Article 100689
Walaa Saber Ismail, Hikmat Ullah, Muhammad Adnan, Farman Ullah
Detecting cyberbullying in multimodal content (such as memes) is challenging due to complex interactions between images and text, often involving sarcasm, multilingual usage, and other noisy real-world factors. This paper presents a multilingual multimodal cyberbullying detection framework that combines early fusion, late fusion, and hierarchical fusion strategies within a unified architecture. The framework introduces three key modules: Adaptive Cross-Modal Token Integration (ACTI) for iterative early fusion, Context-Adaptive Ensemble with Uncertainty-Aware Gating (CAE-UAG) for dynamic late fusion based on input reliability, and a Hierarchical Contextual Fusion Network (HCFN) that feeds early fused context back into later unimodal processing for refined predictions. Our system leverages state-of-the-art pretrained vision and language models (e.g., CLIP for images and XLM-RoBERTa for text) to learn subtle cross-modal representations (e.g., sarcasm or image-text irony) and uses uncertainty modeling to handle ambiguous or noisy inputs. We evaluate the approach on two benchmark datasets: the English-language Facebook Hateful Memes and the ArMeme dataset of Arabic memes. Experimental results show that our model outperforms multiple baselines (including single-modality models and a strong CLIP-based multimodal baseline), achieving high accuracy, F1-scores, and area under the ROC curve (AUROC) across languages. Notably, it achieves state-of-the-art performance (e.g., 0.85 F1 and 0.88 AUROC on Hateful Memes), surpassing prior fusion methods. The proposed framework represents a significant step toward generalizable, culturally aware, and robust multimodal cyberbullying detection suitable for deployment across diverse social media contexts.
{"title":"Multilingual multimodal cyberbullying detection through adaptive and hierarchical fusion","authors":"Walaa Saber Ismail , Hikmat Ullah , Muhammad Adnan , Farman Ullah","doi":"10.1016/j.array.2026.100689","DOIUrl":"10.1016/j.array.2026.100689","url":null,"abstract":"<div><div>Detecting cyberbullying in multimodal content (such as memes) is challenging due to complex interactions between images and text, often involving sarcasm, multilingual usage, and other noisy real-world factors. This paper presents a multilingual multimodal cyberbullying detection framework that combines early fusion, late fusion, and hierarchical fusion strategies within a unified architecture. The framework introduces three key modules: Adaptive Cross-Modal Token Integration (ACTI) for iterative early fusion, Context-Adaptive Ensemble with Uncertainty-Aware Gating (CAE-UAG) for dynamic late fusion based on input reliability, and a Hierarchical Contextual Fusion Network (HCFN) that feeds early fused context back into later unimodal processing for refined predictions. Our system leverages state-of-the-art pretrained vision-language models (e.g., CLIP for images and XLM-RoBERTa for text) to learn subtle cross-modal representations (e.g., sarcasm or image–text irony) and uses uncertainty modeling to handle ambiguous or noisy inputs. We evaluate the approach on two benchmark datasets: the English-language Facebook Hateful Memes and the ArMeme dataset of Arabic memes. Experimental results show that our model outperforms multiple baselines (including single-modality models and a strong CLIP-based multimodal baseline), achieving high accuracy, F1-scores, and area under ROC (AUROC) across languages. Notably, it achieves state-of-the-art performance (e.g., 0.85 F1 and 0.88 AUROC on Hateful Memes), surpassing prior fusion methods. The proposed framework represents a significant step toward generalizable, culturally aware, and robust multimodal cyberbullying detection suitable for deployment across diverse social media contexts.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100689"},"PeriodicalIF":4.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolution of physics-informed neural networks: Recent architectural variants and optimization strategies
Pub Date: 2026-01-22 | DOI: 10.1016/j.array.2026.100688 | Array, Volume 29, Article 100688
Ahmad, Husna Zafar, Aneeqa Zafar, Muhammad Noveel Sadiq, A.K. Awasthi, Homan Emadifar, Karim K. Ahmed
Physics-Informed Neural Networks (PINNs) are a machine learning technique that directly incorporates the governing physics of problems, such as partial differential equations (PDEs) and ordinary differential equations (ODEs), into the neural network training. The primary goal of PINNs is to approximate solutions while satisfying given constraints and minimizing the residuals of the differential equations. PINNs have been employed to solve various problems, including integro-differential equations, fractional differential equations, and stochastic PDEs. Over the past two years, significant advancements have addressed the challenges associated with PINNs, resulting in notable improvements in accuracy and performance. This article provides a comprehensive summary of the latest methodologies contributing to these advancements, focusing on innovations in hyperparameter optimization and novel PINN variants inspired by other neural networks. Examples include MultiInNet-PINN, Transformer-based PINNs such as Tr-PINN and PINNsFormer, as well as PINNs incorporating attention mechanisms and recurrent neural network (RNN) architectures (PIANN). Additionally, recent research on domain decomposition techniques in PINN architectures is highlighted. By consolidating recent architectural and algorithmic advances, this work identifies critical research opportunities for enhancing the reliability, efficiency, and broader applicability of PINNs in scientific computing.
{"title":"Evolution of physics-informed neural networks: Recent architectural variants and optimization strategies","authors":"Ahmad , Husna Zafar , Aneeqa Zafar , Muhammad Noveel Sadiq , A.K. Awasthi , Homan Emadifar , Karim K. Ahmed","doi":"10.1016/j.array.2026.100688","DOIUrl":"10.1016/j.array.2026.100688","url":null,"abstract":"<div><div>Physics-Informed Neural Networks (PINNs) are a machine learning technique that directly incorporates the governing physics of problems, such as partial differential equations (PDEs) and ordinary differential equations (ODEs), into the neural network architecture. The primary goal of PINNs is to approximate solutions while satisfying given constraints and minimizing the residuals of the differential equations. PINNs have been employed to solve various problems, including integro-differential equations, fractional differential equations, and stochastic PDEs. Over the past two years, significant advancements have addressed the challenges associated with PINNs, resulting in notable improvements in accuracy and performance. This article provides a comprehensive summary of the latest methodologies contributing to these advancements, focusing on innovations in hyperparameter optimization and novel PINN variants inspired by other neural networks. Examples include MultiInNet-PINN, Transformer-based PINNs such as Tr-PINN and PINNsFormer, as well as PINNs incorporating attention mechanisms and recurrent neural network (RNN) architectures (PIANN). Additionally, recent research on domain decomposition techniques in PINN architectures are highlighted. By consolidating recent architectural and algorithmic advances, this research identifies critical research opportunities for enhancing the reliability, efficiency, and broader applicability of PINNs in scientific computing.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100688"},"PeriodicalIF":4.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KeepUp: A unified framework fusing knowledge extraction, social platform engagement, and user profiling for fake news detection
Pub Date: 2026-01-22 | DOI: 10.1016/j.array.2026.100687 | Array, Volume 29, Article 100687
Muhammad Wasim, Sehrash Safdar, Abdur Rehman, Zahoor Ur Rehman, Osama A. Khashan, Naif Alzahrani, Anwar Ghani
Approximately half of the global population relies on social media platforms such as Facebook, Twitter, and Instagram for news consumption. The vast volume and rapid dissemination of information on these platforms pose substantial challenges for the timely and accurate detection of fake news. Because of the detrimental effects of misinformation on public health, social trust, and political stability, researchers are intensifying efforts to develop AI-based automated systems that check news accuracy. However, the majority of existing fake news detection methods focus primarily on content-based features, often ignoring essential factors such as user profiling, social context, and knowledge extraction. The knowledge-based features necessary for effective document retrieval, stance identification, social engagement analysis, and user profile integration are often absent from datasets, even though some of them contain elements of social context and user behavior. This work offers a thorough, fully annotated dataset that integrates user profiles, stance information, social engagements, knowledge extraction, and content elements into a single resource to overcome these limitations. Building on this dataset, this study develops KeepUp, a unified system that integrates user profiles, social media activity, and knowledge extraction to detect fake news. KeepUp outperforms all baseline models, achieving a detection accuracy of 0.78 and demonstrating the effectiveness of this combined approach.
{"title":"KeepUp: A unified framework fusing knowledge extraction, social platform engagement, and user profiling for fake news detection","authors":"Muhammad Wasim , Sehrash Safdar , Abdur Rehman , Zahoor Ur Rehman , Osama A. Khashan , Naif Alzahrani , Anwar Ghani","doi":"10.1016/j.array.2026.100687","DOIUrl":"10.1016/j.array.2026.100687","url":null,"abstract":"<div><div>Approximately half of the global population relies on social media platforms such as Facebook, Twitter, and Instagram for news consumption. The vast volume and rapid dissemination of information on these platforms pose substantial challenges for the timely and accurate detection of fake news. Academics are working harder to develop AI-based automated systems to check news accuracy because of the detrimental effects of misinformation on public health, social trust, and political stability. But the majority of false news detection methods currently in use focus primarily on content-based features, often ignoring essential factors such as user profiling, social context, and knowledge extraction. The knowledge-based features necessary for effective document retrieval, position identification, social engagement analysis, and user profile integration are often absent from datasets, even though some of them contain elements of social context and user behavior. This work offers a thorough, fully annotated dataset that integrates user profiles, stance information, social engagements, knowledge extraction, and content elements into a single resource to overcome these limitations. Building on this dataset, this study creates KeepUp, a unified system that integrates user profiles, social media activity, and knowledge extraction to detect bogus news. KeepUp outperforms all baseline models, achieving a detection accuracy of 0.78, demonstrating the effectiveness of this combined approach.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100687"},"PeriodicalIF":4.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146034621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-agent deep learning on tensor fields for segmentation of ultrasound images
Pub Date: 2026-01-22 | DOI: 10.1016/j.array.2026.100686 | Array, Volume 29, Article 100686
Suman Sharma, Samart Moodleah, Stanislav S. Makhanov
Medical image analysis often relies on vector fields (VF), which are fundamental to deterministic models such as Active Contours, Level Set Methods, Phase Portrait Analysis, and artificial agent-based formulations. We experimentally demonstrate that a Deep Learning Neural Network (DLNN) capable of interpreting VF structures can substantially enhance the decision-making capabilities of artificial agents. We introduce a novel hybrid framework that integrates artificial life (AL) agents operating within a VF with a DLNN that guides their behavior. A key innovation of the model is the initialization of AL agents using streamlines derived from the VF orthogonal to the generalized gradient vector flow (GGVF) field. The VF is further transformed into a bi-directional Tensor Field (TF), where the spatial distribution and classification of degenerate points (DPs) serve as critical features. These DPs are leveraged to train AL agents through the DLNN, enabling them to follow meaningful anatomical structures. The framework employs DeepLabV3+ with ResNet50 as the backbone and is trained on 179 benign and 107 malignant breast ultrasound images collected at Thammasat University Hospital (TUH) and annotated by three leading radiologists, in addition to the BUSI and UDIAT datasets. Using 10-fold cross-validation, the proposed method achieves stable and robust performance across the three datasets. Mean Dice scores of 94.84 ± 1.63% (TUH), 94.16 ± 1.62% (BUSI), and 93.67 ± 1.51% (UDIAT) are obtained, with corresponding IoU values of 91.19 ± 1.76%, 90.21 ± 1.83%, and 89.08 ± 1.70%, demonstrating strong generalization across diverse imaging conditions. Comparative evaluations against state-of-the-art methods confirm the superiority of the proposed model. A video demonstration is available at: https://tinyurl.com/AL-DLNN.
{"title":"Multi-agent deep learning on tensor fields for segmentation of ultrasound images","authors":"Suman Sharma , Samart Moodleah , Stanislav S. Makhanov","doi":"10.1016/j.array.2026.100686","DOIUrl":"10.1016/j.array.2026.100686","url":null,"abstract":"<div><div>Medical image analysis often relies on vector fields (VF), which are fundamental to deterministic models such as Active Contours, Level Set Methods, Phase Portrait Analysis, and artificial agent–based formulations. We experimentally demonstrate that a Deep Learning Neural Network (DLNN) capable of interpreting VF structures can substantially enhance the decision-making capabilities of artificial agents. We introduce a novel hybrid framework that integrates artificial life (AL) agents operating within a VF with a DLNN that guides their behavior. A key innovation of the model is the initialization of AL agents using streamlines derived from the VF orthogonal to the generalized gradient vector flow (GGVF) field. The VF is further transformed into a bi-directional Tensor Field (TF), where the spatial distribution and classification of degenerate points (DPs) serve as critical features. These DPs are leveraged to train AL agents through the DLNN, enabling them to follow meaningful anatomical structures. The framework employs DeepLabV3+ with ResNet50 as the backbone and is trained on 179 benign and 107 malignant breast ultrasound images collected at Thammasat University Hospital (TUH) and annotated by three leading radiologists, in addition to the BUSI and UDIAT datasets. Using 10-fold cross-validation, the proposed method achieves stable and robust performance across three datasets. Mean Dice scores of <span><math><mrow><mn>94</mn><mo>.</mo><mn>84</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>63</mn><mtext>%</mtext></mrow></math></span> (TUH), <span><math><mrow><mn>94</mn><mo>.</mo><mn>16</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>62</mn><mtext>%</mtext></mrow></math></span> (BUSI), and <span><math><mrow><mn>93</mn><mo>.</mo><mn>67</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>51</mn><mtext>%</mtext></mrow></math></span> (UDIAT) are obtained, with corresponding IoU values of <span><math><mrow><mn>91</mn><mo>.</mo><mn>19</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>76</mn><mtext>%</mtext></mrow></math></span>, <span><math><mrow><mn>90</mn><mo>.</mo><mn>21</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>83</mn><mtext>%</mtext></mrow></math></span> and <span><math><mrow><mn>89</mn><mo>.</mo><mn>08</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>1</mn><mo>.</mo><mn>70</mn><mtext>%</mtext></mrow></math></span>, demonstrating strong generalization across diverse imaging conditions. Comparative evaluations against state-of-the-art methods confirm the superiority of the proposed model. 
A video demonstration is available at: <span><span>https://tinyurl.com/AL-DLNN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100686"},"PeriodicalIF":4.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146073845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
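A minimal sketch of tracing a streamline through a 2-D vector field with simple Euler steps, included only to illustrate what a streamline over a field looks like computationally; the paper seeds its artificial-life agents with streamlines of the field orthogonal to the GGVF, which is not reproduced here.

```python
# Sketch: Euler-step streamline tracing over a gridded 2-D vector field.
import numpy as np

def trace_streamline(vx, vy, start, step=0.5, n_steps=200):
    """vx, vy: vector-field components on an HxW grid; start: (row, col) seed point."""
    h, w = vx.shape
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        r, c = path[-1]
        i, j = int(round(r)), int(round(c))            # nearest-neighbor field lookup
        if not (0 <= i < h and 0 <= j < w):
            break                                      # left the image domain
        v = np.array([vy[i, j], vx[i, j]])             # (row, col) step direction
        norm = np.linalg.norm(v)
        if norm < 1e-8:
            break                                      # stop near a critical/degenerate point
        path.append(path[-1] + step * v / norm)
    return np.array(path)

# Toy rotational field on a 64x64 grid as a stand-in for a GGVF-derived field
ys, xs = np.mgrid[0:64, 0:64]
vx, vy = -(ys - 32.0), (xs - 32.0)
print(trace_streamline(vx, vy, start=(32, 48)).shape)
```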