Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100586
Munusamy S, Jothi K R
The integration of the Internet of Medical Things (IoMT), blockchain technology, and federated learning offers a new approach to maintaining Electronic Health Records (EHRs) in a decentralized, secure, and privacy-preserving form. This article introduces a novel Blockchain-IoMT-based Federated Learning (FL) system that uses an intelligent privacy-preserving control policy to address key problems in EHR administration, including data security, patient privacy, and interoperability. The FL paradigm keeps patient data on edge nodes, reducing the attack surface that centralized storage creates. Advanced privacy-preserving methods, such as differential privacy and homomorphic encryption, ensure that sensitive data is not exposed to adversarial models during training and communication, while blockchain technology provides immutable recording, transparent auditing, and decentralized data access. Experimental evaluation on a Parkinson's disease dataset indicates that the proposed PPFL-ICP (Privacy-Preserving Federated Learning with Intelligent Control Policy) model outperforms current approaches in accuracy, robustness, and computational efficiency. The results confirm the framework's usefulness in protecting healthcare data, enabling secure communication among distributed nodes, and setting the stage for scalable, privacy-aware healthcare systems.
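The abstract does not give the aggregation rule; a minimal sketch of the standard clip-and-noise Gaussian mechanism commonly used for differentially private federated averaging (function names and parameters are illustrative, not the paper's):

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def dp_federated_average(client_updates, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Average clipped client updates, then add Gaussian noise so the
    aggregate reveals little about any single client's data."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Noise standard deviation shrinks with more clients at fixed privacy budget.
    noise = rng.normal(0.0, noise_scale * clip_norm / len(clipped), size=mean.shape)
    return mean + noise
```

With `noise_scale=0` the function reduces to plain federated averaging of clipped updates, which makes the privacy/utility trade-off explicit.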
Title: Blockchain-IoMT-enabled federated learning: An intelligent privacy-preserving control policy for electronic health records (Array, vol. 28, Article 100586).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100598
Lichong Cui , Huayu Chu , Junsheng Wang , Wei Guo , Fan Yang , Zixi Hu
The daily dispatching of materials in power systems involves multifaceted operations, including data analysis and logistics warehouse management. Current research on intelligent IoT mainly focuses on the static management of electrical materials and isolated dynamic dispatching schemes. It lacks a comprehensive spatio-temporal circulation design throughout the IoT-enabled distribution process. This gap hinders the implementation of efficient allocation mechanisms. This paper considers the coupling relationship between logistics collaborative data and spatio-temporal correlations. Using the Hash Index algorithm, the logistics data are transformed into multi-objective optimization composite functions. The proposed framework integrates Spatio-Temporal Graph Neural Networks (STGNNs) to model spatio-temporal relationships among nodes adjacent to abnormal coordinates in distribution paths. By aggregating information from neighboring collaborative nodes to update node embeddings, the framework leverages the enhanced external functions of multiple adjacent nodes in decision-making processes. This approach effectively resolves optimal path selection challenges under emergency conditions while ensuring global model optimization. Experimental results show that, compared to mainstream graph neural network models, the proposed model reduces path prediction errors by an average of approximately 12.3 %, as measured by Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). Moreover, it shortens the path length by 17.6 % in multi-objective collaborative route optimization. These results confirm the model's effectiveness and superiority in routing tasks within the electric power material supply chain. The proposed solution also exhibits notable technical advantages over mainstream approaches. 
Additionally, it not only ensures operational efficiency in power logistics but also offers technical support for multi-vehicle and multi-station collaborative operations under emergency conditions in the logistics industry.
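The step "aggregating information from neighboring collaborative nodes to update node embeddings" is the standard message-passing operation in graph neural networks; a generic numpy sketch (the equal mixing weights and mean aggregator are common defaults, not necessarily the paper's STGNN formulation):

```python
import numpy as np

def aggregate_neighbors(embeddings, adjacency):
    """One round of mean-aggregation message passing: each node's new
    embedding mixes its own embedding with the mean of its neighbors'."""
    n = len(embeddings)
    out = np.empty_like(embeddings)
    for i in range(n):
        nbrs = [j for j in range(n) if adjacency[i][j]]
        nbr_mean = (embeddings[nbrs].mean(axis=0)
                    if nbrs else np.zeros_like(embeddings[i]))
        out[i] = 0.5 * embeddings[i] + 0.5 * nbr_mean
    return out
```

Stacking several such rounds lets information from nodes adjacent to an abnormal coordinate propagate into the decision at that node.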
Title: Collaborative path optimization model of power material supply chain based on hash index spatio-temporal graph neural network (Array, vol. 28, Article 100598).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100608
Jelena Hađina , Joshua Fogarty , Boris Jukić
The evolving practice of big data analytics encompasses the aggregation of data from multiple sources, with the imperative of delivering metrics and reports that maintain a high standard of reliability and consistency. Because stakeholders may interpret the data and associated metrics differently throughout the process, they often have to make assumptions, which can lead to inconsistencies in metric aggregation. Our work addresses a limitation of traditional data modeling methods, which often fail to capture the nuances of the relationships among various data sources. We propose two data modeling concepts, probabilistic cardinality and metric replicability, along with definitions, notation, and illustrative examples, as well as a general big data analytics framework used to discuss the role and implementation of these concepts. Their application is illustrated through two applied case studies highlighting the variety of ways in which they reduce the risk of inconsistent aggregation and reporting of metrics.
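The kind of inconsistency the authors target can be seen in miniature: the same ratio metric, aggregated under two defensible assumptions, yields two different answers (the data and the click-through-rate metric are illustrative, not from the paper):

```python
# Two sources report clicks and impressions; the click-through-rate (CTR)
# metric can be aggregated in two plausible ways that disagree.
sources = [{"clicks": 10, "impressions": 100},   # CTR 0.10
           {"clicks": 90, "impressions": 300}]   # CTR 0.30

# Assumption A: average the per-source ratios (each source weighted equally).
avg_of_ratios = sum(s["clicks"] / s["impressions"] for s in sources) / len(sources)

# Assumption B: pool the raw counts, then take one ratio.
ratio_of_totals = sum(s["clicks"] for s in sources) / sum(s["impressions"] for s in sources)

# Same data, different metric: 0.20 vs 0.25.
assert avg_of_ratios != ratio_of_totals
```

A replicability notation makes explicit, at the model level, which of these aggregation paths a downstream report is allowed to take.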
Title: Innovative Data Modeling Concepts for Big Data Analytics: Probabilistic Cardinality and Replicability Notations (Array, vol. 28, Article 100608).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100611
Evgeni Genchev, Dimitar Rangelov, Kars Waanders, Sierd Waanders
Gaussian Splatting has emerged as a powerful technique for high-fidelity 3D scene representation, yet its computational demands hinder rapid visualization, particularly on CPU-based systems. This paper introduces a lightweight method for efficient thumbnail generation from Gaussian splatting data, leveraging Just-in-Time (JIT) compilation in Python to optimize performance-critical operations. By integrating the Numba JIT compiler and strategically simplifying parameters (omitting rotation data and approximating Gaussians as spheres), we achieve significant speed improvements while maintaining visual legibility. Systematic experimentation with Gaussian splat sizes (σ) and image resolutions reveals optimal trade-offs: σ values of 0.4–0.5 balance detail and speed, allowing 720p thumbnail generation in 1.8 s. JIT compilation reduces execution time by 156× compared to pure Python (from 336 s to 2.33 s), transforming Python into a viable tool for performance-sensitive tasks. The CPU-focused design ensures portability across devices, addressing resource-constrained scenarios such as criminal investigations or field operations. Although Python's inherent performance ceiling persists, this work demonstrates the potential of JIT-driven optimizations for lightweight 3D rendering, offering a pragmatic solution for rapid previews without GPU dependency. Future directions include migration to compiled languages and adaptive parameter tuning to further enhance scalability and real-time applicability.
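The paper's kernel is not reproduced in the abstract; a minimal sketch of the pattern it describes — a sphere-splatting rasterization loop JIT-compiled with Numba's `@njit`, with a no-op fallback decorator so the sketch stays runnable without Numba (function and parameter names are hypothetical):

```python
import numpy as np

try:
    from numba import njit  # JIT-compiles the hot loop when Numba is available
except ImportError:         # graceful fallback: plain Python, same semantics
    def njit(func=None, **kwargs):
        return func if func is not None else (lambda f: f)

@njit
def splat_spheres(xs, ys, radii, height, width):
    """Rasterize Gaussians approximated as flat spheres (rotation data
    omitted, mirroring the paper's simplification); returns a coverage image."""
    img = np.zeros((height, width))
    for k in range(xs.shape[0]):
        r = radii[k]
        for y in range(max(0, int(ys[k] - r)), min(height, int(ys[k] + r) + 1)):
            for x in range(max(0, int(xs[k] - r)), min(width, int(xs[k] + r) + 1)):
                if (x - xs[k]) ** 2 + (y - ys[k]) ** 2 <= r * r:
                    img[y, x] += 1.0
    return img
```

The triple nested loop is exactly the shape of code Numba accelerates well: tight numeric iteration over NumPy arrays with no Python objects inside.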
Title: Utilizing JIT Python runtime and parameter optimization for CPU-based Gaussian Splatting thumbnailer (Array, vol. 28, Article 100611).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100601
Angana Chakraborty , Subhankar Joardar , Dilip K. Prasad , Arif Ahmed Sekh
The proliferation of hostile content on social media platforms, particularly in low-resource languages such as Hindi, poses significant challenges to maintaining a safe online environment. This study introduces the BGPCN model, which leverages the strengths of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer 2 (GPT-2) embeddings, integrated with a Relational Graph Convolutional Network (R-GCN), to identify hostile content in Hindi. The model addresses both coarse-grained (Hostile vs. Non-Hostile) and fine-grained (Fake, Defamation, Hate, Offensive) classification tasks. Evaluated on the Constraint 2021 Hindi dataset, the proposed model outperforms recent methods, achieving an F1-score of 0.9816 on the coarse-grained task and F1-scores of 0.85, 0.50, 0.62, and 0.65 across the fine-grained classes. Comprehensive error analysis and ablation studies underscore the robustness of the BGPCN model while identifying opportunities for refinement. The findings demonstrate that BGPCN offers a reliable and scalable solution for hostile content detection, with potential applications in social media monitoring and content moderation. The data and code will be publicly accessible at https://github.com/mani-design/BGPCN.
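The R-GCN at the core of BGPCN aggregates neighbor embeddings separately per relation type. A minimal numpy sketch of the standard R-GCN layer rule, h_i' = ReLU(W0·h_i + Σ_r Σ_{j∈N_r(i)} W_r·h_j / |N_r(i)|) (the weight matrices and mean normalizer here are the textbook choices, not necessarily the paper's exact configuration):

```python
import numpy as np

def rgcn_layer(h, edges_by_relation, weights, self_weight):
    """One relational graph convolution: a self-transform plus a
    relation-specific mean over incoming neighbors, then ReLU."""
    out = h @ self_weight.T                      # W0 h_i for every node
    for rel, edges in edges_by_relation.items():
        W = weights[rel]
        incoming = {}                            # dst -> list of src nodes
        for src, dst in edges:
            incoming.setdefault(dst, []).append(src)
        for dst, srcs in incoming.items():
            out[dst] += (h[srcs] @ W.T).mean(axis=0)
    return np.maximum(out, 0.0)                  # ReLU
```

Keeping a separate W_r per relation is what lets the network treat, say, "replies-to" edges differently from "shares" edges in a social-media graph.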
Title: BGPCN: A BERT and GPT-2-based Relational Graph Convolutional Network for hostile Hindi information detection (Array, vol. 28, Article 100601).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100587
D.G. Fantini , R.N. Silva , M.B.B. Siqueira
This work proposes a novel deep learning model, named W-Net, focused on the semantic segmentation of whole-sky images obtained by fisheye cameras. The model is based on two U-Net networks connected in series, interlinked by skip connections and attention skip connections. Additionally, the proposed approach incorporates a color space transformation layer that converts images from the RGB space to either HSV or CIE XYZ, followed by a feature extraction layer utilizing the 2D Wavelet Transform. Novel attention mechanisms are introduced, notably the one responsible for the transition of information between the two U-Nets. To evaluate the model’s performance, a comparative analysis was conducted against four well-established models in the literature. It is noteworthy that, while three of these models are designed for binary semantic segmentation, considering only the “Sky” and “Cloud” classes, the W-Net model employs multiclass semantic segmentation, differentiating among the “Sky”, “Sun”, “Cloud”, and “Edge” categories. Experimental results demonstrate the superiority of the W-Net architecture. The unweighted version achieved a Mean Intersection over Union (MeanIoU) of 87.63%, a Dice coefficient of 96.30%, an overall Accuracy of 97.40%, and a Precision of 93.07%. The weighted W-Net further improved the results, achieving a MeanIoU of 87.79%, a Dice coefficient of 96.62%, an Accuracy of 97.41%, and a Precision of 89.89%. These outcomes confirm that the proposed model outperforms the benchmark methods, and that the inclusion of weighting enhances the detection of sun regions. Finally, a qualitative evaluation was performed through a visual comparison between the manually annotated masks and those generated by the proposed model.
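The wavelet feature-extraction layer can be illustrated with a single-level 2D Haar decomposition in plain numpy. Note this is the decimated variant for brevity; the paper's title names the stationary (undecimated) transform, which keeps sub-bands at full resolution:

```python
import numpy as np

def haar2d(img):
    """Single-level decimated 2D Haar transform on an even-sized grayscale
    image: returns the LL approximation and LH/HL/HH detail sub-bands,
    each at half the input resolution."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # smooth in both axes
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH
```

On sky imagery the detail bands respond strongly at cloud boundaries, which is why wavelet features complement the raw color channels as segmentation input.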
Title: Semantic segmentation of terrestrial whole-sky images using the new W-Net model with the stationary wavelet transform 2D (Array, vol. 28, Article 100587).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100610
Esra’a Alshdaifat, Ala’a Al-Shdaifat, Fairouz Hussein
Hybrid models are recognized as one of the most effective approaches to the imbalanced data problem. In these models, data-level methods, such as over-sampling, are combined with algorithm-level methods, such as ensembles. However, the resulting models can suffer from inefficiency and ineffectiveness. This paper proposes a solution to these issues: a novel weighted F1-ordered pruning technique integrated with two state-of-the-art hybrid models, Balanced Bagging and Balanced One-versus-One. Unlike prior hybrid models designed primarily for the binary imbalance problem, the proposed approach specifically targets the more challenging multi-class classification imbalance problem. An extensive experimental evaluation with statistical validation demonstrated that the Pruned Balanced Bagging ensemble remarkably outperforms the considered hybrid models.
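The abstract names the technique but not its mechanics; a plausible minimal sketch of F1-ordered ensemble pruning — rank members by a class-weighted F1 on validation data, keep the top k, combine by majority vote (the weighting scheme and vote rule are assumptions, not the paper's specification):

```python
import numpy as np

def f1_ordered_prune(member_f1s, class_weights, keep):
    """Rank ensemble members by class-weighted F1 and keep the top `keep`.
    member_f1s: (n_members, n_classes) array of per-class validation F1s."""
    scores = member_f1s @ class_weights       # one weighted F1 per member
    order = np.argsort(scores)[::-1]          # best member first
    return sorted(order[:keep].tolist())

def majority_vote(predictions, kept):
    """Combine the kept members' hard class predictions by majority vote.
    predictions: (n_members, n_samples) integer class labels."""
    votes = predictions[kept]
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```

Weighting the per-class F1s lets rare classes dominate the ranking, so pruning discards members that only do well on the majority class.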
Title: The effect of pruning on the efficiency and effectiveness of hybrid imbalanced multiclass classification models (Array, vol. 28, Article 100610).
Pub Date: 2025-12-01. DOI: 10.1016/j.array.2025.100616
Nadia Dahmani , Amril Nazir , Ikbal Taleb , Syed M. Salman Bukhari
The convergence of Reinforcement Learning (RL) and Bin Packing Problems (BPP) is a critical field of study with profound ramifications for the logistics, manufacturing, computing, and retail industries. This paper examines the progression from simple rule-based tactics to advanced Deep Reinforcement Learning (DRL) techniques for solving BPPs. Through a systematic review of 231 papers published between 2019 and 2024, we address important research questions such as “To what extent has academic research explored the use of RL for BPP during this time frame?” and “Which specific areas of application and methodologies have been predominantly used?” Our examination highlights significant and rapid growth in research activity in this field. The study reveals a clear inclination towards DRL over traditional RL techniques, especially in complex, multi-dimensional BPP settings. It also identifies growing interest in hybrid models and transfer learning methods as potential solutions to the challenges of scalability, computational requirements, and the exploration-exploitation trade-off. This study shows that some DRL models are highly effective in complex BPP scenarios, and suggests that future research should focus on scalability, operational efficiency, and the practical implementation of theoretical achievements in industry. This study aims to promote multidisciplinary discourse and collaboration in optimisation and artificial intelligence by comprehensively analysing current achievements and identifying the remaining problems.
Title: Reinforcement learning based intelligent optimisation for bin packing problems: A review (Array, vol. 28, Article 100616).
Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100618
Zhengwei Zhang , Weien Xiao , Fenfen Li
Pixel value ordering (PVO) is a widely used reversible data hiding (RDH) technique that leverages pixel correlations within image blocks to generate high-fidelity stego-images. However, its embedding performance is limited by fixed block sizes, which fail to adapt to varying texture complexities. To address this issue, we propose a novel RDH method based on block dynamic selection. First, we employ a 2 × 3 image block as the basic embedding unit. In addition, we introduce a dual-layer embedding mechanism that partitions the cover image into checkerboard-like gray and white blocks, which enables the use of neighboring pixels to more accurately estimate the complexity of each block. For flat blocks with lower complexity values, we further subdivide the 2 × 3 block into two 1 × 3 sub-blocks, and a pixel-based pre-ordering scheme is proposed to determine the optimal ordering of pixels within the block, thereby increasing the number of expandable errors. For texture blocks, we utilize the adaptive pixel distribution density (APDD) to select the most suitable neighboring block for merging. By leveraging location information from two predicted pixels in the current block, APDD dynamically selects the optimal block, effectively enhancing its embedding potential. Experimental results demonstrate that the proposed method achieves a PSNR improvement of up to 1.46 dB compared to state-of-the-art methods under the same embedding capacity.
{"title":"High-performance reversible data hiding scheme via block dynamic selection","authors":"Zhengwei Zhang , Weien Xiao , Fenfen Li","doi":"10.1016/j.array.2025.100618","DOIUrl":"10.1016/j.array.2025.100618","url":null,"abstract":"<div><div>Pixel value ordering (PVO) is a widely used reversible data hiding (RDH) technique that leverages pixel correlations within image blocks to generate high-fidelity stego-images. However, its embedding performance is limited by fixed block sizes, which fail to adapt to varying texture complexities. To address this issue, we propose a novel RDH method based on block dynamic selection. First, we employ a 2 × 3 image block as the basic embedding unit. In addition, we introduce a dual-layer embedding mechanism that partitions the cover image into checkerboard-like gray and white blocks, which enables the use of neighboring pixels to more accurately estimate the complexity of each block. For flat blocks with lower complexity values, we further subdivide the 2 × 3 block into two 1 × 3 sub-blocks, and a pixel-based pre-ordering scheme is proposed to determine the optimal ordering of pixels within the block, thereby increasing the number of expandable errors. For texture blocks, we utilize the adaptive pixel distribution density (APDD) to select the most suitable neighboring block for merging. By leveraging location information from two predicted pixels in the current block, APDD dynamically selects the optimal block, effectively enhancing its embedding potential. 
Experimental results demonstrate that the proposed method achieves a PSNR improvement of up to 1.46 dB compared to state-of-the-art methods under the same embedding capacity.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"28 ","pages":"Article 100618"},"PeriodicalIF":4.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145681256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
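For readers unfamiliar with PVO, the embed/extract step that the abstract builds on can be sketched for a single block. This is a minimal illustration of basic pixel-value-ordering on the largest pixel only, assuming grayscale integer pixels; the paper's dual-layer checkerboard partitioning, 1 × 3 sub-block pre-ordering, and APDD block merging are omitted, and the function names are hypothetical:

```python
def pvo_embed_max(block, bit):
    """Embed one bit into the largest pixel of a block (basic PVO).

    The largest pixel is predicted by the second largest. A prediction
    error of 1 is "expandable" and carries the payload bit; larger
    errors are shifted by 1 so extraction stays unambiguous.
    Returns (stego_block, number_of_bits_embedded).
    """
    order = sorted(range(len(block)), key=lambda i: block[i])
    hi, second = order[-1], order[-2]
    error = block[hi] - block[second]
    stego = list(block)
    if error == 1:        # expandable error: carries the bit
        stego[hi] += bit
        return stego, 1
    if error > 1:         # shifted: becomes >= 3 in the stego block
        stego[hi] += 1
    return stego, 0       # error == 0: block left untouched

def pvo_extract_max(stego):
    """Recover the embedded bit (or None) and restore the block."""
    order = sorted(range(len(stego)), key=lambda i: stego[i])
    hi, second = order[-1], order[-2]
    error = stego[hi] - stego[second]
    block = list(stego)
    if error == 1:        # expanded with bit 0
        return 0, block
    if error == 2:        # expanded with bit 1
        block[hi] -= 1
        return 1, block
    if error > 2:         # shifted during embedding: undo the shift
        block[hi] -= 1
    return None, block
```

Flat blocks produce many errors equal to 1 (hence more capacity), which is why the paper's block-size and ordering choices aim to maximise the count of such expandable errors.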
Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100595
José Luis González-Blázquez , Alicia García-Holgado , Francisco José García-Peñalvo
This systematic literature review examines how agile solutions can drive organizational change in collaborative open-source software (OSS) contexts. Motivated by persistent challenges in governance, alignment, contribution lifecycles, workflow, leadership, and measurement, the review asks which prescriptive and non-prescriptive agile approaches are being applied when organizations collaborate with OSS communities, and how these approaches mitigate those issues. The study first conducts an umbrella review (2000–2024) to confirm the gap and scope, then performs a main systematic review across digital libraries using inclusion, exclusion, and quality criteria. The synthesis maps findings to a conceptual framework of nine problem areas and two change paths. Results show a dominance of prescriptive methods, especially Scrum, LeSS, SAFe, and Kanban, for workflow transparency, dependency management, and coordination, while governance and leadership models remain underexplored. Building on this evidence, the paper proposes: (1) a prescriptive change approach for low-maturity organizations that integrates holacratic governance with Scrum/LeSS, Communities of Practice, Design Thinking for innovation, Management 3.0 leadership, and KPI-oriented cultures; and (2) a non-prescriptive approach for mature organizations based on unFIX's fractal organizational design, forums and collaboration patterns, delegation levels, and outcome-focused metrics to extend co-evolution with communities. The dual pathway enables organizations to select and sequence interventions that align with their paradigm and maturity, thereby bridging organizational and community boundaries to foster sustained agility. 
The review highlights open research needs on governance mechanisms, leadership in symbiotic ecosystems, and empirical evaluations of combined scaling approaches beyond SAFe, as well as longitudinal studies on alignment, dependency management, and measurement cultures in high-variability OSS environments.
{"title":"Agile change approach for collaborative software development contexts: A systematic literature review","authors":"José Luis González-Blázquez , Alicia García-Holgado , Francisco José García-Peñalvo","doi":"10.1016/j.array.2025.100595","DOIUrl":"10.1016/j.array.2025.100595","url":null,"abstract":"<div><div>This systematic literature review examines how agile solutions can drive organizational change in collaborative open-source software (OSS) contexts. Motivated by persistent challenges in governance, alignment, contribution lifecycles, workflow, leadership, and measurement, the review asks which prescriptive and non-prescriptive agile approaches are being applied when organizations collaborate with OSS communities, and how these approaches mitigate those issues. The study first conducts an umbrella review (2000–2024) to confirm the gap and scope, then performs a main systematic review across digital libraries using inclusion, exclusion, and quality criteria. The synthesis maps findings to a conceptual framework of nine problem areas and two change paths. Results show a dominance of prescriptive methods, especially Scrum, LeSS, SAFe, and Kanban, for workflow transparency, dependency management, and coordination, while governance and leadership models remain underexplored. Building on this evidence, the paper proposes: (1) a prescriptive change approach for low-maturity organizations that integrates holacratic governance with Scrum/LeSS, Communities of Practice, Design Thinking for innovation, Management 3.0 leadership, and KPI-oriented cultures; and (2) a non-prescriptive approach for mature organizations based on unFIX's fractal organizational design, forums and collaboration patterns, delegation levels, and outcome-focused metrics to extend co-evolution with communities. 
The dual pathway enables organizations to select and sequence interventions that align with their paradigm and maturity, thereby bridging organizational and community boundaries to foster sustained agility. The review highlights open research needs on governance mechanisms, leadership in symbiotic ecosystems, and empirical evaluations of combined scaling approaches beyond SAFe, as well as longitudinal studies on alignment, dependency management, and measurement cultures in high-variability OSS environments.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"28 ","pages":"Article 100595"},"PeriodicalIF":4.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145614757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}