
Array: Latest Publications

Enhancing cybersecurity in smart and cognitive cities: A systematic mapping of AI-based techniques
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100606
Rawan Alraddadi , Mohammad Alshayeb , Sajjad Mahmood , Mahmood Niazi
The rapid development of smart and cognitive cities has led to significant advancements in urban technology, but also introduced new cybersecurity challenges. This issue is aggravated in cognitive cities, where citizens act not only as service recipients but also as active data generators, heightening the need for human-centric security frameworks. This review examines the application of AI-based techniques to enhance cybersecurity, with a particular focus on human-centric concerns in smart and cognitive urban environments. We conducted a systematic mapping study and identified 173 studies on AI-based threat detection techniques, where qualitative data was collected and analyzed. These studies were analyzed and categorized according to the Cyber Security Body of Knowledge (CyBOK) framework. The findings reveal that while network security is the most extensively studied area in the CyBOK, critical domains such as human factors remain underexplored. We observed that most AI-based techniques concentrate on the detection phase, often using supervised learning, while only a minority incorporate identification, protection, or response phases. AI-driven techniques are often combined with approaches such as federated learning and blockchain, which are pivotal for safeguarding citizen data; however, challenges persist in balancing privacy-preserving methods and detection performance. This review provides valuable insights into AI-driven cybersecurity techniques. It provides a novel CyBOK-based mapping of threats, while also identifying opportunities for future research, including the development of real-world datasets tailored to cognitive cities and the refinement of human-centric solutions. These contributions offer a foundation for researchers, practitioners, and policymakers to enhance the security of smart and cognitive cities.
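The review notes that most surveyed AI techniques focus on the detection phase and typically use supervised learning. As a hedged illustration (not from the paper), a minimal supervised threat detector over synthetic "network flow" features might look like the following; the feature names and distributions are invented for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic flow features: packet rate, mean packet size, distinct ports, duration.
n = 1000
benign = rng.normal(loc=[100, 500, 5, 30], scale=[20, 80, 2, 10], size=(n, 4))
attack = rng.normal(loc=[900, 120, 60, 5], scale=[150, 40, 15, 2], size=(n, 4))
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.2f}")
```

Real smart-city traffic is far less separable than this synthetic data, which is part of why the review calls for realistic datasets tailored to cognitive cities.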
Array, Volume 28, Article 100606.
Citations: 0
Optimized GNSS tropospheric delay model for high-precision dam deformation monitoring in large height difference environments
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100600
Yan Chen , Xingyu Zhou , Xiaowu Hu , Xuexi Liu , Zhuoni Jin , Jiayan Shen , Mingyuan Zhang , Jie Wang
Tropospheric delay is a critical error in Global Navigation Satellite System (GNSS) high-precision positioning, particularly in dam deformation monitoring requiring millimeter-level accuracy. In small-area relative positioning, tropospheric delays between stations are often mitigated through inter-station differencing. However, in large height difference environments common in dam monitoring scenarios (e.g., stations spanning reservoir banks and mountain valleys), residual tropospheric delays persist after differencing, severely degrading vertical (Up direction) positioning accuracy. While analytical models approximate these residuals, their performance remains limited in complex terrains due to insufficient error characterization. This study proposes a Gradient Boosting Decision Tree (GBDT) machine learning model that integrates practical meteorological data to optimize the relative tropospheric delay correction for dam deformation monitoring under extreme height variations. Observation data of GNSS stations and meteorological parameters with a height difference of up to nearly 700 m were used to verify the validity of the model. Results show that the model effectively captures nonlinear tropospheric delay variations in height-varying dam environments. When baseline height differences exceed 400 m, the GBDT method achieves an average vertical positioning accuracy of 6.5 mm, outperforming the ERA5 model and stochastic process estimation by 22 % and 28 %, respectively. This enhancement directly benefits dam safety monitoring by improving the reliability of vertical deformation measurements. It should be noted that this is a preliminary study and presents a modeling approach rather than a finalized model. The proposed method provides a robust solution for mitigating tropospheric errors in GNSS-based dam monitoring operating in topographically challenging regions, ensuring higher data fidelity for early warning and risk assessment.
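A minimal sketch of the kind of GBDT regression described, using scikit-learn's GradientBoostingRegressor on synthetic height-difference and meteorological features; the data-generating formula below is invented for illustration and is not the paper's model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
dh = rng.uniform(0, 700, n)       # inter-station height difference (m)
temp = rng.uniform(-5, 35, n)     # temperature (degrees C)
pres = rng.uniform(900, 1030, n)  # pressure (hPa)
hum = rng.uniform(10, 100, n)     # relative humidity (%)
# Toy nonlinear residual delay (mm): grows with height difference,
# modulated by weather, plus 1 mm measurement noise. Invented formula.
delay = (0.02 * dh + 1e-4 * dh * hum / (temp + 40)
         + 0.05 * (1013 - pres) + rng.normal(0, 1, n))

X = np.column_stack([dh, temp, pres, hum])
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  random_state=1).fit(X[:1500], delay[:1500])
mae = np.mean(np.abs(model.predict(X[1500:]) - delay[1500:]))
print(f"MAE on held-out samples: {mae:.2f} mm")
```

The boosted trees recover the smooth nonlinear dependence on height difference that, per the abstract, simple analytical models miss in complex terrain.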
Array, Volume 28, Article 100600.
Citations: 0
GreenBERT: A lightweight green transformer for automated prediction of software vulnerability scores
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100536
Seyedeh Leili Mirtaheri , Ali Kafi Tafti , Hamid Heidari Soureshjani , Andrea Pugliese
Timely assessment of software vulnerabilities is critical for effective patch prioritization, yet manual Common Vulnerability Scoring System (CVSS) scoring remains slow and resource-intensive. While transformer-based models such as BERT have advanced automated scoring, their substantial computational demands conflict with sustainable, green computing objectives. This paper introduces GreenBERT, a tailored ensemble of lightweight student Transformers, each specialized on individual CVSS metrics through a targeted multi-head knowledge distillation framework. By jointly optimizing alignment with ground-truth labels and softened outputs from a fine-tuned BERT teacher, GreenBERT efficiently captures complex vulnerability patterns while significantly reducing computational overhead. Extensive experiments on the National Vulnerability Database (NVD) and a more challenging COMBINED dataset demonstrate that GreenBERT achieves an average F1-score improvement exceeding 6% over the BERT baseline, while simultaneously reducing inference time by approximately 80% and cutting energy usage and CO2 emissions by about 70%. These results position GreenBERT as a robust, scalable, and environmentally conscious solution for high-performance vulnerability scoring, effectively reconciling the traditionally conflicting goals of predictive accuracy and sustainable AI.
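The distillation objective described, blending hard-label cross-entropy with temperature-softened teacher outputs, can be sketched for a single head. The blending weight `alpha` and temperature `T` below are illustrative choices, not the paper's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL to the softened teacher."""
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels] + 1e-12).mean()
    pt = softmax(teacher_logits, T)           # softened teacher distribution
    ps = softmax(student_logits, T)           # softened student distribution
    soft = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(-1).mean() * T**2
    return alpha * hard + (1 - alpha) * soft

# Toy batch: 3 samples, 4 hypothetical CVSS severity classes.
rng = np.random.default_rng(0)
s = rng.normal(size=(3, 4))   # student logits
t = rng.normal(size=(3, 4))   # teacher (fine-tuned BERT) logits
y = np.array([0, 2, 1])
print(f"loss: {distillation_loss(s, t, y):.3f}")
```

The `T**2` factor keeps the soft-target gradient magnitude comparable across temperatures, a standard convention in distillation.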
Array, Volume 28, Article 100536.
Citations: 0
A blockchain-based framework for drug security: Leveraging EdDSA to prevent counterfeiting
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100604
Md Saifur Rahman , Nazmun Nahar , Md Hasan Imam , Mohammad Nuruzzaman Bhuyian , Md Auhidur Rahman , Mayeen Uddin Khandaker , Shams Forruque Ahmed
Drug counterfeiting is one of the most serious public health problems in the world, reducing consumer trust and costing hundreds of billions of dollars annually. However, the pharmaceutical supply chain monitoring system lacks traceability, is prone to unauthorized distribution, or has weak regulatory enforcement. While frameworks leveraging the advantages of blockchains have great promise, they primarily depend on the Rivest-Shamir-Adleman (RSA) and elliptic curve digital signature (ECDSA) algorithms. These algorithms lead to high computational costs and long processing times. They are also susceptible to side-channel attacks. This paper presents an Edwards-curve Digital Signature Algorithm (EdDSA) enabled blockchain-based framework for the pharmaceutical supply chain. It combines a decentralized ledger acting as an immutable and auditable record of the payments, and a QR-code verification module allowing consumers to verify products on the spot. The EdDSA algorithm enhances the functionality of the system with fast signature generation and verification. In addition, it has a higher resilience and efficiency against cryptographic attacks and is resource-efficient. Through experimental evaluation, it is shown that the proposed framework achieves superior performance in transaction throughput, execution time, and energy efficiency in comparison to state-of-the-art methods while obtaining a comparable level of security guarantees. Finally, the EdDSA-based framework not only improves transparency and security by mitigating threats to public health safety but also encourages stakeholders to make the right decisions, paving the way for a counterfeit-free global market for pharmaceuticals.
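The sign-and-verify flow behind QR-code product verification can be illustrated with a toy Schnorr-style signature in pure Python, standing in for EdDSA (which the paper uses). The group parameters here are illustrative and NOT secure; a real deployment would use an Ed25519 implementation such as the `cryptography` library:

```python
import hashlib
import secrets

P = 2**127 - 1   # group modulus (a Mersenne prime; toy choice)
Q = P - 1        # exponent modulus
G = 3            # generator (assumed)

def h(*parts):
    m = hashlib.sha256()
    for p in parts:
        m.update(str(p).encode())
    return int.from_bytes(m.digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def sign(sk, msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = h(r, msg)
    return r, (k + sk * e) % Q

def verify(pk, msg, sig):
    r, s = sig
    e = h(r, msg)
    # g^s = g^(k + sk*e) = r * pk^e  (mod P) for a genuine signature
    return pow(G, s, P) == (r * pow(pk, e, P)) % P

sk, pk = keygen()
record = "batch=LOT42;drug=amoxicillin;exp=2027-01"  # hypothetical QR payload
sig = sign(sk, record)
print(verify(pk, record, sig))        # genuine record verifies
print(verify(pk, record + "X", sig))  # tampered record fails
```

The consumer-facing module only needs the public key and the QR payload, which is why on-the-spot verification is cheap.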
Array, Volume 28, Article 100604.
Citations: 0
Blockchain-IoMT-enabled federated learning: An intelligent privacy-preserving control policy for electronic health records
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100586
Munusamy S, Jothi K R
The integration of the Internet of Medical Things (IoMT), blockchain technology, and federated learning can provide a new approach to keeping Electronic Health Records (EHRs) in a decentralized, secure, and privacy-protecting form. This article introduces a novel Blockchain-IoMT-based Federated Learning (FL) system that uses a smart privacy-preserving control method to solve the key problems in EHR administration, including data security, patient privacy, and interoperability. The FL paradigm confines patient data to edge nodes, limiting the opportunities for centralized attacks. Although advanced privacy-sensitive methods, such as differential privacy and homomorphic encryption, ensure that the sensitive data is not exposed to adversarial models during training and communication, blockchain technology allows recording the data immutably and auditing it transparently, as well as decentralizing data access. Experimental evaluation with the Parkinson disease data indicates that the proposed PPFL-ICP (Privacy-Preserving Federated Learning with Intelligent Control Policy) model is superior to the current practices in accuracy, robustness, and computational efficiency. The results confirm the usefulness of the framework in protecting healthcare data, enabling secure communication among the spread nodes, and setting the stage for scalable and privacy-aware healthcare systems.
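The FL paradigm described, in which only model parameters leave the edge, can be sketched as FedAvg over synthetic clients. The linear model, client sizes, and hyperparameters below are illustrative assumptions, not the paper's PPFL-ICP design:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])  # ground-truth model (synthetic)

def make_client(n):
    """A hospital's local dataset; raw records never leave this function's owner."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(n) for n in (200, 400, 300)]
w = np.zeros(3)  # global model held by the server

for _ in range(20):                                # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):                         # local epochs
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.1 * grad
        updates.append(w_local)                    # only weights are shared
        sizes.append(len(y))
    w = np.average(updates, axis=0, weights=sizes) # FedAvg aggregation

print(np.round(w, 2))
```

In the paper's setting, the shared updates would additionally pass through differential-privacy noise or homomorphic encryption, and the aggregation would be logged on-chain for auditability.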
Array, Volume 28, Article 100586.
Citations: 0
Collaborative path optimization model of power material supply chain based on hash index spatio-temporal graph neural network
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100598
Lichong Cui , Huayu Chu , Junsheng Wang , Wei Guo , Fan Yang , Zixi Hu
The daily dispatching of materials in power systems involves multifaceted operations, including data analysis and logistics warehouse management. Current research on intelligent IoT mainly focuses on the static management of electrical materials and isolated dynamic dispatching schemes. It lacks a comprehensive spatio-temporal circulation design throughout the IoT-enabled distribution process. This gap hinders the implementation of efficient allocation mechanisms. This paper considers the coupling relationship between logistics collaborative data and spatio-temporal correlations. Using the Hash Index algorithm, the logistics data are transformed into multi-objective optimization composite functions. The proposed framework integrates Spatio-Temporal Graph Neural Networks (STGNNs) to model spatio-temporal relationships among nodes adjacent to abnormal coordinates in distribution paths. By aggregating information from neighboring collaborative nodes to update node embeddings, the framework leverages the enhanced external functions of multiple adjacent nodes in decision-making processes. This approach effectively resolves optimal path selection challenges under emergency conditions while ensuring global model optimization. Experimental results show that, compared to mainstream graph neural network models, the proposed model reduces path prediction errors by an average of approximately 12.3 %, as measured by Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). Moreover, it shortens the path length by 17.6 % in multi-objective collaborative route optimization. These results confirm the model's effectiveness and superiority in routing tasks within the electric power material supply chain. The proposed solution also exhibits notable technical advantages over mainstream approaches. 
Additionally, it not only ensures operational efficiency in power logistics but also offers technical support for multi-vehicle and multi-station collaborative operations under emergency conditions in the logistics industry.
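The neighbor-aggregation step described, updating node embeddings from adjacent collaborative nodes, can be sketched as the spatial half of a graph-neural-network layer. All values below are synthetic, and the temporal modeling of a full STGNN is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 logistics nodes; adjacency marks connected distribution points (synthetic).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(5, 8))        # current node embeddings (8-dim)
W = rng.normal(size=(8, 8)) * 0.1  # layer weight matrix (random stand-in)

A_hat = A + np.eye(5)                     # add self-loops so a node keeps its own state
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # degree normalization
# Mean-aggregate neighbor embeddings, project, apply ReLU.
H_next = np.maximum(D_inv @ A_hat @ H @ W, 0.0)

print(H_next.shape)
```

Stacking such layers lets information from nodes near an abnormal coordinate propagate into the embeddings used for path decisions, which is the aggregation behavior the abstract describes.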
Array, Volume 28, Article 100598.
Citations: 0
Innovative Data Modeling Concepts for Big Data Analytics: Probabilistic Cardinality and Replicability Notations
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100608
Jelena Hađina , Joshua Fogarty , Boris Jukić
The evolving practice of big data analytics encompasses the aggregation of data from multiple sources, with the imperative of delivering metrics and reports that maintain a high standard of reliability and consistency. As stakeholders may interpret the data and associated metrics differently throughout the process, they often have to make assumptions, which can lead to inconsistencies in metrics aggregation. Our work addresses the limitation of traditional data modeling methods, which often fail to capture the nuances of the relationships among various data sources. We propose two conceptual data modeling concepts, probabilistic cardinality and metric replicability, along with definitions, notation, and illustrative examples, as well as a general big data analytics framework used for discussing the role and implementation of the concepts. The application of the proposed concepts is illustrated through two applied case studies highlighting the variety of ways in which they reduce the risk of inconsistent aggregation and reporting of metrics.
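The kind of inconsistency the paper targets, a metric silently inflated when a cardinality assumption is wrong, can be shown with a toy one-to-many join. The tables and column names are hypothetical:

```python
# Orders carry revenue; shipments are one-to-many per order.
orders = [  # (order_id, revenue)
    (1, 100.0),
    (2, 250.0),
]
shipments = [  # (order_id, shipment_id): order 1 ships in two boxes
    (1, "a"), (1, "b"), (2, "c"),
]

# Correct: aggregate revenue before bringing in shipment-level rows.
true_revenue = sum(rev for _, rev in orders)

# Naive join-then-aggregate: each extra shipment row duplicates the revenue.
joined = [(oid, rev) for oid, rev in orders
          for s_oid, _ in shipments if s_oid == oid]
inflated_revenue = sum(rev for _, rev in joined)

print(true_revenue, inflated_revenue)  # 350.0 450.0
```

An explicit cardinality annotation on the orders-to-shipments relationship would flag the naive pipeline as non-replicable before the inflated number ever reaches a report.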
Cited by: 0
Utilizing JIT Python runtime and parameter optimization for CPU-based Gaussian Splatting thumbnailer
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100611
Evgeni Genchev , Dimitar Rangelov , Kars Waanders , Sierd Waanders
Gaussian Splatting has emerged as a powerful technique for high-fidelity 3D scene representation, yet its computational demands hinder rapid visualization, particularly on CPU-based systems. This paper introduces a lightweight method for efficient thumbnail generation from Gaussian splatting data, leveraging Just-in-Time (JIT) compilation in Python to optimize performance-critical operations. By integrating the Numba JIT compiler and strategically simplifying parameters (omitting rotation data and approximating Gaussians as spheres), we achieve significant speed improvements while maintaining visual legibility. Systematic experimentation with Gaussian splat sizes (σ) and image resolutions reveals optimal trade-offs: σ values of 0.4–0.5 balance detail and speed, allowing 720p thumbnail generation in 1.8 s. JIT compilation reduces execution time by 156× compared to pure Python (from 336 to 2.33 s), transforming Python into a viable tool for performance-sensitive tasks. The CPU-focused design ensures portability across devices, addressing resource-constrained scenarios like criminal investigations or field operations. Although limitations from Python's inherent performance ceiling persist, this work demonstrates the potential of JIT-driven optimizations for lightweight 3D rendering, offering a pragmatic solution for rapid previews without GPU dependency. Future directions include migration to compiled languages and adaptive parameter tuning to further enhance scalability and real-time applicability.
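As a rough illustration of the simplifications the abstract describes (rotation omitted, each Gaussian treated as an isotropic sphere), here is a minimal pure-Python splatting loop. It is the kind of numeric hot loop one would hand to Numba's `@njit` for the reported speedups; the function name, the pixel-space `sigma`, and the input layout are illustrative assumptions, not the paper's implementation.

```python
import math

def splat_thumbnail(width, height, gaussians, sigma=0.5):
    """Render isotropic Gaussians onto a flat grayscale buffer.

    Mirrors the abstract's simplifications: rotation data is omitted and each
    Gaussian is treated as a sphere, so a single scalar sigma (here in pixels,
    an illustrative choice) controls the splat size. gaussians is a list of
    (cx, cy, intensity) tuples in pixel coordinates.
    """
    img = [0.0] * (width * height)
    r = max(1, int(3 * sigma))            # 3-sigma support window per splat
    inv2s2 = 1.0 / (2.0 * sigma * sigma)
    for cx, cy, intensity in gaussians:
        for y in range(max(0, int(cy) - r), min(height, int(cy) + r + 1)):
            for x in range(max(0, int(cx) - r), min(width, int(cx) + r + 1)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                img[y * width + x] += intensity * math.exp(-d2 * inv2s2)
    return img
```

Because the body is plain numeric loops with no Python objects inside the hot path, decorating the function with `@numba.njit` (when Numba is installed) is typically all that is needed to compile it.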
Cited by: 0
BGPCN: A BERT and GPT-2-based Relational Graph Convolutional Network for hostile Hindi information detection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100601
Angana Chakraborty , Subhankar Joardar , Dilip K. Prasad , Arif Ahmed Sekh
The proliferation of hostile content on social media platforms, particularly in low-resource languages such as Hindi, poses significant challenges to maintaining a safe online environment. This study introduces the BGPCN model, which leverages the strengths of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer 2 (GPT-2) embeddings integrated with a Relational Graph Convolutional Network (R-GCN) to identify hostile information in Hindi. The model addresses both coarse-grained (Hostile or Non-Hostile) and fine-grained (Fake, Defamation, Hate, Offensive) classification tasks. The proposed model is evaluated on the Constraint 2021 Hindi dataset, outperforming the latest methodologies with F1-Scores of 0.9816, 0.85, 0.50, 0.62, and 0.65 across the coarse-grained and fine-grained classifications. Comprehensive error analysis and ablation studies underscore the robustness of the BGPCN model while identifying opportunities for refinement. The findings demonstrate that BGPCN offers a reliable and scalable solution for hostile content detection, with potential applications in social media monitoring and content moderation. The data and code will be publicly accessible at https://github.com/mani-design/BGPCN.
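For readers unfamiliar with R-GCNs, the sketch below shows a single relational propagation step in plain Python: each node combines a self-transformation with mean-normalized, relation-specific messages from its in-neighbors. In BGPCN the node features would be BERT/GPT-2 embeddings; the tiny vectors, relation name, and helper functions here are illustrative assumptions only.

```python
def matvec(W, v):
    """Dense matrix-vector product on nested lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rgcn_layer(h, edges_by_rel, W_rel, W_self):
    """One R-GCN propagation step with mean normalization and ReLU.

    h:            list of node feature vectors
    edges_by_rel: {relation: [(src, dst), ...]} directed edges
    W_rel:        {relation: weight matrix} relation-specific transforms
    W_self:       weight matrix applied to each node's own features
    """
    out = []
    for i in range(len(h)):
        acc = matvec(W_self, h[i])
        for rel, edges in edges_by_rel.items():
            nbrs = [src for src, dst in edges if dst == i]
            if nbrs:
                norm = 1.0 / len(nbrs)    # 1 / c_{i,r} mean normalization
                for j in nbrs:
                    msg = matvec(W_rel[rel], h[j])
                    acc = [a + norm * m for a, m in zip(acc, msg)]
        out.append([max(0.0, a) for a in acc])  # ReLU
    return out
```

The per-relation weight matrices are the point of the "relational" variant: edges of different types (in a content graph, e.g., user-post vs. post-post links) transform their messages differently before aggregation.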
Cited by: 0
Semantic segmentation of terrestrial whole-sky images using the new W-Net model with the stationary wavelet transform 2D
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-12-01 DOI: 10.1016/j.array.2025.100587
D.G. Fantini , R.N. Silva , M.B.B. Siqueira
This work proposes a novel deep learning model, named W-Net, focused on the semantic segmentation of whole-sky images obtained by fisheye cameras. The model is based on the use of two U-Net networks connected in series, interlinked by skip connections and attention skip connections. Additionally, the proposed approach incorporates a color space transformation layer that converts images from the RGB space to either HSV or CIE XYZ, followed by a feature extraction layer utilizing the 2D Wavelet Transform. Novel attention mechanisms are introduced, notably the one responsible for the transition of information between the two U-Nets. To evaluate the model’s performance, a comparative analysis was conducted against four well-established models in the literature. It is noteworthy that, while three of these models are designed for binary semantic segmentation, considering only the “Sky” and “Cloud” classes, the W-Net model employs multiclass semantic segmentation, differentiating among the “Sky”, “Sun”, “Cloud” and “Edge” categories. Experimental results demonstrate the superiority of the W-Net architecture. The unweighted version achieved a Mean Intersection over Union (MeanIoU) of 87.63%, a Dice coefficient of 96.30%, an overall Accuracy of 97.40%, and a Precision of 93.07%. The weighted W-Net further improved the results, achieving a MeanIoU of 87.79%, a Dice coefficient of 96.62%, an Accuracy of 97.41%, and a Precision of 89.89%. These outcomes confirm that the proposed model outperforms the benchmark methods, and that the inclusion of weighting enhances the detection of sun regions. Finally, a qualitative evaluation was performed through a visual comparison between the manually annotated masks and those generated by the proposed model.
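The abstract does not specify the wavelet family or normalization used in the feature-extraction layer, so the following sketch only illustrates what makes such a transform "stationary": a one-level, undecimated 2-D Haar step whose four sub-bands keep the input's spatial size. The Haar kernel and the edge-clamping border handling are assumptions for illustration.

```python
def haar_swt2d_level1(x):
    """One undecimated (stationary) 2-D Haar step on a single channel.

    Returns the LL, LH, HL, HH sub-bands, each the same height and width as
    the input (no downsampling, which is what makes the transform
    "stationary"); borders are handled by clamping the neighbor index.
    """
    h, w = len(x), len(x[0])

    def nxt(i, n):  # clamp the neighbor index at the border
        return min(i + 1, n - 1)

    # horizontal pass: pairwise averages (low) and differences (high)
    lo = [[(x[i][j] + x[i][nxt(j, w)]) / 2 for j in range(w)] for i in range(h)]
    hi = [[(x[i][j] - x[i][nxt(j, w)]) / 2 for j in range(w)] for i in range(h)]

    def vpass(a):   # same filter pair applied along columns
        l = [[(a[i][j] + a[nxt(i, h)][j]) / 2 for j in range(w)] for i in range(h)]
        d = [[(a[i][j] - a[nxt(i, h)][j]) / 2 for j in range(w)] for i in range(h)]
        return l, d

    ll, lh = vpass(lo)
    hl, hh = vpass(hi)
    return ll, lh, hl, hh
```

Because every sub-band is the same size as the input, the four outputs can be stacked as extra channels and fed directly into a convolutional encoder, which is the appeal of the stationary transform for segmentation pipelines.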
Cited by: 0