
Latest publications in Array

Real-time eating monitoring: A cyber-physical systems approach
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-28 DOI: 10.1016/j.array.2026.100696
Angel Biskupovic , Miguel A. González , Fernando Huanca , Mario Torres , Maria Rodriguez-Fernandez , Felipe Núñez
Numerous health conditions, such as obesity, diabetes, and cardiovascular diseases, require strict adherence to nutritional guidelines and accurate reporting of eating behaviors, making effective eating monitoring essential. A common approach to eating monitoring involves maintaining a food diary, where subjects manually self-report eating events, a process inherently prone to imprecision. Recent technological advances have enabled the development of passive, automatic eating detection systems, typically relying on data from wearable devices to identify eating events. In this context, there is a vast literature on efforts that apply machine learning methods to this task, with great success. However, most existing studies focus only on eating detection mechanisms and fail to offer an integrated solution with practical use cases. To address this gap, in this work we present a cyber-physical systems approach to eating monitoring that integrates an eating event detection module with a cloud-based, service-oriented backbone on which numerous services are deployed, yielding an integrated solution for real-time eating monitoring.
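To make the service-oriented flow concrete, here is a minimal Python sketch of how a wearable-side detector might package a detected eating event for a cloud backend. The field names, the `make_eating_event` helper, and the example endpoint are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): shape of an eating-event
# message a wearable-side detector might publish to a cloud service layer.
# Field names and the endpoint URL are hypothetical illustrations.
import json
import time
import uuid

def make_eating_event(subject_id: str, confidence: float, source: str = "wrist-imu") -> dict:
    """Package a detected eating event for a service-oriented backend."""
    return {
        "event_id": str(uuid.uuid4()),
        "subject_id": subject_id,
        "type": "eating_event",
        "timestamp_utc": time.time(),
        "confidence": round(confidence, 3),
        "source": source,
    }

if __name__ == "__main__":
    event = make_eating_event("subject-042", confidence=0.91)
    payload = json.dumps(event)
    print(payload)
    # In a real deployment this payload would be POSTed to a cloud service, e.g.
    # urllib.request.urlopen("https://example.org/events", data=payload.encode())
```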
Citations: 0
Tabular and graph-based representations for noise and missing data in robust machine learning
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-27 DOI: 10.1016/j.array.2026.100697
Golam Imran , Md Parvez Hossain , Mahmudul Hasan , Md Tarek Hasan , Ohidujjaman
The performance of machine learning models in industrial settings is often limited by noise and missing values in real-world data. Tabular data representations, commonly used in traditional machine learning, may not effectively capture complex relationships or maintain reliability under such data degradation. This study comparatively evaluates the robustness of tabular and graph-based data representations for machine learning models when faced with data corruption. Using a real-world steel industry energy consumption dataset, we assess six models: Random Forest, XGBoost, Multi-Layer Perceptron (MLP), Graph Convolutional Network, SAGE, and Graph Attention Network, across clean, noisy, missing, and combined noise-and-missing data scenarios. A novel transformation technique converts tabular data into graph structures to facilitate relational learning in graph-based models. Graph-based models demonstrated 30.8% greater robustness than tabular models, as measured by their lower average drop in classification accuracy across missing, noisy, and combined data corruption scenarios. These findings pave the way for deploying more resilient artificial intelligence (AI) systems in complex industrial environments, emphasizing the critical role of relational data representations in robust machine learning. For validation, we conducted a complementary evaluation on the Concrete Compressive Strength Dataset from the UCI Machine Learning Repository and observed comparable results.
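As an illustration of one common tabular-to-graph transformation (not necessarily the paper's exact technique), the sketch below connects each sample to its k nearest neighbours in feature space, producing an edge list a GNN library could consume. The `knn_edge_index` helper and the choice of k are assumptions for this example.

```python
# A minimal sketch of one way to turn tabular rows into a graph for a GNN:
# connect each sample to its k nearest neighbours in feature space. This is
# an illustrative k-NN construction, not the paper's exact transformation.
import numpy as np

def knn_edge_index(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Return a (2, num_edges) array of directed edges i -> j for the k
    nearest neighbours of each row of X (excluding self-loops)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # exclude self-matches
    nbrs = np.argsort(dists, axis=1)[:, :k]      # indices of k closest rows
    src = np.repeat(np.arange(X.shape[0]), k)
    dst = nbrs.ravel()
    return np.stack([src, dst])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 8))                 # 20 samples, 8 tabular features
    edges = knn_edge_index(X, k=3)
    print(edges.shape)                           # (2, 60)
```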
Citations: 0
Assessing projected quantum kernels for the classification of IoT data
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-27 DOI: 10.1016/j.array.2026.100695
Francesco D’Amore , Luca Mariani , Carlo Mastroianni , Francesco Plastina , Luca Salatino , Jacopo Settino , Andrea Vinci
The use of quantum computing for machine learning is among the most promising applications of quantum technologies. Quantum models inspired by classical algorithms are developed to explore some possible advantages over classical approaches. A primary challenge in the development and testing of Quantum Machine Learning (QML) algorithms is the scarcity of datasets designed specifically for a quantum approach. Existing datasets, often borrowed from classical machine learning, need modifications to be compatible with current quantum hardware. In this work, we utilize a dataset generated by Internet-of-Things (IoT) devices in a format directly compatible with the proposed quantum data process, eliminating the need for feature reduction. Among quantum-inspired machine learning algorithms, the Projected Quantum Kernel (PQK) stands out for its elegant solution of projecting the data encoded in the Hilbert space into a classical space. For a prediction task concerning office room occupancy, we compare PQK with the standard Quantum Kernel (QK) and their classical counterparts to investigate how different feature maps affect the encoding of IoT data. Our findings show that the PQK demonstrates comparable effectiveness to classical methods when the proposed shallow circuit is used for quantum encoding.
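The classical post-processing step of a projected quantum kernel can be sketched as follows: assuming per-sample projected features (for example, single-qubit Pauli expectation values measured after the encoding circuit), the kernel is an RBF-style function of their distances. The random features and the `gamma` value below stand in for real quantum measurements; this is a sketch, not the paper's pipeline.

```python
# Sketch of the classical post-processing step of a projected quantum kernel:
# given per-sample projected features (e.g. single-qubit Pauli expectations
# measured after the encoding circuit), build an RBF-style kernel matrix.
# The random features below stand in for real quantum measurements.
import numpy as np

def projected_kernel(F: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """K[i, j] = exp(-gamma * ||F[i] - F[j]||^2) over projected features F."""
    sq = np.sum(F**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * F @ F.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 16 samples, 3 Pauli expectations (X, Y, Z) per qubit on a 4-qubit circuit
    F = rng.uniform(-1.0, 1.0, size=(16, 4 * 3))
    K = projected_kernel(F, gamma=0.5)
    print(K.shape, float(K[0, 0]))               # (16, 16) 1.0
```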
Citations: 0
Design of cloud platform alert monitoring and automatic analysis system based on random forest algorithm
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-24 DOI: 10.1016/j.array.2026.100694
Bokai Li , Mingkang Guo , Yongli Jia , Tianzi Zeng , Xiaojing Liu
To address the issue of alert information overload in cloud platform monitoring, where unnecessary or duplicate alerts hinder the rapid identification of problem sources by operation and maintenance personnel, an automatic analysis system for cloud platform alert monitoring based on the random forest (RF) algorithm has been proposed. In the system architecture, the infrastructure layer creates multiple virtual machines through the CloudStack cloud platform, utilizing the C8051F0403 model chip as an information collector to acquire abnormal data. The core service layer, centered around the ARM7TDMI core microprocessor, designs the hardware structure of the monitoring terminal, integrating global GSM-based SMS transmission and reception to track abnormal operational states. The user interface layer supplies alert information to the system. The alert client is functionally designed by incorporating the random forest algorithm, which is capable of processing a large volume of alert log samples from the cloud platform system while avoiding overfitting. By constructing multiple decision trees, the algorithm enhances the accuracy of classification and regression tasks, effectively identifying and filtering out unnecessary or duplicate alert information, thereby enabling automated analysis of abnormal alert monitoring. Experimental results demonstrate that the system achieves effective noise reduction in alert data, maintains a low false alert rate in alert monitoring, and supports root-cause analysis of alerts. The application of this system can significantly mitigate alert overload, ensuring that the alert information received by operation and maintenance (O&M) personnel is more accurate and reliable, thereby facilitating quicker problem localization and effective resolution.
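A minimal sketch of the alert-filtering idea, assuming synthetic log-derived features (severity, repeat count, affected hosts, recency) rather than the deployed system's actual inputs: a scikit-learn random forest learns to separate actionable alerts from duplicate or noise alerts.

```python
# Illustrative sketch (not the deployed system): a RandomForestClassifier
# trained on simple alert-log features to separate actionable alerts from
# duplicate/noise alerts. Feature names and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
# features: [severity 0-3, repeat_count_last_hour, affected_hosts, minutes_since_last_same_alert]
X = np.column_stack([
    rng.integers(0, 4, n),
    rng.poisson(2.0, n),
    rng.integers(1, 50, n),
    rng.exponential(30.0, n),
])
# crude synthetic rule: actionable if severe, not highly repeated, and widespread
y = ((X[:, 0] >= 2) & (X[:, 1] < 5) & (X[:, 2] > 5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```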
Citations: 0
Performance analysis of Convolutional Neural Networks on edge devices for Computer Vision tasks
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-23 DOI: 10.1016/j.array.2026.100692
Andrea Bricola, Nicoletta Noceti, Daniele D’Agostino
Computer vision is currently applied in an increasing number of technological systems and devices. In many cases, security and privacy constraints, or the need for real-time decision-making, require these tasks to be executed at the edge, where images are acquired. When high performance targets must be met, Convolutional Neural Networks (CNNs) remain the gold standard since, compared to more recent and complex architectures, they provide a simpler structure that allows for easier implementation and compatibility with different hardware platforms. This paper presents a comparative analysis of the performance of several state-of-the-art CNNs on two edge computing architectures, specifically Jetson Nano and OAK-D-CM4. We also considered the Coral Edge TPU, even though it appears to have been discontinued. The objective is to evaluate the achievable performance and identify the limitations inherent in the available software libraries and hardware. Particular attention is given to the trade-off between high accuracy and fast inference. To this end, two use cases targeting classical Computer Vision tasks, i.e. object detection and face recognition, will be discussed.
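A simple way to reproduce the latency side of such a comparison is to time repeated forward passes of a CNN on the target device. The toy network below stands in for the MobileNet/ResNet-class models typically benchmarked, and the warm-up and run counts are arbitrary choices, not the paper's protocol.

```python
# A minimal latency-measurement sketch for edge-style benchmarking: time the
# forward pass of a small CNN on CPU. The network here is a toy stand-in for
# the production models compared in the paper.
import time
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
).eval()

x = torch.randn(1, 3, 224, 224)                  # one camera frame
with torch.no_grad():
    for _ in range(5):                           # warm-up iterations
        model(x)
    t0 = time.perf_counter()
    runs = 50
    for _ in range(runs):
        model(x)
    dt = (time.perf_counter() - t0) / runs
print(f"mean latency: {dt * 1e3:.2f} ms  (~{1.0 / dt:.1f} FPS)")
```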
Citations: 0
HLF-FSL: A decentralized federated split learning solution for IoT on hyperledger fabric
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-23 DOI: 10.1016/j.array.2026.100685
Carlos Beis-Penedo , Rebeca P. Díaz-Redondo , Ana Fernández-Vilas , Manuel Fernández-Veiga , Francisco Troncoso-Pastoriza
Collaborative machine learning in sensitive domains demands scalable, privacy-aware and access-controlled solutions for enterprise-grade deployment. Conventional federated learning (FL) relies on a central server, introducing single points of failure and privacy risks, while split learning (SL) partitions models for privacy but scales poorly because of sequential training. We present HLF-FSL, a decentralized architecture that combines federated split learning (FSL) with the permissioned blockchain Hyperledger Fabric (HLF). Chaincode orchestrates split-model execution and peer-to-peer aggregation without a central coordinator, leveraging HLF’s transient fields and Private Data Collections (PDCs) to keep raw data and model activations off-chain and access-controlled. On CIFAR-10, MNIST and ImageNet-Mini, HLF-FSL matches the accuracy of a standard server-coordinated FSL baseline while reducing per-epoch training time versus Ethereum-based baselines. Performance and scalability tests quantify the Fabric coordination overhead via a component-level breakdown of SDK-facing latencies and communication volumes; empirically, this overhead increases wall-clock epoch time while preserving the same accuracy-vs-epoch behavior as a FedSplit Learning baseline.
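The model split at the heart of federated split learning can be sketched in a few lines of PyTorch: a client segment produces cut-layer activations ("smashed data"), a server segment finishes the forward pass, and gradients flow back across the cut. The Hyperledger Fabric coordination, PDC-based privacy, and peer-to-peer aggregation described in the paper are omitted; this only illustrates the split itself.

```python
# Sketch of the split-learning forward/backward pattern that HLF-FSL builds on.
# Blockchain coordination and privacy layers from the paper are omitted.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 10))
opt = torch.optim.SGD(list(client_net.parameters()) + list(server_net.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)                   # a client-side mini-batch
y = torch.randint(0, 10, (32,))

smashed = client_net(x)                          # "smashed data" sent to the server
logits = server_net(smashed)                     # server completes the forward pass
loss = loss_fn(logits, y)

opt.zero_grad()
loss.backward()                                  # gradients flow back across the cut
opt.step()
print("loss:", float(loss))
```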
Citations: 0
Multilingual multimodal cyberbullying detection through adaptive and hierarchical fusion
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-22 DOI: 10.1016/j.array.2026.100689
Walaa Saber Ismail , Hikmat Ullah , Muhammad Adnan , Farman Ullah
Detecting cyberbullying in multimodal content (such as memes) is challenging due to complex interactions between images and text, often involving sarcasm, multilingual usage, and other noisy real-world factors. This paper presents a multilingual multimodal cyberbullying detection framework that combines early fusion, late fusion, and hierarchical fusion strategies within a unified architecture. The framework introduces three key modules: Adaptive Cross-Modal Token Integration (ACTI) for iterative early fusion, Context-Adaptive Ensemble with Uncertainty-Aware Gating (CAE-UAG) for dynamic late fusion based on input reliability, and a Hierarchical Contextual Fusion Network (HCFN) that feeds early fused context back into later unimodal processing for refined predictions. Our system leverages state-of-the-art pretrained vision-language models (e.g., CLIP for images and XLM-RoBERTa for text) to learn subtle cross-modal representations (e.g., sarcasm or image–text irony) and uses uncertainty modeling to handle ambiguous or noisy inputs. We evaluate the approach on two benchmark datasets: the English-language Facebook Hateful Memes and the ArMeme dataset of Arabic memes. Experimental results show that our model outperforms multiple baselines (including single-modality models and a strong CLIP-based multimodal baseline), achieving high accuracy, F1-scores, and area under ROC (AUROC) across languages. Notably, it achieves state-of-the-art performance (e.g., 0.85 F1 and 0.88 AUROC on Hateful Memes), surpassing prior fusion methods. The proposed framework represents a significant step toward generalizable, culturally aware, and robust multimodal cyberbullying detection suitable for deployment across diverse social media contexts.
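One plausible form of uncertainty-aware late-fusion gating, in the spirit of CAE-UAG but not necessarily the paper's exact formulation, weights each modality's logits by the inverse entropy of its softmax distribution, so the more confident modality dominates the fused prediction. The function name and tensor shapes below are assumptions for illustration.

```python
# Sketch of an entropy-based gate for late fusion of two modalities.
# Not the paper's exact CAE-UAG mechanism; an illustrative variant.
import torch
import torch.nn.functional as F

def entropy_gated_fusion(text_logits: torch.Tensor, image_logits: torch.Tensor) -> torch.Tensor:
    logits = torch.stack([text_logits, image_logits])           # (2, batch, classes)
    probs = F.softmax(logits, dim=-1)
    ent = -(probs * probs.clamp_min(1e-9).log()).sum(-1)        # (2, batch)
    weights = F.softmax(-ent, dim=0).unsqueeze(-1)              # low entropy -> high weight
    return (weights * logits).sum(0)                            # fused logits

if __name__ == "__main__":
    torch.manual_seed(0)
    fused = entropy_gated_fusion(torch.randn(4, 2), torch.randn(4, 2))
    print(fused.shape)                                          # torch.Size([4, 2])
```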
Citations: 0
Evolution of physics-informed neural networks: Recent architectural variants and optimization strategies
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-22 DOI: 10.1016/j.array.2026.100688
Ahmad , Husna Zafar , Aneeqa Zafar , Muhammad Noveel Sadiq , A.K. Awasthi , Homan Emadifar , Karim K. Ahmed
Physics-Informed Neural Networks (PINNs) are a machine learning technique that directly incorporates the governing physics of problems, such as partial differential equations (PDEs) and ordinary differential equations (ODEs), into the neural network architecture. The primary goal of PINNs is to approximate solutions while satisfying given constraints and minimizing the residuals of the differential equations. PINNs have been employed to solve various problems, including integro-differential equations, fractional differential equations, and stochastic PDEs. Over the past two years, significant advancements have addressed the challenges associated with PINNs, resulting in notable improvements in accuracy and performance. This article provides a comprehensive summary of the latest methodologies contributing to these advancements, focusing on innovations in hyperparameter optimization and novel PINN variants inspired by other neural networks. Examples include MultiInNet-PINN, Transformer-based PINNs such as Tr-PINN and PINNsFormer, as well as PINNs incorporating attention mechanisms and recurrent neural network (RNN) architectures (PIANN). Additionally, recent research on domain decomposition techniques in PINN architectures is highlighted. By consolidating recent architectural and algorithmic advances, this research identifies critical opportunities for enhancing the reliability, efficiency, and broader applicability of PINNs in scientific computing.
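For readers new to the area, the generic PINN recipe that these variants build on can be shown with a toy problem: fit u(x) satisfying du/dx = -u with u(0) = 1 by minimising the ODE residual plus a boundary penalty. The network size, learning rate, and collocation points below are arbitrary choices for a minimal sketch.

```python
# A minimal PINN sketch: fit u(x) with du/dx = -u and u(0) = 1 on [0, 2]
# by minimising the ODE residual plus the boundary penalty.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 2.0, 128).reshape(-1, 1).requires_grad_(True)  # collocation points
x0 = torch.zeros(1, 1)                                                 # boundary point

for step in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    residual = du + u                              # du/dx + u should be 0
    loss = (residual**2).mean() + (net(x0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("u(1) ~", float(net(torch.tensor([[1.0]]))), " exact:", float(torch.exp(torch.tensor(-1.0))))
```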
Citations: 0
KeepUp: A unified framework fusing knowledge extraction, social platform engagement, and user profiling for fake news detection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-22 DOI: 10.1016/j.array.2026.100687
Muhammad Wasim , Sehrash Safdar , Abdur Rehman , Zahoor Ur Rehman , Osama A. Khashan , Naif Alzahrani , Anwar Ghani
Approximately half of the global population relies on social media platforms such as Facebook, Twitter, and Instagram for news consumption. The vast volume and rapid dissemination of information on these platforms pose substantial challenges for the timely and accurate detection of fake news. Given the detrimental effects of misinformation on public health, social trust, and political stability, researchers are intensifying efforts to develop AI-based automated systems that verify news accuracy. However, the majority of fake news detection methods currently in use focus primarily on content-based features, often ignoring essential factors such as user profiling, social context, and knowledge extraction. The knowledge-based features necessary for effective document retrieval, stance identification, social engagement analysis, and user profile integration are often absent from datasets, even though some of them contain elements of social context and user behavior. This work offers a thorough, fully annotated dataset that integrates user profiles, stance information, social engagements, knowledge extraction, and content elements into a single resource to overcome these limitations. Building on this dataset, this study creates KeepUp, a unified system that integrates user profiles, social media activity, and knowledge extraction to detect fake news. KeepUp outperforms all baseline models, achieving a detection accuracy of 0.78, demonstrating the effectiveness of this combined approach.
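The basic fusion idea, reduced to its simplest form: concatenate content, user-profile, and engagement feature vectors and train one classifier on the result. The synthetic features and the logistic-regression choice below are placeholders, not KeepUp's actual pipeline.

```python
# Illustrative sketch of feature-level fusion for fake news detection:
# content, user-profile, and engagement features concatenated into one vector.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
content = rng.normal(size=(n, 16))        # e.g. text-embedding features
profile = rng.normal(size=(n, 4))         # e.g. account age, follower ratio
engage = rng.normal(size=(n, 3))          # e.g. shares, replies, likes
X = np.hstack([content, profile, engage])
y = (content[:, 0] + 0.5 * profile[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```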
Citations: 0
Multi-agent deep learning on tensor fields for segmentation of ultrasound images
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-22 DOI: 10.1016/j.array.2026.100686
Suman Sharma , Samart Moodleah , Stanislav S. Makhanov
Medical image analysis often relies on vector fields (VF), which are fundamental to deterministic models such as Active Contours, Level Set Methods, Phase Portrait Analysis, and artificial agent–based formulations. We experimentally demonstrate that a Deep Learning Neural Network (DLNN) capable of interpreting VF structures can substantially enhance the decision-making capabilities of artificial agents. We introduce a novel hybrid framework that integrates artificial life (AL) agents operating within a VF with a DLNN that guides their behavior. A key innovation of the model is the initialization of AL agents using streamlines derived from the VF orthogonal to the generalized gradient vector flow (GGVF) field. The VF is further transformed into a bi-directional Tensor Field (TF), where the spatial distribution and classification of degenerate points (DPs) serve as critical features. These DPs are leveraged to train AL agents through the DLNN, enabling them to follow meaningful anatomical structures. The framework employs DeepLabV3+ with ResNet50 as the backbone and is trained on 179 benign and 107 malignant breast ultrasound images collected at Thammasat University Hospital (TUH) and annotated by three leading radiologists, in addition to the BUSI and UDIAT datasets. Using 10-fold cross-validation, the proposed method achieves stable and robust performance across three datasets. Mean Dice scores of 94.84±1.63% (TUH), 94.16±1.62% (BUSI), and 93.67±1.51% (UDIAT) are obtained, with corresponding IoU values of 91.19±1.76%, 90.21±1.83% and 89.08±1.70%, demonstrating strong generalization across diverse imaging conditions. Comparative evaluations against state-of-the-art methods confirm the superiority of the proposed model. A video demonstration is available at: https://tinyurl.com/AL-DLNN.
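Agent initialization along streamlines can be illustrated with a generic forward-Euler tracer: starting from a seed point, each step follows the local field direction until a degenerate (stationary) point is reached. The rotational toy field below replaces the GGVF-derived field the paper uses, so this is only a sketch of the mechanism.

```python
# Sketch of seeding and advancing agents along streamlines of a 2-D vector
# field with forward-Euler steps. The rotational field is a toy example; the
# paper derives its field from the GGVF of the image.
import numpy as np

def field(p: np.ndarray) -> np.ndarray:
    """Toy vector field: rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def streamline(p0, n_steps=200, h=0.05):
    pts = [np.asarray(p0, dtype=float)]
    for _ in range(n_steps):
        v = field(pts[-1])
        norm = np.linalg.norm(v)
        if norm < 1e-8:                     # stop at a degenerate (stationary) point
            break
        pts.append(pts[-1] + h * v / norm)  # unit-speed Euler step
    return np.array(pts)

if __name__ == "__main__":
    path = streamline([1.0, 0.0])
    print(path.shape, path[-1])             # agent positions along the streamline
```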
Citations: 0