
Latest Articles in Systems and Soft Computing

Parameter efficient vs full fine-tuning for building children’s myopia prediction models
IF 3.6 Pub Date : 2026-01-29 DOI: 10.1016/j.sasc.2026.200452
Elena Ros-Sánchez , César Domínguez , Jónathan Heras , David Oliver-Gutiérrez , Didac Royo , Anna Boixadera Espax , Miguel Ángel Zapata

Background and objective:

The prevalence of myopia is increasing globally, with projections suggesting that by 2050, half of the population could be affected and 10% may experience high myopia. High myopia significantly increases the risk of irreversible vision loss due to complications such as myopic macular degeneration, retinal detachment, and glaucoma. Early detection in childhood is therefore crucial to implement timely interventions and prevent progression. However, identifying myopia in clinical practice remains challenging, as current methods often rely on subjective recall or require specialized tests that may not be widely available. This highlights the need for faster, more accessible, and reliable detection methods. Artificial intelligence, particularly deep learning, offers a promising alternative for quickly and accurately identifying myopia in children. This study presents the first application of deep learning methods to predict myopia in children.

Methods:

We conducted a comprehensive analysis of different families of deep learning architectures – namely convolutional neural networks, transformers, and state-based models – along with training strategies including Low-Rank Adaptation (LoRA) and full fine-tuning. These models were trained to predict spherical equivalent from retinal fundus images of children.
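As an illustration of the two training strategies compared above, the following is a minimal sketch of applying LoRA adapters versus full fine-tuning to a vision-transformer regressor for spherical equivalent. The backbone checkpoint, adapter rank, target modules, and regression head are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from transformers import ViTModel
from peft import LoraConfig, get_peft_model

class FundusRegressor(nn.Module):
    """ViT backbone + linear head predicting spherical equivalent (diopters)."""
    def __init__(self, backbone_name="google/vit-base-patch16-224-in21k"):
        super().__init__()
        self.backbone = ViTModel.from_pretrained(backbone_name)
        self.head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, pixel_values):
        cls_token = self.backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
        return self.head(cls_token).squeeze(-1)

# Parameter-efficient variant: only low-rank adapters on the attention
# projections (plus the new head) are trained; the backbone stays frozen.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"],
                      modules_to_save=["head"])
lora_model = get_peft_model(FundusRegressor(), lora_cfg)
lora_model.print_trainable_parameters()

# Full fine-tuning variant: a separate copy where every weight receives gradients.
full_model = FundusRegressor()
for p in full_model.parameters():
    p.requires_grad = True  # already the default; shown for contrast

loss_fn = nn.L1Loss()  # MAE in diopters, matching the reported metric
```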

Results:

Our experiments demonstrated that transformer- and state-based architectures outperformed convolutional models. Additionally, full fine-tuning yielded better results compared to LoRA, although the latter is more resource-efficient. The best-performing model, based on the Mamba architecture, achieved a mean absolute error (MAE) of 0.74 diopters in estimating spherical equivalent, a similar result to those obtained in the literature for adult cohorts.

Conclusions:

Deep learning models, particularly those based on transformer and Mamba architectures, show strong potential for predicting myopia in children using retinal fundus images. These findings are a step towards the development of scalable and accessible tools for early myopia detection and intervention.
Citations: 0
Mathematical Analysis of Real-Time Data Processing Methods for IoT Applications Based on Hesitant Bipolar Fuzzy Dombi Power Operators
IF 3.6 Pub Date : 2026-01-20 DOI: 10.1016/j.sasc.2026.200444
Tahir Mahmood , Hafiz Muhammad Waqas , Ubaid ur Rehman , Dragan Pamucar
The rapid growth of Internet of Things (IoT) technologies has made real-time data processing a critical component for efficient monitoring, analysis, and intelligent decision-making in dynamic and large-scale environments. IoT systems continuously generate massive volumes of heterogeneous data that must be processed with minimal latency to ensure timely responses and reliable system performance. Effective real-time data processing enables IoT applications to adapt to changing conditions, enhance operational efficiency, improve safety and reliability, and support time-sensitive services in domains such as smart cities, healthcare monitoring, industrial automation, and intelligent transportation systems. This study presents a comprehensive mathematical framework for the analysis of real-time data processing methods for IoT applications based on hesitant bipolar fuzzy (HBF) Dombi power operators. The proposed model is designed to effectively capture uncertainty, hesitation, and bipolar information that naturally arise in real-world IoT environments due to incomplete, imprecise, and conflicting data sources. By incorporating a multi-criteria decision-making (MCDM) approach, multiple real-time data processing techniques are systematically evaluated and prioritized with respect to several performance-related attributes. The proposed HBF Dombi power-based framework offers a reliable and transparent mechanism for comparing competing real-time data processing strategies and selecting the most suitable method for specific IoT scenarios. The results indicate that the proposed approach improves decision accuracy and supports better alignment between data processing methods and the complex operational requirements of modern IoT systems. This work contributes both theoretical insights and practical guidance for the design and evaluation of efficient, adaptive, and intelligent real-time IoT data processing architectures.
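The aggregation idea behind such operators can be illustrated with the standard Dombi weighted form below. This is only a minimal sketch for plain membership degrees; the hesitant bipolar structure and power weighting of the proposed operators are omitted, and the parameter k, criteria, and scores are invented for illustration.

```python
def dombi_weighted_avg(memberships, weights, k=2.0):
    """Aggregate membership degrees in (0, 1) with the Dombi t-conorm form."""
    s = sum(w * (m / (1.0 - m)) ** k for m, w in zip(memberships, weights))
    return 1.0 - 1.0 / (1.0 + s ** (1.0 / k))

# Toy ranking of three data-processing methods against weighted criteria.
weights = [0.4, 0.35, 0.25]               # e.g. latency, throughput, reliability
methods = {
    "stream_processing": [0.80, 0.70, 0.60],
    "edge_filtering":    [0.65, 0.75, 0.70],
    "batch_microjobs":   [0.50, 0.60, 0.85],
}
scores = {name: dombi_weighted_avg(m, weights) for name, m in methods.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```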
Citations: 0
Research on visualization and interactive model of highway design based on virtual reality technology
IF 3.6 Pub Date : 2026-01-16 DOI: 10.1016/j.sasc.2026.200445
Huiwen Zhou, Wenjing Si
This study combines a Convolutional Backpropagation Neural Network (Conv-BPNN) model with Virtual Reality (VR) technology to propose a novel method for highway design and traffic system optimization. The study intends to quantify vehicle energy consumption and emissions under various design scenarios, assess the effectiveness of route planning, and replicate real-world traffic flow. Using traffic flow data and the China Vehicle Emission Dataset (2020–2024), the suggested model can reliably forecast patterns of fuel consumption and CO₂ emissions across a range of traffic densities and route configurations. By integrating VR, stakeholders can evaluate and view infrastructure projects interactively, simplifying complicated traffic dynamics. Compared to conventional modeling techniques, experimental findings show a notable improvement in route choice accuracy, design efficiency, and pollution reduction. This study highlights the benefits of integrating immersive simulation technology with data-driven neural models to facilitate environmental impact assessment and sustainable transportation planning.
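A minimal sketch of a Conv-BPNN-style regressor for this kind of prediction is shown below; the layer sizes, input features, and two-output head (fuel consumption and CO₂) are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ConvBPNN(nn.Module):
    def __init__(self, n_features=6, window=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.mlp = nn.Sequential(          # backpropagation-trained dense head
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),              # [fuel consumption, CO2 emissions]
        )

    def forward(self, x):                  # x: (batch, n_features, window)
        return self.mlp(self.conv(x).squeeze(-1))

model = ConvBPNN()
dummy = torch.randn(8, 6, 32)              # 8 samples, 6 traffic features, 32 time steps
print(model(dummy).shape)                   # torch.Size([8, 2])
```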
Citations: 0
Distributed face-based tracking with prediction trees in internet of things
IF 3.6 Pub Date : 2026-01-16 DOI: 10.1016/j.sasc.2026.200446
Shayesteh Tabatabaei
Large-scale mobile target tracking in the Internet of Things (IoT) faces substantial challenges due to its centralized architecture, high communication delays, energy limitations, and inaccurate path predictions. This paper presents a novel distributed tracking approach, the IoT-based Face-Based Routing Protocol (IFBRP), which employs prefix trees for path prediction. IFBRP enhances coordination through regionalization, path forecasting via prefix trees, and dynamic sensing radius adjustment driven by learning agents. In contrast to existing centralized methods, IFBRP selects high-energy regional leaders to ensure stable routing. It predicts the locations of mobile targets using historical movement patterns stored in efficient prefix trees, activates nodes only in predicted regions, and adaptively adjusts the sensing range to optimize energy efficiency. Extensive OPNET simulations demonstrate that IFBRP outperforms the advanced FTCCOA protocol, achieving a 64.13% reduction in energy consumption, a 54.61% decrease in end-to-end delay, a 20.77% increase in throughput, a 54.88% reduction in bit error rate, and a 13.41% improvement in signal-to-noise ratio (SNR). These findings make IFBRP highly suitable for tactical aerial surveillance applications that require reliable and energy-efficient tracking of high-speed targets.
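The prefix-tree prediction step can be sketched roughly as follows; the region labels, tree depth, and interface are illustrative assumptions rather than the IFBRP specification.

```python
from collections import defaultdict

class PrefixTreeNode:
    def __init__(self):
        self.children = {}
        self.next_counts = defaultdict(int)   # region -> how often it followed this prefix

class MovementPrefixTree:
    def __init__(self, order=3):
        self.root = PrefixTreeNode()
        self.order = order                     # length of history prefix kept

    def observe(self, trajectory):
        """Insert every (prefix, next-region) pair of a recorded trajectory."""
        for i in range(len(trajectory) - 1):
            prefix = trajectory[max(0, i - self.order + 1): i + 1]
            node = self.root
            for region in prefix:
                node = node.children.setdefault(region, PrefixTreeNode())
            node.next_counts[trajectory[i + 1]] += 1

    def predict(self, recent):
        """Return the most frequent successor of the longest matching prefix."""
        best, node = None, self.root
        for region in recent[-self.order:]:
            if region not in node.children:
                break
            node = node.children[region]
            if node.next_counts:
                best = max(node.next_counts, key=node.next_counts.get)
        return best                            # predicted region: activate nodes only there

tree = MovementPrefixTree()
tree.observe(["R1", "R2", "R5", "R6"])
tree.observe(["R1", "R2", "R5", "R7"])
print(tree.predict(["R1", "R2", "R5"]))        # "R6" (ties broken by first occurrence)
```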
Citations: 0
A deep learning approach for early prediction of task failures in cloud computing environments
IF 3.6 Pub Date : 2026-01-14 DOI: 10.1016/j.sasc.2026.200442
Saba Aldomi , Husam Suleiman , Ali Shatnawi , Luay Alawneh
Prediction of task failures in cloud computing is of great importance due to its critical impact on task execution and resource utilization. Potential risks associated with task failure events can lead to dissatisfaction among clients relying on cloud services. Therefore, it is crucial to comprehend the properties and attributes of task failures in order to prevent them, or at least to develop the capability to tolerate them. While research has been conducted on failure analysis, there is a notable lack of emphasis on the application of Artificial Intelligence (AI) in characterizing and predicting task failures. This study aims to address this gap by developing a failure prediction framework capable of early identification of failed tasks and of predicting the type of failure events that represent task states throughout their life cycle. We present a hybrid feature extraction and classification framework that uses SelectKBest for feature pre-selection and a GRU network as a sequence-level feature extractor. The features extracted by the GRU are then used to train machine learning classifiers for the multi-class prediction phase, including Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). The framework offers several benefits, including reduced resource wastage and fewer Service Level Agreement (SLA) violations. The framework is evaluated on Google cluster traces in which the task states are Enable, Evict, Lost, Finish, Kill, Fail, Queue, Schedule, Update Pending, and Update Running. The findings show that a GRU model trained with the top 14 features achieves a test accuracy of 97.7% for feature extraction and that the combined GRU-RF yields the best predictive performance (overall RMSE = 0.1415, Fail-class F1 = 0.99, average AUC per class > 0.98).
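The pipeline shape described above can be sketched as below: SelectKBest pre-selects tabular features, a GRU encodes short task-event sequences, and a Random Forest classifies the extracted representation. Feature counts, sequence length, and the synthetic data are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

# 1) Pre-select the top-k tabular features per task event.
X_raw = np.random.rand(1000, 25)                 # 1000 events, 25 raw features
y = np.random.randint(0, 10, size=1000)          # 10 task states (Fail, Finish, ...)
X = SelectKBest(f_classif, k=14).fit_transform(X_raw, y)

# 2) A GRU encodes a short sequence of events per task into one feature vector.
class GRUEncoder(nn.Module):
    def __init__(self, n_features=14, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):                         # x: (batch, seq_len, n_features)
        _, h = self.gru(x)
        return h[-1]                              # (batch, hidden)

encoder = GRUEncoder()
seqs = torch.tensor(X, dtype=torch.float32).reshape(-1, 5, 14)  # 5 events per task
with torch.no_grad():
    feats = encoder(seqs).numpy()

# 3) Random Forest performs the multi-class prediction on the GRU features.
labels = y.reshape(-1, 5)[:, -1]                  # label of the final event per task
clf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
print(clf.score(feats, labels))
```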
Citations: 0
YOLOX-TD-Plus: An accurate and fast text detection model
IF 3.6 Pub Date : 2026-01-12 DOI: 10.1016/j.sasc.2026.200437
Deepak C.R., Padmavathi S.
The YOLO series of object detection algorithms has become a standard in a wide range of object detection applications. However, their application to text detection in the wild remains relatively unexplored. This paper presents a new convolutional neural network (CNN)-based model aimed at improving text detection performance through the introduction of a newly designed attention-concentrated enhanced cross-stage partial network (ACE-CSP) layer. The proposed model is built on the path aggregation feature pyramid network (PAFPN) architecture and incorporates ACE-CSP layer blocks, which we developed to facilitate improved information flow through the network and enhance its learning capability. The integration of channel and spatial attention in the ACE-CSP layers enables the network to focus more precisely on relevant text regions. This helps suppress irrelevant background activations, even in cluttered scenes. This design helps to reduce the imbalance in contributions from different feature pyramid layers, resulting in more consistent detection across varying text sizes. The proposed model, YOLOX-TD-Plus, shows significant improvements in text detection performance. We evaluated the model on the COCO-Text-v2.0 dataset, which includes multilingual and multi-oriented text instances. The experimental results show the effectiveness of the proposed architecture in solving text detection challenges in real-world scenarios. Specifically, YOLOX-TD-Plus-t improves Average Precision (AP) from 0.136 to 0.186 (a 36.8% relative improvement), and YOLOX-TD-Plus-l reaches a top AP of 0.341, surpassing the baseline’s 0.317.
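A channel-plus-spatial attention block of the kind the ACE-CSP layer integrates can be sketched as follows (CBAM-style); the reduction ratio and the exact placement inside the CSP blocks are assumptions, not the published YOLOX-TD-Plus design.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention: squeeze spatial dims, re-weight channels.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: highlight likely text regions, suppress background.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(pooled))

attn = ChannelSpatialAttention(64)
print(attn(torch.randn(2, 64, 40, 40)).shape)           # torch.Size([2, 64, 40, 40])
```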
Citations: 0
Hybrid data balancing with MLP probabilities-based categorical boosting model for robust intrusion detection system in IoT environment
IF 3.6 Pub Date : 2026-01-11 DOI: 10.1016/j.sasc.2026.200443
A. Mallikarjun, Pramoda Patro
The rapid expansion of the Internet of Things (IoT) is expected to result in an estimated 29.3 billion connected devices by 2030, generating over 79.4 zettabytes of data annually. However, IoT networks remain highly vulnerable, with nearly 57% of IoT devices susceptible to cyber threats, including Denial-of-Service (DoS) and data spoofing attacks. Existing Intrusion Detection Systems (IDS) often suffer from class imbalance, leading to biased models and reduced detection accuracy for minority attack classes. To address these challenges, a novel Data-Balanced Machine Learning IDS (DBML-IDS) is proposed, integrating data preprocessing, a Support Vector Machine (SVM) Weights-based Synthetic Minority Over-sampling Technique (SVWS) for improved data balancing, and a Multi-Layer Perceptron (MLP) Probabilities-based Categorical Boosting (MLPP-CB) classifier. The CICIoT2023 dataset, consisting of two classes (Normal and Attack), is used for evaluation. The proposed DBML-IDS framework ensures optimal feature distribution, mitigates overfitting, and enhances generalization for real-world IoT threat detection. Experimental results demonstrate that DBML-IDS achieves superior classification performance, with accuracy, precision, recall, and F1-score all reaching 0.9973, outperforming existing IDS models. These findings highlight the effectiveness of the proposed methodology in securing IoT environments against emerging cyber threats.
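The overall pipeline shape can be sketched as below, using plain SMOTE as a stand-in for the SVM-weights-based variant; the synthetic data, MLP size, and CatBoost settings are assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from catboost import CatBoostClassifier

X = np.random.rand(2000, 20)
y = (np.random.rand(2000) < 0.1).astype(int)        # imbalanced Normal/Attack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# MLP class probabilities become additional features for the boosting stage.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X_bal, y_bal)
train_aug = np.hstack([X_bal, mlp.predict_proba(X_bal)])
test_aug = np.hstack([X_test, mlp.predict_proba(X_test)])

# Categorical boosting classifier on the probability-augmented features.
cat = CatBoostClassifier(iterations=200, verbose=False, random_seed=0)
cat.fit(train_aug, y_bal)
print("accuracy:", cat.score(test_aug, y_test))
```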
Citations: 0
Enhanced OTP and facial recognition for e-learning authentication
IF 3.6 Pub Date : 2026-01-06 DOI: 10.1016/j.sasc.2026.200440
Aminou Halidou , Stéphane Gaël Raymond Ekodeck , Daramy Vandi Von Kallon , Christophe Armel Nteme , Jocelyn Edinio Zacko Gbadoubissa
Online services, particularly e-learning platforms, face significant challenges in authenticating users due to the absence of physical identification. This vulnerability can lead to security breaches that compromise the credibility of assessments. A robust authentication mechanism is crucial during the evaluation phase to ensure integrity and fairness.
A two-phase, multi-factor authentication framework is presented to strengthen security in e-learning environments. The first phase involves user authentication through credential submission and an OTP (One-Time Password) sent by SMS or email, establishing a 2FA (Two-Factor Authentication) process. The second phase employs real-time facial recognition during online examinations, utilizing a feature-based face detection technique with the Haar Cascade classifier and webcam images captured during registration.
The experimental results show an authentication accuracy of 80% in well-lit conditions and 62% in low-light environments, indicating a substantial improvement in security over existing methods. This approach provides a minimally intrusive but effective means of improving the reliability of online assessments.
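The two phases can be sketched roughly as follows; OTP delivery, expiry policy, and the face-matching step against enrollment images are assumptions beyond what the abstract states.

```python
import secrets
import time
import cv2

def issue_otp(length=6, ttl_seconds=300):
    """Generate a numeric one-time password with an expiry timestamp."""
    code = "".join(secrets.choice("0123456789") for _ in range(length))
    return code, time.time() + ttl_seconds        # send `code` by SMS or email

def verify_otp(submitted, code, expires_at):
    return time.time() < expires_at and secrets.compare_digest(submitted, code)

# Phase 2: detect the candidate's face in a captured webcam frame.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces                                   # (x, y, w, h) boxes to match against enrollment

code, expires_at = issue_otp()
print(verify_otp(code, code, expires_at))          # True within the TTL
```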
Citations: 0
Dynamic meta-learning with generative augmentation for cross-lingual Japanese few-shot named entity recognition
IF 3.6 Pub Date : 2026-01-05 DOI: 10.1016/j.sasc.2026.200438
Demei Zhu , Qin Liu , Xiaoying Pan , Xiaoli Shao
Named Entity Recognition (NER) in Japanese is a challenging task due to data scarcity, limited cross-lingual transfer capabilities, and fuzzy entity boundaries, especially in low-resource environments. This research presents a novel framework, MAML-ProtoNet++, designed to overcome these challenges. The framework combines Model-Agnostic Meta-Learning (MAML), which allows for rapid parameter adaptation, with Prototypical Networks (ProtoNet) that perform prototype-based classification for few-shot learning. Additionally, the framework integrates cross-lingual contrastive pretraining using the multilingual mT5 model, which generates diverse pseudo-samples and optimizes the semantic alignment between Japanese and English entity pairs. To address the problem of insufficient annotated data, generative augmentation techniques and boundary verification methods are employed, improving the support set and entity boundary recognition. The experimental results demonstrate that MAML-ProtoNet++ outperforms existing models with a macro-average F1 score of 0.772 under a 5-shot setting. The boundary recognition accuracy is notably high, with 0.85 for start points and 0.84 for end points. Additionally, cross-lingual pretraining significantly improves semantic alignment, with cosine similarity between Japanese and English entities increasing from 0.61 to 0.85. These results highlight the robustness and adaptability of MAML-ProtoNet++ in handling complex few-shot and cross-lingual NER tasks. The findings suggest that this framework is a promising solution for NER in low-resource languages like Japanese, offering potential for broader applications in cross-lingual transfer learning.
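The prototype-based classification step at the core of the ProtoNet component can be sketched as below; the embedding size and episode layout are placeholders, and the mT5 augmentation, MAML adaptation, and boundary verification described above are not reproduced.

```python
import torch
import torch.nn.functional as F

def prototype_logits(support_emb, support_labels, query_emb, n_classes):
    """Average support embeddings per class, score queries by cosine similarity."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    return F.cosine_similarity(query_emb.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)

# Toy 3-way 5-shot episode with 32-dimensional token embeddings.
support = torch.randn(15, 32)
labels = torch.arange(3).repeat_interleave(5)
queries = torch.randn(4, 32)
logits = prototype_logits(support, labels, queries, n_classes=3)
print(logits.argmax(dim=-1))                      # predicted entity class per query
```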
Citations: 0
Cargo positioning and sorting model based on deep learning and sampling evaluation
IF 3.6 Pub Date : 2026-01-05 DOI: 10.1016/j.sasc.2026.200441
Bing Xue, Zhaopeng Zhu
Driven by the explosive growth of e-commerce, logistics centers face low efficiency in locating and sorting goods, and traditional manual processing is gradually shifting toward automation and intelligence. By combining feature pyramid networks with deep image data, this study proposes a cargo positioning and sorting model that greatly enhances the classification and localization of soft and deformable goods. The selection of robot grasping positions is also improved to raise the accuracy and efficiency of sorting. Experimental results indicated that the network achieved an AP value of about 91.9%, while the area under the offline curve was about 94.8%. The model achieves fast, high-precision goods sorting in a dynamic environment, effectively reducing dependence on manpower and operational errors, and provides a practical solution for improving the responsiveness and processing capability of the supply chain.
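The feature-pyramid fusion step can be sketched with torchvision's FPN as below; the channel sizes and feature-map names are illustrative assumptions, not the model's actual backbone or grasp-point head.

```python
from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256], out_channels=128)

# Backbone feature maps at three resolutions for one 8-image batch.
features = OrderedDict(
    p3=torch.randn(8, 64, 80, 80),
    p4=torch.randn(8, 128, 40, 40),
    p5=torch.randn(8, 256, 20, 20),
)
fused = fpn(features)
for name, fmap in fused.items():
    print(name, fmap.shape)        # each level now has 128 channels for the detection head
```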
Citations: 0