
Latest articles from Intelligent Systems with Applications

Formal concept views for explainable boosting: A lattice-theoretic framework for Extreme Gradient Boosting and Gradient Boosting Models
IF 4.3 Pub Date: 2025-09-01 Epub Date: 2025-08-26 DOI: 10.1016/j.iswa.2025.200569
Sherif Eneye Shuaib, Pakwan Riyapan, Jirapond Muangprathub
Tree-based ensemble methods, such as Extreme Gradient Boosting (XGBoost) and Gradient Boosting models (GBM), are widely used for supervised learning due to their strong predictive capabilities. However, their complex architectures often hinder interpretability. This paper extends a lattice-theoretic framework originally developed for Random Forests to boosting algorithms, enabling a structured analysis of their internal logic via formal concept analysis (FCA).
We formally adapt four conceptual views (leaf, tree, tree predicate, and interordinal predicate) to account for the sequential learning and optimization processes unique to boosting. Using the binary-class version of the car evaluation dataset from the OpenML CC18 benchmark suite, we conduct a systematic parameter study to examine how hyperparameters, such as tree depth and the number of trees, affect both model performance and conceptual complexity. Random Forest results from prior literature are used as a comparative baseline.
The results show that XGBoost yields the highest test accuracy, while GBM demonstrates greater stability in generalization error. Conceptually, boosting methods generate more compact and interpretable leaf views but preserve rich structural information in higher-level views. In contrast, Random Forests tend to produce denser and more redundant concept lattices. These trade-offs highlight how boosting methods, when interpreted through FCA, can strike a balance between performance and transparency.
Overall, this work contributes to explainable AI by demonstrating how lattice-based conceptual views can be systematically extended to complex boosting models, offering interpretable insights without sacrificing predictive power.
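As a toy illustration of the formal concept analysis machinery behind these views, the sketch below enumerates all formal concepts of a tiny binary "leaf view" context in plain Python. The sample names, leaf attributes, and context are invented for illustration and are unrelated to the car evaluation dataset used in the paper.

```python
from itertools import combinations

# Toy binary "leaf view" context: objects are samples, attributes are the
# boosted-tree leaves each sample reaches (invented data).
context = {
    "s1": {"leaf_a", "leaf_c"},
    "s2": {"leaf_a", "leaf_b"},
    "s3": {"leaf_b", "leaf_c"},
    "s4": {"leaf_a", "leaf_b", "leaf_c"},
}
attributes = {"leaf_a", "leaf_b", "leaf_c"}

def extent(intent_set):
    # All objects possessing every attribute in the intent.
    return {g for g, attrs in context.items() if intent_set <= attrs}

def intent(objects):
    # All attributes shared by every object in the extent.
    if not objects:
        return set(attributes)
    return set.intersection(*(context[g] for g in objects))

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A);
# closing every attribute subset enumerates all of them.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        b = intent(extent(set(combo)))   # closure of the attribute set
        concepts.add((frozenset(extent(b)), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(a), "<->", sorted(b))
```

The printed pairs, ordered by extent size, form the concept lattice of this context; the leaf views in the paper are built from contexts of exactly this shape, only much larger.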
Citations: 0
Visual question answering for medical diagnosis
Pub Date: 2025-09-01 Epub Date: 2025-07-09 DOI: 10.1016/j.iswa.2025.200545
Nawel Ben Chaabane, Mohamed Bal-Ghaoui
The use of Artificial Intelligence (AI) in medical diagnosis is a breakthrough in healthcare, improving both accuracy and efficiency. Recently, significant advancement has been made toward the development of multimodal AI systems that can process and integrate multiple types of data, or modalities. This ability is key for interpreting medical images, such as X-rays, CT, and MRI scans, as well as textual data like electronic health records (EHRs) and clinical notes. In this era, Visual Question Answering (VQA) systems have demonstrated a potential use case in the medical domain. These systems, typically based on Vision-Language Models (VLMs), can answer natural language questions about medical images, offering precise and relevant responses that help doctors make better decisions.
In this article, we evaluate existing medical VQA models, along with general-purpose and trending ones, for making medical diagnoses. In particular, we focus on addressing abnormality questions, which the literature considers challenging. Our approach consists of evaluating the Zero-Shot (ZS) general and domain-specific capabilities of different models using two created datasets, then fine-tuning the best-performing models on the training set of the abnormality dataset before evaluating their performance quantitatively and qualitatively. IdeficMed, a generative domain-specific model, achieved better consistency and VQA outcomes by training only 0.22% of its parameters. Additionally, we employed uncertainty quantification techniques (e.g., Monte Carlo dropout) to assess the confidence of the fine-tuned models in their predictions. We also conducted a sensitivity analysis on input perturbations, such as image noise and ambiguous questions.
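Monte Carlo dropout, one of the uncertainty quantification techniques mentioned above, can be sketched in a few lines: keep dropout active at inference time and aggregate many stochastic forward passes. The one-layer "abnormality" head, its weights, and the input features below are hypothetical; the paper applies the idea to its fine-tuned VQA models, not to this toy.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical logistic "abnormality yes/no" head with one dropout layer.
# The MC-dropout recipe: keep dropout active at inference time and
# aggregate T stochastic forward passes into a mean and a spread.
WEIGHTS = [0.8, -0.4, 0.6, 0.3]
FEATURES = [1.0, 0.5, -0.2, 0.7]
P_DROP = 0.5

def stochastic_forward(x):
    # Randomly drop each weighted term, then rescale (inverted dropout).
    kept = [w * xi for w, xi in zip(WEIGHTS, x) if random.random() >= P_DROP]
    z = sum(kept) / (1.0 - P_DROP)
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid probability

samples = [stochastic_forward(FEATURES) for _ in range(200)]
mean_p = statistics.fmean(samples)        # predictive mean
uncertainty = statistics.pstdev(samples)  # spread = model uncertainty
print(f"p(abnormal) ~ {mean_p:.3f} +/- {uncertainty:.3f}")
```

A large spread across the stochastic passes flags a prediction the model is unsure about, which is exactly what a confidence assessment needs to surface.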
Citations: 0
Machine learning alloying design of biodegradable zinc alloy for bone implants using XGBoost and Bayesian optimization
Pub Date: 2025-09-01 Epub Date: 2025-06-21 DOI: 10.1016/j.iswa.2025.200549
Mohanad Deif, Hani Attar, Mohammad Aljaidi, Ayoub Alsarhan, Dimah Al-Fraihat, Ahmed Solyman
Developing implants using biodegradable materials eliminates the need for secondary surgery, improves both mechanical and biological properties, and enhances biocompatibility. This study proposes a machine learning approach based on Bayesian optimization (BO) and the eXtreme Gradient Boosting (XGBoost) algorithm to design a biodegradable zinc (Zn) alloy and forecast the percentages of elements in the Zn alloy for bone implants. The dataset employed in this study comprised 1182 samples of Zn alloys obtained from supplementary articles from Google Scholar and the MatWeb database. For forecasting the mechanical parameters Yield Stress (YS), Ductility, and Ultimate Tensile Strength (UTS), the proposed method achieved maximum R² values of 0.85, 0.87, and 0.81, demonstrating its exceptional predictive capacity. In addition, the model created a biodegradable Zn alloy with a UTS of 363.55 MPa, a YS of 318.93 MPa, and a ductility of 14%, which are regarded as good mechanical characteristics that meet bone implant criteria. The BO-XGBoost model can expedite the production of the proper alloy for several medical applications, saving time, money, and effort.
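The R² (coefficient of determination) scores used to evaluate the YS, Ductility, and UTS regressors can be computed as below; the "measured" UTS values and model forecasts are made up for illustration and are not taken from the paper's dataset.

```python
# R^2 (coefficient of determination) for a regression forecast:
# 1 - (residual sum of squares) / (total sum of squares).
def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total
    return 1.0 - ss_res / ss_tot

y_true = [310.0, 355.0, 298.0, 342.0]   # e.g. measured UTS in MPa (invented)
y_pred = [305.0, 350.0, 310.0, 338.0]   # model forecasts (invented)
print(round(r_squared(y_true, y_pred), 3))  # -> 0.902
```

An R² of 1.0 means perfect forecasts, 0.0 means no better than predicting the mean, so the paper's 0.81 to 0.87 values indicate most of the variance in each mechanical property is captured.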
Citations: 0
A real-time semantic segmentation network leveraging spatial and contextual features for enhanced scene understanding
Pub Date: 2025-09-01 Epub Date: 2025-06-24 DOI: 10.1016/j.iswa.2025.200542
Haifeng Sima, Meng Gao, Lanlan Liu
Real-time semantic segmentation of images requires both rich contextual and accurate spatial information. However, the repeated downsampling in deep convolutional neural networks often leads to the loss of such information, reducing segmentation accuracy. To address these problems, we propose SPCONet, a lightweight real-time semantic segmentation network that integrates spatial and contextual features. The network incorporates three key modules: (1) a Spatial Feature Aggregation Module (SFAM) that captures fine spatial details from shallow layers using spatially separable convolutions with multiple kernel sizes; (2) a Contextual Information Retrieval Module (CIRM) that extracts semantic context from deeper layers using dynamic convolution; (3) an Attention Fusion Module (AFM) that combines spatial and contextual features via local and global attention mechanisms. Quantitative experiments show that SPCONet achieves 77.5% and 75.3% mIoU at 74 FPS and 82 FPS on the Cityscapes and CamVid datasets, respectively. These results suggest that SPCONet provides an effective balance between segmentation accuracy and real-time inference capability.
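The spatially separable convolutions used in SFAM factor one k×k kernel into a k×1 pass followed by a 1×k pass, cutting the weight count from k² to 2k, which is where the module's lightweight cost comes from. A minimal sketch with a toy 5×5 feature map and a rank-1 Sobel-like kernel (both invented for illustration) showing the two formulations agree:

```python
# 'Valid' 2-D cross-correlation on nested lists; toy data only.
def conv_valid(img, ker):
    kh, kw = len(ker), len(ker[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(ker[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(out_w)] for y in range(out_h)]

col = [[1.0], [2.0], [1.0]]        # 3x1 vertical pass (3 weights)
row = [[1.0, 0.0, -1.0]]           # 1x3 horizontal pass (3 weights)
full = [[c[0] * r for r in row[0]] for c in col]  # equivalent 3x3 (9 weights)

img = [[float(x + y) for x in range(5)] for y in range(5)]

separable = conv_valid(conv_valid(img, col), row)  # two cheap passes
direct = conv_valid(img, full)                     # one full pass
assert separable == direct   # identical outputs, 6 weights instead of 9
print(separable)
```

The equality holds exactly when the full kernel is rank-1 (an outer product of the two passes); in practice networks learn the two small passes directly and accept the restriction in exchange for fewer weights and operations.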
Citations: 0
Emotion recognition and forecasting from wearable data via cluster-guided attention with cross-species pretraining
IF 4.3 Pub Date: 2025-09-01 Epub Date: 2025-07-30 DOI: 10.1016/j.iswa.2025.200560
Wonjik Kim, Gaku Kutsuzawa, Michiyo Maruyama
Wearable devices enable the continuous acquisition of physiological signals, offering the potential for real-time emotion monitoring in daily life. However, emotion recognition remains challenging due to individual differences, label ambiguity, and limited annotated data. This study proposes a lightweight, cluster-guided attention model for binary emotion recognition (positive vs. negative) and forecasting (up to two hours ahead) from wearable signals such as heart rate and step count. To improve generalization, we leverage unsupervised clustering in the latent space and integrate cross-species pretraining using structured behavioral and physiological data from mice. Our framework reduces annotation burden through an emoji-based self-report interface and performs both within- and across-subject validation. Experimental results on human wearable data demonstrate that our method outperforms classical and lightweight deep learning baselines in both accuracy and macro-F1 score, achieving approximately 74.4% accuracy (macro-F1: 71.5%) for current emotion recognition, 72.9% accuracy (macro-F1: 70.7%) for 1-h forecasting, and 65.5% accuracy (macro-F1: 63.0%) for 2-h forecasting. Moreover, mouse-based pretraining yields consistent performance gains, especially on longer-horizon prediction tasks. These findings suggest that biologically informed attention mechanisms and cross-domain knowledge transfer can significantly enhance emotion modeling from low-resource wearable data.
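The macro-F1 score reported alongside accuracy is the unweighted mean of the per-class F1 scores, which keeps a minority class from being hidden by the majority in the binary positive/negative task. A minimal sketch with invented labels:

```python
# Macro-F1: the unweighted mean of per-class F1 scores (labels invented).
def macro_f1(y_true, y_pred):
    f1_scores = []
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)

y_true = ["pos", "pos", "neg", "neg", "pos", "neg"]
y_pred = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(macro_f1(y_true, y_pred), 3))  # -> 0.667
```

Because each class contributes equally regardless of its frequency, macro-F1 dropping below accuracy (as in the 2-h forecasting numbers above) signals that one emotion class is recognized worse than the other.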
Citations: 0
Certified Accuracy and Robustness: How different architectures stand up to adversarial attacks
Pub Date: 2025-09-01 Epub Date: 2025-07-07 DOI: 10.1016/j.iswa.2025.200555
Azryl Elmy Sarih, Nagender Aneja, Ong Wee Hong
Adversarial attacks are a concern for image classification with neural networks. Numerous methods have been developed to minimize the effects of such attacks; adversarial training has proven to be the most successful defense to date. Due to the nature of adversarial attacks, it is difficult to assess a network's capability to defend against them. The standard way of assessing a network's performance on supervised image classification tasks is accuracy. However, this measure, while still important, is insufficient once adversarial attacks are included. A metric called certified accuracy is instead used to assess network performance when samples are perturbed by adversarial noise. This paper supplements certified accuracy with an abstention rate to give more insight into a network's robustness. The abstention rate measures the percentage of samples for which the network fails to keep its prediction unchanged as the perturbation strength increases from zero to a specified strength. The study focuses on popular, well-performing CNN-based architectures, specifically EfficientNet-B7, ResNet-50, ResNet-101, and Wide-ResNet-101, and on transformer architectures such as CaiT and ViT-B/16. The selected architectures are trained with both adversarial and standard methods and then certified on the CIFAR-10 dataset perturbed with Gaussian noise of different strengths. Our results show that transformers are more resilient to adversarial attacks than CNN-based architectures by a significant margin.
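As a rough sketch of how the two metrics interact, suppose each test sample comes with a certified robustness radius (e.g., from a randomized-smoothing certificate) and a correctness flag. Certified accuracy at strength eps then counts samples that are both correct and certified out to at least eps, while the abstention rate counts samples whose prediction is not guaranteed to stay unchanged. The radii and labels below are invented; the paper certifies real models on CIFAR-10.

```python
# Per-sample (certified_radius, correctly_classified) records -- invented.
records = [
    (0.50, True), (0.25, True), (0.10, False),
    (0.75, True), (0.30, False), (0.60, True),
]

def certified_accuracy(recs, eps):
    # Correct AND certified robust out to radius >= eps.
    return sum(r >= eps and ok for r, ok in recs) / len(recs)

def abstention_rate(recs, eps):
    # Prediction is guaranteed stable only within the certified radius.
    return sum(r < eps for r, _ in recs) / len(recs)

for eps in (0.0, 0.25, 0.5):
    print(f"eps={eps}: certified acc={certified_accuracy(records, eps):.3f}, "
          f"abstention={abstention_rate(records, eps):.3f}")
```

Sweeping eps upward makes certified accuracy fall and the abstention rate rise, and how quickly each curve moves is the architecture comparison the paper reports.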
Citations: 0
Situation-aware Cyber–Physical–Social System for Cultural Heritage
Pub Date: 2025-09-01 Epub Date: 2025-06-13 DOI: 10.1016/j.iswa.2025.200544
Francesco Colace, Giuseppe D’Aniello, Massimo De Santo, Rosario Gaeta, Gabriel Zuchtriegel
The safeguarding of cultural heritage (CH) is an issue of great interest for countries such as Italy that are known for their thousand-year history. Cultural properties must be maintained regularly and effectively so that their condition remains good at all times. Human operators have always been in charge of monitoring and maintaining these properties, with domain experts deciding when and how maintenance has to be done. In our paper, we define a CH asset as a Cyber–Physical–Social System. We designed and propose a prototype of a Situation-aware Cyber–Physical–Social System (CPSS) for Cultural Heritage capable of supporting the human operator's situation awareness. The CPSS is a machine learning (ML) and expert-based system equipped with modules for capturing information, which is then processed with ML techniques to identify asset maintenance issues, understand how they will evolve, and determine the priorities of the maintenance activities to be performed. We present three case studies concerning, respectively, four structures in the archaeological site of Pompeii, three in the archaeological site of Paestum, and three in the area of the archaeological site of the Colosseum in Rome. To safeguard these structures, the system uses vulnerability indexes calculated from prior knowledge about the structures, maintenance issues detected in aerial photos with a YoloV7 detection model, and context space theory applied to weather and anthropogenic flow data. We show how critical and dangerous situations in these zones can be identified, with the vulnerability indexes helping to prevent damaged and dangerous areas, which indeed appeared damaged and flooded in the photos, from being left in that state when adverse weather phenomena arrive.
{"title":"Situation-aware Cyber–Physical–Social System for Cultural Heritage","authors":"Francesco Colace ,&nbsp;Giuseppe D’Aniello ,&nbsp;Massimo De Santo ,&nbsp;Rosario Gaeta ,&nbsp;Gabriel Zuchtriegel","doi":"10.1016/j.iswa.2025.200544","DOIUrl":"10.1016/j.iswa.2025.200544","url":null,"abstract":"<div><div>The safeguard of cultural heritage (CH) is one of the most of interest issues for all the countries, like Italy, known for their thousand-year history. Cultural properties have to be maintained regularly and effectively so that the condition of such properties remains good at all times. Human operators have always been the ones in charge of monitoring and maintaining these properties, with domain experts capable of understanding when and how the maintenance has to be done. In our paper, we define a CH asset as a Cyber–Physical–Social System. We designed and proposed a prototype of a Situation-aware Cyber–Physical–Social System (CPSS) for Cultural Heritage, capable of supporting the human operator situation awareness. The CPSS is a Machine Learning (ML) and expert based system equipped with modules for capturing information, which are then processed with ML techniques to identify asset maintenance issues, understanding how they will evolve, and what are the priorities in the maintenance activity to be performed. We propose three case studies relating respectively to: four structures in the archaeological site of Pompeii, three in the archaeological site of Paestum, and three related to the area the archaeological site of the Colosseum, in Rome, for the safeguarding of which the system uses vulnerability indexes, calculated using prior knowledge related to these structures, maintenance issues detected from aerial photos using a YoloV7 detection model, and context space theory with weather and anthropogenic flow data. 
We showed how it was possible to identify critical and dangerous situations for these zones, with vulnerability indexes capable of mitigating damaged and dangerous areas to be left in that state with the advent of adverse weather phenomena, which indeed from the photos appeared damaged and flooded.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"27 ","pages":"Article 200544"},"PeriodicalIF":0.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144298090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
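The abstract combines three signals — a prior vulnerability index, maintenance issues detected from aerial photos, and weather context — into maintenance priorities. A minimal sketch of such a priority computation is shown below; all names, weights, and the saturating formula are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical priority computation: combine a prior vulnerability index
# with the number of detected maintenance issues and a weather flag.
# Weights and the saturating term are illustrative, not from the paper.

def priority_score(vulnerability_index: float,
                   detected_issues: int,
                   adverse_weather: bool) -> float:
    """Return a maintenance-priority score clipped to [0, 1]."""
    # Saturating term: a few detections already raise priority noticeably.
    issue_term = detected_issues / (detected_issues + 3)
    # Adverse weather amplifies the risk of leaving damage unattended.
    weather_factor = 1.5 if adverse_weather else 1.0
    score = vulnerability_index * (0.5 + 0.5 * issue_term) * weather_factor
    return min(score, 1.0)

# Rank three monitored structures by descending priority.
structures = {
    "structure_A": priority_score(0.8, 5, True),
    "structure_B": priority_score(0.4, 0, False),
    "structure_C": priority_score(0.6, 2, True),
}
ranking = sorted(structures, key=structures.get, reverse=True)
print(ranking)  # structure_A first: high vulnerability, issues, bad weather
```

The point of the sketch is only the ranking behaviour: structures with higher prior vulnerability, more detected issues, and adverse weather float to the top of the maintenance queue.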
Cited by: 0
Paranasal sinus analysis based on deep learning and machine learning techniques: A comprehensive survey 基于深度学习和机器学习技术的鼻窦分析:综合调查
Pub Date : 2025-09-01 Epub Date: 2025-07-18 DOI: 10.1016/j.iswa.2025.200559
Ali Alsalama, Saad Harous, Ashraf Elnagar
This survey provides an in-depth review of recent advancements in forensic anthropology through the application of imaging and modeling techniques for paranasal sinus structures. The focus is on exploring various studies that leverage the paranasal sinuses for the identification of individuals and demographic analysis, including age and gender estimation, especially when traditional methods such as fingerprint analysis, dental records, or DNA profiling are not feasible. Additionally, the survey aims to serve as a foundation for future work in similar analyses and segmentation tasks. These methods are especially useful in forensic contexts, such as those involving skeletonized remains where other anatomical structures are absent. The paper discusses several case studies, including the segmentation of paranasal sinuses as well as their classification for establishing biological profiles in diverse populations. The effectiveness of these 3D modeling approaches in predicting demographic characteristics such as sex, age, and ethnicity is also highlighted. Special emphasis is placed on the robustness and reliability of sinus morphology as both a forensic identifier and a tool for demographic inference.
Intelligent Systems with Applications, Volume 27, Article 200559 (2025).
Cited by: 0
Can large language models autonomously generate unique and profound insights in fundamental analysis? 大型语言模型能否在基础分析中自主地产生独特而深刻的见解?
IF 4.3 Pub Date : 2025-09-01 Epub Date: 2025-08-12 DOI: 10.1016/j.iswa.2025.200566
Tao Xu , Zhe Piao , Tadashi Mukai , Yuri Murayama , Kiyoshi Izumi
Fundamental analysis plays a critical role in equity investing, but its complexity has long limited the involvement of artificial intelligence (AI). Recent advances in large language models (LLMs), however, have opened new possibilities for AI to handle fundamental analysis. Despite this potential, leveraging LLMs to generate practically useful outputs remains a non-trivial challenge, and existing research is still in its early stages. This paper aims to enhance the performance of LLMs in fundamental analysis in a novel way, drawing inspiration from the practices of human analysts. We first propose a novel Autonomous Fundamental Analysis System (AutoFAS), which enables LLM agents to perform analyses on various topics of target companies. Next, we allow LLM agents to autonomously conduct research on specified companies with AutoFAS by exploring various topics they deem important, mimicking the experience accumulation of human analysts. Then, when presented with new research topics, the agents generate reports by referring to their accumulated analyses. Experiments show that, with AutoFAS, LLM agents can autonomously and logically explore various facets of target companies. The evaluation of their analysis on new research topics demonstrates that by drawing on accumulated analyses, they can naturally produce more unique and profound insights. This resembles the human process of generating novel ideas. Our work highlights a promising direction for applying LLMs in complex fundamental analysis, bridging the gap between human expertise and LLMs’ analysis.
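The abstract's "experience accumulation" idea — store each topic analysis and consult related past analyses when a new topic arrives — can be sketched as a simple memory with keyword-overlap retrieval. This is an assumed illustration of the loop, not the AutoFAS implementation; class and method names are hypothetical.

```python
# Hypothetical memory for an analyst agent: store per-topic analyses and,
# given a new research topic, retrieve past analyses with overlapping
# keywords to use as context. Retrieval by word overlap is illustrative.

class AnalysisMemory:
    def __init__(self):
        self.analyses = {}  # topic -> analysis text

    def store(self, topic: str, analysis: str) -> None:
        self.analyses[topic] = analysis

    def related(self, new_topic: str) -> list:
        """Return stored topics sharing at least one word with new_topic."""
        words = set(new_topic.lower().split())
        return [t for t in self.analyses
                if words & set(t.lower().split())]

memory = AnalysisMemory()
memory.store("supply chain risk", "Supplier concentration is high ...")
memory.store("revenue growth drivers", "Growth relies on ...")

# A new topic draws on the accumulated 'supply chain risk' analysis.
context = memory.related("risk factors in the supply chain")
print(context)  # ['supply chain risk']
```

In the paper's setting the stored analyses would be LLM-generated reports and retrieval would presumably be far richer, but the structure — accumulate, then reference when reporting on new topics — is the same.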
Intelligent Systems with Applications, Volume 27, Article 200566 (2025).
Cited by: 0
Multi-modal expert system for automated durian ripeness classification using deep learning 基于深度学习的榴莲成熟度自动分类多模式专家系统
IF 4.3 Pub Date : 2025-09-01 Epub Date: 2025-07-31 DOI: 10.1016/j.iswa.2025.200563
Santi Sukkasem, Watchareewan Jitsakul, Phayung Meesad
Accurate classification of durian ripeness is essential for quality control and minimizing post-harvest losses. Manual inspection remains subjective and inconsistent, prompting the need for automated methods. We present a multi-modal approach that integrates Convolutional Neural Networks (CNNs) for image-based classification and Recurrent Neural Networks (RNNs) for automatic textual descriptions. Trained on 16,000 annotated images across four ripeness stages, the model achieved high classification accuracy (MobileNetV2: 95.50%) and superior captioning performance (ResNet101 + Bi-GRU: BLEU 0.9974, METEOR 0.9949, ROUGE 0.9164). While weighted summation fusion demonstrated superior performance, concatenation was ultimately chosen for its simplicity and real-world deployment feasibility. Statistical validation using one-way ANOVA (p<0.05) confirmed the significance of the findings. These results highlight the potential of the proposed multi-modal approach as a practical and interpretable framework for automated durian ripeness assessment.
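The abstract compares two ways of fusing the CNN image branch with the RNN text branch: weighted summation and the concatenation ultimately chosen for deployment. A minimal sketch of the two operations is below; the feature dimensions and the weight alpha are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Illustrative fusion of an image embedding (e.g. pooled MobileNetV2
# features) and a text embedding (e.g. a Bi-GRU hidden state).
# Dimensions (128) and alpha are assumed, not taken from the paper.
rng = np.random.default_rng(0)
image_feat = rng.normal(size=128)  # CNN branch
text_feat = rng.normal(size=128)   # RNN branch

# Concatenation: keeps both views intact; doubles the fused dimension.
fused_concat = np.concatenate([image_feat, text_feat])

# Weighted summation: blends the views; requires matching dimensions.
alpha = 0.6  # assumed weight on the image branch
fused_sum = alpha * image_feat + (1 - alpha) * text_feat

print(fused_concat.shape, fused_sum.shape)  # (256,) (128,)
```

Concatenation's appeal for deployment is visible even at this level: it involves no tunable weight and loses no information from either branch, at the cost of a larger fused vector for the downstream classifier.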
Intelligent Systems with Applications, Volume 27, Article 200563 (2025).
Cited by: 0