
Machine learning with applications: latest publications

Enhanced prediction of karst spring discharge using a hybrid LSTM-XGBoost model optimized with grid search
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-09-23 DOI: 10.1016/j.mlwa.2025.100740
Xiaomei Liu
Globally, intensifying droughts have taxed water supplies, particularly in karst areas, where complex hydrology makes spring discharge difficult to predict. Given the significance of karst aquifers to freshwater supply, data-driven models represent a viable alternative. To enhance the accuracy of spring discharge prediction, this study introduces a new LSTM-XGBoost hybrid model for karst spring discharge prediction in Chaharmahal Bakhtiari Province, Iran. The hybrid model exploits the strength of LSTM in capturing temporal dependencies and of XGBoost in modeling nonlinear relationships, with grid search used for hyperparameter tuning. The performance of the LSTM-XGBoost model is compared with that of optimized ML baseline models. The study uses a dataset of 3,266 daily records (day, month, and spring discharge) from the Dehghara Springs. The results show the superiority of the proposed LSTM-XGBoost hybrid model, with the highest test R² = 0.8798 and Explained Variance (EV) = 0.8857 and the lowest error metrics (MAE = 0.3355, RMSE = 0.5795, MAPE = 21.84%). The hybrid model outperforms both traditional baselines and deep learning (DL) models. Feature importance analysis reveals that seasonal factors, particularly the month (importance score 0.919), have a substantially greater impact on spring discharge than daily variations. The proposed LSTM-XGBoost hybrid model provides a reliable and accurate tool for karst spring discharge prediction, offering valuable insights for water resource management in regions affected by climate change and increasing water demand.
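The grid-search step the abstract relies on can be sketched in a few lines. This is a generic illustration with made-up hyperparameter names and a toy scoring function, not the paper's actual search space; in practice the scoring function would train the LSTM-XGBoost hybrid and return a validation metric such as R².

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every hyperparameter combination and keep
    the best one (higher score = better)."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in for cross-validated model skill, peaked at
# lr = 0.1, depth = 4 (values are illustrative only).
def toy_score(p):
    return -((p["learning_rate"] - 0.1) ** 2) - ((p["max_depth"] - 4) ** 2)

grid = {"learning_rate": [0.01, 0.1, 0.3], "max_depth": [2, 4, 6]}
best, _ = grid_search(grid, toy_score)
print(best)  # {'learning_rate': 0.1, 'max_depth': 4}
```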
Cited by: 0
Uncertainty quantification by large language models
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-11-01 DOI: 10.1016/j.mlwa.2025.100773
Dorianis M. Perez, Bryan E. Kaiser, Ismael Boureima
As reasoning capabilities of large language models (LLMs) continue to advance, they are being integrated into increasingly complex scientific workflows, with the goal of developing agents capable of generating evidence-based explanations and testing hypotheses and theories. However, despite this rapid progress, most existing evaluations of LLM reasoning focus on accuracy or consistency rather than on uncertainty quantification (UQ), which is essential for evidence-based reasoning because it quantifies the trustworthiness of evidence-based explanations. Current approaches to LLM uncertainty remain fragmented, often lacking standardized benchmarks that test models under varying task complexities. To address this gap, we introduce the first benchmark suite designed to evaluate UQ by LLM-based agents and tools. The benchmark targets one of the most fundamental UQ problems: estimating whether one quantity is probably larger than another under uncertainty. It includes two progressively complex tasks: a simple inequality test, where models judge whether one of two sets of samples is "larger," "smaller," or "uncertain" with 95% confidence, and a complex inequality test, where models assess interventional probabilities requiring multiple intermediate calculations. We found that reasoning models are generally capable of UQ (scores ≳70%) in the simple inequality case but do not score appreciably better than random guessing (scores ∼33%) for the complex inequality case if the UQ method and intermediate steps are not provided in the prompt. Our implementation is available at https://github.com/bekaiser-LANL/tether.
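The simple inequality task described above is easy to make concrete. The sketch below decides "larger" / "smaller" / "uncertain" from a 95% normal-approximation interval on the difference of sample means; this is one plausible reference procedure, not necessarily the one the benchmark itself uses.

```python
import statistics

def inequality_test(a, b, z=1.96):
    """Return 'larger' if sample set a exceeds b at ~95% confidence,
    'smaller' for the reverse, else 'uncertain'. Uses a normal
    approximation to the difference of means."""
    diff = statistics.fmean(a) - statistics.fmean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    lo, hi = diff - z * se, diff + z * se
    if lo > 0:
        return "larger"
    if hi < 0:
        return "smaller"
    return "uncertain"

a = [10.0, 10.1, 9.9] * 5   # tight sample around 10
b = [1.0, 1.1, 0.9] * 5     # tight sample around 1
print(inequality_test(a, b))  # larger
```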
Cited by: 0
Longitudinal abuse and sentiment analysis of Hollywood movie dialogues using language models
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-10-14 DOI: 10.1016/j.mlwa.2025.100749
Rohitash Chandra, Guoxiang Ren
Over the past decades, there has been an increase in the prevalence of abusive and violent content in Hollywood movies. In this study, we use language models to conduct a longitudinal abuse and sentiment analysis of Hollywood Oscar and blockbuster movie dialogues from 1950 to 2024. We analyse the subtitles of over a thousand movies, categorised into four genres, and employ fine-tuned language models to examine the trends and shifts in emotional and abusive content over the past seven decades. Findings reveal significant temporal changes in movie dialogues, reflecting broader social and cultural influences. Overall, the emotional tendencies in the films are diverse, and the detection of abusive content also exhibits significant fluctuations. The results show a gradual rise in abusive content in recent decades, reflecting changes in social norms and regulatory policy. Genres such as thrillers still present a higher frequency of abusive content, underscoring the ongoing narrative role of violence and conflict. At the same time, underlying positive emotions such as humour and optimism remain prevalent in most of the movies. Furthermore, the gradual increase of abusive content in movie dialogues has been significant over the last two decades, where Oscar-nominated movies overtook the top ten blockbusters.
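The longitudinal aggregation behind such trend statements reduces to grouping per-movie scores by release decade. A minimal sketch with entirely made-up abusive-dialogue fractions (not the paper's data):

```python
from collections import defaultdict

# Hypothetical per-movie records: (release year, fraction of dialogue
# lines flagged as abusive). Values are illustrative only.
movies = [(1955, 0.02), (1958, 0.03), (1994, 0.08), (1999, 0.07),
          (2015, 0.12), (2019, 0.15), (2021, 0.14)]

def abuse_by_decade(records):
    """Average the flagged fraction per release decade, the kind of
    longitudinal aggregation behind the reported trends."""
    buckets = defaultdict(list)
    for year, frac in records:
        buckets[year // 10 * 10].append(frac)
    return {decade: sum(v) / len(v) for decade, v in sorted(buckets.items())}

print(abuse_by_decade(movies))
```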
Cited by: 0
Model-informed oracle training for enhancing active learning without external knowledge
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-10-27 DOI: 10.1016/j.mlwa.2025.100775
Yujin Cha
In real-world applications of active learning frameworks, human oracles are often imperfect, and label noise is introduced into the learning process. This issue can be mitigated by further training the oracle using previous knowledge acquired by the model. However, it remains unclear whether model-informed oracle training can significantly improve performance. This study investigates whether recursive feedback between the model and the oracle can induce a knowledge augmentation effect, defined as a statistically significant improvement in model performance after receiving feedback from a self-data-trained oracle. To this end, we implemented a bidirectional active learning framework in which the model assists oracle learning by selectively transferring prior knowledge. In a closed-loop environment without external data, the model performs informative sample selection from an unlabeled pool, querying the oracle for labels, and retraining on the updated dataset. Simultaneously, the oracle is updated by learning from samples from the model’s training data that exhibit high uncertainty from the oracle’s perspective. This framework was empirically validated through a behavioral experiment involving 252 clinicians performing a medical image interpretation task. The results showed that model-informed oracle training enhanced both oracle accuracy and model performance. Moreover, when oracle learning was constrained by a fixed learning budget, a sampling strategy jointly balancing uncertainty and representativeness yielded the strongest effect. These findings provide compelling empirical evidence of the knowledge augmentation effect arising from human learning within a closed-loop active learning framework.
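The model-side "informative sample selection" step can be illustrated with the common entropy-based acquisition rule; the paper's exact criterion (and its oracle-side counterpart) may differ.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_informative(pool, k):
    """Pick the k unlabelled samples whose predicted distributions
    are most uncertain (highest entropy). `pool` maps a sample id to
    the model's predicted class probabilities."""
    ranked = sorted(pool, key=lambda sid: entropy(pool[sid]), reverse=True)
    return ranked[:k]

pool = {
    "s1": [0.98, 0.02],  # confident prediction -> low entropy
    "s2": [0.55, 0.45],  # ambiguous prediction -> high entropy
    "s3": [0.80, 0.20],
}
print(select_informative(pool, 2))  # ['s2', 's3']
```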
Cited by: 0
Structure-aware stable diffusion for traditional architectural decoration design
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-09-12 DOI: 10.1016/j.mlwa.2025.100735
Jianhong Yang , Guoyong Wang
The intelligent generation of traditional architectural styles faces significant challenges in structural integrity and style consistency. While existing methods can generate numerous realistic images, they lack a deep understanding of structural elements in traditional architectural decorative design. This paper proposes a Structure-aware Stable Diffusion (SSD) model, which enhances the model's comprehension of architectural features through three key innovations. First, we design a structure-aware feature injection module that adaptively fuses extracted architectural structural information with original features during the U-net upsampling phase, enhancing the model's understanding of geometric structures. Second, we introduce a dual-path text enhancement strategy that combines structural descriptions with original descriptions to provide richer textual guidance signals for the generation process. Finally, we design a progressive injection strategy that dynamically controls the injection intensity of structural information through cosine scheduling, ultimately achieving effective internalization of structural knowledge. Experimental results show that compared to existing methods, our model effectively improves both the diversity of generated traditional architectural decorations and the rationality of their structures, thus providing an effective new technical approach for traditional architectural decorative design.
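The progressive injection strategy boils down to a cosine schedule over training steps. A minimal sketch, assuming the intensity decays smoothly from full to zero; the paper specifies cosine scheduling but not the endpoints or direction, so those are assumptions here.

```python
import math

def cosine_injection_weight(t, T, w_max=1.0, w_min=0.0):
    """Cosine schedule for the structural-feature injection strength:
    w_max at step 0, decaying smoothly to w_min at step T."""
    return w_min + 0.5 * (w_max - w_min) * (1 + math.cos(math.pi * t / T))

weights = [round(cosine_injection_weight(t, 10), 3) for t in range(11)]
print(weights)  # 1.0 at t=0, 0.5 at t=5, 0.0 at t=10
```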
Cited by: 0
LapSDNMF: Label propagation assisted soft-constrained deep non-negative matrix factorisation for semi-supervised multi-view clustering
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-11-03 DOI: 10.1016/j.mlwa.2025.100783
Sohan Dinusha Liyana Gunawardena, Khanh Luong, Thirunavukarasu Balasubramaniam, Richi Nayak
Semi-supervised methods based on non-negative matrix factorisation have emerged as a popular approach for clustering. However, the pressing challenge of capturing complex non-linear relationships within multi-view data is seldom considered in the semi-supervised context.
This study introduces a fundamentally novel framework: Label Propagation Assisted Soft-constrained Deep Non-negative Matrix Factorisation for Semi-supervised Multi-view Clustering (LapSDNMF).
LapSDNMF innovatively integrates deep hierarchical modelling with label propagation and a soft constraint to jointly exploit non-linear representation learning and extract accurate latent features from limited labelled data. By embedding a predictive membership matrix as a soft constraint, it enables similarly labelled samples to be projected into shared regions, better reflecting real-world data structures. The incorporation of graph-based regularisation within the deep architecture facilitates effective label propagation while preserving the manifold structure at each layer. LapSDNMF unifies deep learning and graph-theoretic techniques within a coherent optimisation framework. We also develop a novel, efficient algorithm based on multiplicative update rules to solve the resulting optimisation problem.
LapSDNMF significantly outperforms state-of-the-art multi-view clustering methods across five diverse real-world datasets. Specifically, it achieves improvements in F-score of 10.2%, 7.2%, 8.8%, 1.4%, and 6.1% on the Yale, Reuters-MinMax, Caltech7, 3-Sources, and Caltech20 datasets, respectively, compared with the best-performing baseline method.
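For readers unfamiliar with multiplicative update rules, the classic Lee-Seung updates for plain NMF (V ≈ WH under the Frobenius loss) show the mechanism that LapSDNMF's far more elaborate rules build on; the paper's deep, graph-regularised, label-constrained updates are not reproduced here.

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf_step(V, W, H, eps=1e-9):
    """One round of multiplicative updates for V ≈ W H:
    H <- H * (WᵀV)/(WᵀWH), then W <- W * (VHᵀ)/(WHHᵀ).
    Non-negativity is preserved because every factor is non-negative."""
    Wt = transpose(W)
    WtV, WtWH = matmul(Wt, V), matmul(Wt, matmul(W, H))
    H = [[H[a][j] * WtV[a][j] / (WtWH[a][j] + eps)
          for j in range(len(H[0]))] for a in range(len(H))]
    Ht = transpose(H)
    VHt, WHHt = matmul(V, Ht), matmul(W, matmul(H, Ht))
    W = [[W[i][a] * VHt[i][a] / (WHHt[i][a] + eps)
          for a in range(len(W[0]))] for i in range(len(W))]
    return W, H

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

random.seed(0)
V = [[random.random() for _ in range(4)] for _ in range(5)]
W = [[random.random() for _ in range(2)] for _ in range(5)]
H = [[random.random() for _ in range(4)] for _ in range(2)]
errs = [frob_err(V, W, H)]
for _ in range(50):
    W, H = nmf_step(V, W, H)
    errs.append(frob_err(V, W, H))
print(f"reconstruction error: {errs[0]:.4f} -> {errs[-1]:.4f}")
```

The updates monotonically decrease the reconstruction error, which is what makes this family of algorithms easy to optimise.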
Cited by: 0
Forecasting stock market anomalies in emerging markets: An OPTUNA-optimized isolation forest and K-means approach
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-11-04 DOI: 10.1016/j.mlwa.2025.100770
Seyed Pendar Toufighi , Amir Mohammad Khani , Arman Rezasoltani , Iman Ghasemian Sahebi , Jan Vang
Forecasting financial anomalies in emerging markets is critical for informed investment and risk management. This study proposes a novel machine learning framework that integrates an OPTUNA-optimized Isolation Forest algorithm with K-Means clustering to detect and classify stock market anomalies in Iran Khodro, one of Iran’s largest automotive firms. Leveraging daily stock data from 2001 to 2022, the model enhances anomaly detection accuracy by tuning hyperparameters through Bayesian optimization, significantly reducing false positives compared to standard implementations. The K-Means clustering algorithm further segments the detected anomalies into meaningful behavioral categories based on price and trading volume dynamics. Results reveal distinct periods of market disruption aligned with major political and economic events, including sanctions, currency volatility, and the COVID-19 pandemic. This hybrid approach demonstrates a robust, efficient, and interpretable method for forecasting abnormal market behavior in high-volatility, low-transparency environments. The framework holds promise for broader application in forecasting stock anomalies across other emerging financial markets.
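The K-Means stage that segments detected anomalies into behavioural categories is ordinary Lloyd's algorithm. A sketch on synthetic (price change, volume) pairs; the feature choice and the deterministic initialisation are illustrative, not the paper's configuration.

```python
import math

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm. Centres are initialised with evenly
    spaced points for determinism (k-means++ would be typical)."""
    centers = points[::max(1, len(points) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Two synthetic anomaly regimes: crashes on heavy volume vs. upward spikes.
crashes = [(-0.08, 3.0), (-0.07, 2.8), (-0.09, 3.2)]
spikes = [(0.09, 2.5), (0.08, 2.7), (0.10, 2.6)]
centers, clusters = kmeans(crashes + spikes, k=2)
print([len(c) for c in clusters])  # [3, 3]
```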
引用次数: 0
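The two-stage pipeline this abstract describes — an Isolation Forest whose hyperparameters are tuned by a search procedure, followed by K-Means over the flagged points — can be sketched as follows. This is not the paper's code: the data are synthetic, a simple grid search stands in for OPTUNA's Bayesian optimization, and the selection heuristic (the gap between normal and flagged anomaly scores) is an illustrative choice.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic daily "price/volume" features: mostly normal days plus a few shocks.
normal = rng.normal(0, 1, size=(500, 2))
shocks = rng.normal(6, 1, size=(10, 2))
X = np.vstack([normal, shocks])

# Stand-in for OPTUNA's Bayesian search: try a small grid of hyperparameters,
# scoring each candidate by how well it separates normal from flagged points.
best_params, best_gap = None, -np.inf
for contamination in (0.01, 0.02, 0.05):
    for n_estimators in (100, 200):
        iso = IsolationForest(n_estimators=n_estimators,
                              contamination=contamination,
                              random_state=0).fit(X)
        scores = iso.score_samples(X)           # higher = more normal
        flagged = iso.predict(X) == -1          # -1 marks anomalies
        gap = scores[~flagged].mean() - scores[flagged].mean()
        if gap > best_gap:
            best_gap = gap
            best_params = dict(n_estimators=n_estimators,
                               contamination=contamination)

iso = IsolationForest(random_state=0, **best_params).fit(X)
anomalies = X[iso.predict(X) == -1]

# Second stage: K-Means groups the flagged days into behavioural clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(anomalies)
```

In the paper's setting the two clusters would correspond to distinct anomaly regimes (e.g. price-driven vs. volume-driven disruptions); here they simply partition the synthetic shocks.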
Evolutionary AdaBoost ensemble: A machine learning framework for depression detection
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-10-06 DOI: 10.1016/j.mlwa.2025.100748
Ruhollah Sayeri , Behnam Barzegar , Yaser Bozorgi rad , Nasser Mikaeilvand , Mohammad Hassan Tayarani Najaran
Depression is a prevalent and debilitating mental health disorder that often goes undiagnosed due to the lack of accessible, objective screening tools. This paper introduces EVAdaBoost, an Evolutionary AdaBoost ensemble framework designed for automated depression detection from voice signals. The method leverages a diverse set of signal processing techniques—including Fourier, Wavelet, Walsh, Hilbert–Huang, and OpenSmile, as well as time–frequency transformations for convolutional neural networks (CNNs). Each feature set is used to train a specialised AdaBoost ensemble, with Broad Learning Systems (BLS) serving as efficient weak learners. A key innovation of EVAdaBoost is its use of a quantum-inspired evolutionary algorithm to optimise the feature subsets assigned to each AdaBoost model. Instead of using all extracted features, which may include noise, redundancy, and irrelevant data, EVAdaBoost evolves to select diverse and high-performing subsets of features for each AdaBoost base learner, automatically discarding non-informative features. This evolutionary selection enhances both classification accuracy and computational efficiency. Additionally, an evolutionary pruning algorithm is employed to find the optimal subset of AdaBoost algorithms that offer the best performance at reduced computational cost. Experiments across nine feature types and multiple benchmark classifiers show that EVAdaBoost consistently outperforms state-of-the-art methods in accuracy, sensitivity (TPR), specificity (TNR), and precision (PPV). The results underscore the potential of hybrid evolutionary ensemble learning for non-invasive, speech-based mental health screening.
{"title":"Evolutionary AdaBoost ensemble: A machine learning framework for depression detection","authors":"Ruhollah Sayeri ,&nbsp;Behnam Barzegar ,&nbsp;Yaser Bozorgi rad ,&nbsp;Nasser Mikaeilvand ,&nbsp;Mohammad Hassan Tayarani Najaran","doi":"10.1016/j.mlwa.2025.100748","DOIUrl":"10.1016/j.mlwa.2025.100748","url":null,"abstract":"<div><div>Depression is a prevalent and debilitating mental health disorder that often goes undiagnosed due to the lack of accessible, objective screening tools. This paper introduces EVAdaBoost, an Evolutionary AdaBoost ensemble framework designed for automated depression detection from voice signals. The method leverages a diverse set of signal processing techniques—including Fourier, Wavelet, Walsh, Hilbert–Huang, and OpenSmile, as well as time–frequency transformations for convolutional neural networks (CNNs). Each feature set is used to train a specialised AdaBoost ensemble, with Broad Learning Systems (BLS) serving as efficient weak learners. A key innovation of EVAdaBoost is its use of a quantum-inspired evolutionary algorithm to optimise the feature subsets assigned to each AdaBoost model. Instead of using all extracted features, which may include noise, redundancy, and irrelevant data, EVAdaBoost evolves to select diverse and high-performing subsets of features for each AdaBoost base learner, automatically discarding non-informative features. This evolutionary selection enhances both classification accuracy and computational efficiency. Additionally, an evolutionary pruning algorithm is employed to find the optimal subset of AdaBoost algorithms that offer the best performance at reduced computational cost. Experiments across nine feature types and multiple benchmark classifiers show that EVAdaBoost consistently outperforms state-of-the-art methods in accuracy, sensitivity (TPR), specificity (TNR), and precision (PPV). 
The results underscore the potential of hybrid evolutionary ensemble learning for non-invasive, speech-based mental health screening.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"22 ","pages":"Article 100748"},"PeriodicalIF":4.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145268140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
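A minimal sketch of the paper's central idea — evolving the feature subset fed to an AdaBoost learner — using scikit-learn. Everything here is an assumption for illustration: `make_classification` stands in for the voice-derived features, a plain (1+1) hill-climb stands in for the quantum-inspired evolutionary algorithm, and the default decision-stump learner stands in for the BLS weak learners.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for extracted voice features: 20 columns, only 5 informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated accuracy of AdaBoost on the selected feature columns."""
    if mask.sum() == 0:
        return 0.0
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# (1+1) evolutionary loop: flip a couple of feature bits, keep improvements.
mask = rng.random(20) < 0.5
if mask.sum() == 0:
    mask[0] = True
best = fitness(mask)
for _ in range(10):
    child = mask.copy()
    flip = rng.integers(0, 20, size=2)
    child[flip] = ~child[flip]
    score = fitness(child)
    if score >= best and child.sum() > 0:
        mask, best = child, score
```

The evolved `mask` plays the role of the optimised feature subset assigned to one AdaBoost base model; the paper runs such a selection per ensemble member and over nine feature types, with a second evolutionary pass pruning the ensemble itself.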
Model-based multispectral texture inpainting and denoising
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-11-13 DOI: 10.1016/j.mlwa.2025.100772
Michal Haindl, Vojtěch Havlíček, Pavel Žid
Visual texture inpainting and denoising aim not necessarily to recover the exact pixel-wise correspondence of the original, often unobservable, texture, but rather to reconstruct a texture that is visually indistinguishable from the original. This objective differs from standard image restoration goals and therefore may require fundamentally different restoration techniques. This work presents two multispectral texture restoration methods capable of simultaneously reducing additive Gaussian or Poisson noise and inpainting missing textural regions without visible seams or repetitions. Both methods rely on descriptive three-dimensional statistical spatial models. The first method employs a complex three-dimensional spatial Gaussian mixture model, particularly suited for regular or near-regular textures. The second method uses a causal simultaneous autoregressive model, which is more appropriate for random textures or scenarios with limited training data. Importantly, both models are inherently multispectral, enabling the restoration of even hyperspectral textures. As such, they avoid the spectral quality compromises typically encountered in many alternative approaches. The Gaussian and Poisson noise reduction achieved by the proposed method is compared with four alternative approaches, showing an average improvement of 1%–16% across the spectral range while avoiding the blurring artifacts observed in some of the other methods.
{"title":"Model-based multispectral texture inpainting and denoising","authors":"Michal Haindl,&nbsp;Vojtěch Havlíček,&nbsp;Pavel Žid","doi":"10.1016/j.mlwa.2025.100772","DOIUrl":"10.1016/j.mlwa.2025.100772","url":null,"abstract":"<div><div>Visual texture inpainting and denoising aim not necessarily to recover the exact pixel-wise correspondence of the original, often unobservable, texture, but rather to reconstruct a texture that is visually indistinguishable from the original. This objective differs from standard image restoration goals and therefore may require fundamentally different restoration techniques. This work presents two multispectral texture restoration methods capable of simultaneously reducing additive Gaussian or Poisson noise and inpainting missing textural regions without visible seams or repetitions. Both methods rely on descriptive three-dimensional statistical spatial models. The first method employs a complex three-dimensional spatial Gaussian mixture model, particularly suited for regular or near-regular textures. The second method uses a causal simultaneous autoregressive model, which is more appropriate for random textures or scenarios with limited training data. Importantly, both models are inherently multispectral, enabling the restoration of even hyperspectral textures. As such, they avoid the spectral quality compromises typically encountered in many alternative approaches. 
The Gaussian and Poisson noise reduction achieved by the proposed method is compared with four alternative approaches, showing an average improvement of 1%–16% across the spectral range while avoiding the blurring artifacts observed in some of the other methods.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"22 ","pages":"Article 100772"},"PeriodicalIF":4.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
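The causal autoregressive modelling mentioned for random textures can be illustrated on a synthetic 2-D field: fit each pixel as a linear function of its three causal neighbours (above, left, above-left), then synthesise a missing block from the fitted coefficients. The coefficients (0.5, 0.4, −0.2) and the 64 × 64 single-channel field are invented for the demo; the paper's models are multispectral and more elaborate, so this is only a sketch of the modelling idea.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthesize a texture with a known causal autoregressive structure.
tex = rng.normal(size=(64, 64))
for i in range(1, 64):
    for j in range(1, 64):
        tex[i, j] += 0.5 * tex[i - 1, j] + 0.4 * tex[i, j - 1] - 0.2 * tex[i - 1, j - 1]

# Fit the causal AR model by least squares: pixel ~ its three causal neighbours.
rows, cols = np.meshgrid(np.arange(1, 64), np.arange(1, 64), indexing="ij")
neigh = np.stack([tex[rows - 1, cols],
                  tex[rows, cols - 1],
                  tex[rows - 1, cols - 1]], axis=-1)
A = neigh.reshape(-1, 3)
b = tex[1:, 1:].ravel()
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Inpaint a missing block by causal synthesis from the fitted coefficients:
# each missing pixel is predicted from already-available neighbours.
filled = tex.copy()
filled[20:30, 20:30] = 0.0
for i in range(20, 30):
    for j in range(20, 30):
        filled[i, j] = coef @ np.array([filled[i - 1, j],
                                        filled[i, j - 1],
                                        filled[i - 1, j - 1]])
```

Because the regression conditions only on causally preceding pixels, the same fitted model supports both prediction-based denoising (replace a pixel by its prediction) and seamless fill-in of unobserved regions.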
Evaluating the effectiveness of LLMs for explainable deep reinforcement learning
IF 4.9 Pub Date : 2025-12-01 Epub Date: 2025-11-14 DOI: 10.1016/j.mlwa.2025.100795
Ayoub Belouadah, Marcelo Luis Ruiz-Rodríguez, Sylvain Kubler, Yves Le Traon
Understanding the decision-making of reinforcement learning (RL) agents is essential for real-world deployment. Existing eXplainable RL (XRL) techniques, such as feature attribution and policy visualization, provide insight but remain inaccessible to non-experts. Large Language Models (LLMs) offer a natural-language alternative, yet often lack logical consistency and alignment with agent goals. This study benchmarks three explanation generation methods: Chain-of-Thought (CoT) prompting as the standard baseline used in prior work, Monte Carlo Tree Search (MCTS) augmentation, and supervised fine-tuning (SFT) across various models. Evaluations using Soundness and Fidelity show that CoT frequently produces reasoning errors, whereas MCTS improves quality for larger models (avg. +23% Soundness, +17% Fidelity), while SFT yields greater and more consistent gains for smaller ones (+58% Soundness, +52% Fidelity), underscoring the need to align methods with model capacity. An LLM-as-a-Judge framework further validates these findings, showing strong agreement with human assessments (weighted Cohen’s κ=0.77, Spearman ρ=0.88), supporting scalable and reliable assessment of textual explanations.
{"title":"Evaluating the effectiveness of LLMs for explainable deep reinforcement learning","authors":"Ayoub Belouadah,&nbsp;Marcelo Luis Ruiz-Rodríguez,&nbsp;Sylvain Kubler,&nbsp;Yves Le Traon","doi":"10.1016/j.mlwa.2025.100795","DOIUrl":"10.1016/j.mlwa.2025.100795","url":null,"abstract":"<div><div>Understanding the decision-making of reinforcement learning (RL) agents is essential for real-world deployment. Existing eXplainable RL (XRL) techniques, such as feature attribution and policy visualization, provide insight but remain inaccessible to non-experts. Large Language Models (LLMs) offer a natural-language alternative, yet often lack logical consistency and alignment with agent goals. This study benchmarks three explanation generation methods: Chain-of-Thought (CoT) prompting as the standard baseline used in prior work, Monte Carlo Tree Search (MCTS) augmentation, and supervised fine-tuning (SFT) across various models. Evaluations using Soundness and Fidelity show that CoT frequently produces reasoning errors, whereas MCTS improves quality for larger models (avg. <span><math><mrow><mo>+</mo><mn>23</mn><mtext>%</mtext></mrow></math></span> Soundness, <span><math><mrow><mo>+</mo><mn>17</mn><mtext>%</mtext></mrow></math></span> Fidelity), while SFT yields greater and more consistent gains for smaller ones (<span><math><mrow><mo>+</mo><mn>58</mn><mtext>%</mtext></mrow></math></span> Soundness, <span><math><mrow><mo>+</mo><mn>52</mn><mtext>%</mtext></mrow></math></span> Fidelity), underscoring the need to align methods with model capacity. 
An LLM-as-a-Judge framework further validates these findings, showing strong agreement with human assessments (weighted Cohen’s <span><math><mrow><mi>κ</mi><mo>=</mo><mn>0</mn><mo>.</mo><mn>77</mn></mrow></math></span>, Spearman <span><math><mrow><mi>ρ</mi><mo>=</mo><mn>0</mn><mo>.</mo><mn>88</mn></mrow></math></span>), supporting scalable and reliable assessment of textual explanations.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"22 ","pages":"Article 100795"},"PeriodicalIF":4.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
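The agreement statistics quoted above (weighted Cohen's κ, Spearman ρ) are standard measures of rater agreement; the sketch below computes both on a small invented set of 1–5 quality ratings from a human annotator and an LLM judge, using scikit-learn and SciPy.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 quality ratings of the same explanations by two raters.
human = np.array([5, 4, 4, 2, 3, 5, 1, 2, 4, 3, 5, 2])
llm   = np.array([5, 4, 3, 2, 3, 4, 1, 2, 4, 3, 5, 3])

# Quadratically weighted kappa penalises large disagreements more than
# off-by-one ones, which suits ordinal rating scales.
kappa = cohen_kappa_score(human, llm, weights="quadratic")
rho, _ = spearmanr(human, llm)
```

Values near the paper's κ = 0.77 and ρ = 0.88 indicate that the LLM judge's ordering and calibration closely track the human assessments, which is what justifies using it for scalable evaluation.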