
Machine learning with applications: latest publications

Neural-Enhanced Two-Step Modified Newton–Lavrentiev Method: A structure-preserving deep learning approach for ill-posed inverse problems
IF 4.9 | Pub Date: 2025-10-17 | DOI: 10.1016/j.mlwa.2025.100761
Suresan Pareth
Ill-posed inverse problems frequently arise in scientific and medical imaging, where recovering stable and high-fidelity solutions from incomplete or noisy data remains a central challenge. Motivated by this need, we propose a novel hybrid solver framework, the Neural-Enhanced Two-Step Modified Newton–Lavrentiev Method (NE-TSMNLM), which integrates deep neural corrections into the classical Two-Step Modified Newton–Lavrentiev Method for solving nonlinear inverse problems. Unlike black-box neural operators, our design preserves the convergence structure of the classical iteration while embedding neural modules for adaptive correction, regularization, and convergence prediction.
We establish theoretical guarantees on stability and convergence: under mild assumptions, NE-TSMNLM inherits the convergence of the classical TSMNLM and improves the effective convergence rate to q̃ = q^{1+β} with β > 0, which theoretically establishes the acceleration effect of the neural corrections.
We validate the proposed framework on synthetic and medical inverse problems, including low-dose Computed Tomography (CT) reconstruction, where NE-TSMNLM achieves a 50% radiation dose reduction while maintaining structural fidelity. Initial implementations show promising results with slight degradation (e.g., 17.3% error increase) due to untrained modules and data scarcity. We identify clear pathways for improvement using Transformer-based modules, residual-aware training, and scalable synthetic data.
These results position NE-TSMNLM as a structure-preserving neural framework with rigorous mathematical guarantees, bridging classical regularization theory and deep learning for stable, efficient, and interpretable scientific machine learning.
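As a rough numerical illustration of the accelerated rate q̃ = q^{1+β} stated above (an illustration only, not the paper's analysis), the sketch below assumes that q ∈ (0,1) acts as a per-iteration linear contraction factor and compares error decay with and without the acceleration. The values of q and β are arbitrary and not taken from the paper.

```python
# Minimal illustration (assumption: q in (0,1) is a linear contraction factor,
# i.e. e_{k+1} = q * e_k per iteration). q and beta are arbitrary example values.
q, beta = 0.7, 0.3
q_tilde = q ** (1 + beta)          # accelerated factor; q_tilde < q when 0 < q < 1

e_classic, e_neural = 1.0, 1.0     # initial errors
for k in range(1, 11):
    e_classic *= q
    e_neural *= q_tilde
    print(f"iter {k:2d}: classical error = {e_classic:.3e}, "
          f"accelerated error = {e_neural:.3e}")
```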
Citations: 0
Hybrid stacked sparse autoencoder for robust feature extraction and classification in sparse data across multiple domains
IF 4.9 | Pub Date: 2025-10-17 | DOI: 10.1016/j.mlwa.2025.100764
Abdussamad, Said Jadid Abdulkadir, Hanita Daud, Rajalingam Sokkalingam, Iliyas Karim Khan
Tabular data is the most widely used data format in applied mathematics, cybersecurity, finance, and healthcare, and it presents distinct challenges due to its intrinsic sparsity, with the majority of values being zero. These factors inhibit effective feature selection and reduce prediction accuracy. The Stacked Sparse Autoencoder (SSAE) model has shown great promise for feature selection in prediction tasks. However, SSAE struggles to extract meaningful features for sparse data prediction and requires an additional machine learning classifier on the latent space for accurate predictions, thereby increasing computational complexity. This paper presents a Hybrid Stacked Sparse Autoencoder (HSSAE) algorithm with a custom hybrid loss function, α·L1 + (1 − α)·L2 combined with binary cross-entropy, to address these limitations. The proposed algorithm offers a unified framework that seamlessly integrates feature selection and prediction on sparse data, improving feature extraction and reducing computational complexity. Three datasets, with sparsity levels of 43%, 53.32%, and 74.41%, were used in experiments to assess the performance of the HSSAE algorithm. Evaluated against several criteria, HSSAE performed considerably better than the conventional SSAE latent space paired with machine learning classifiers such as Logistic Regression (LR), Support Vector Machine (SVM), XGBoost, and AdaBoost. Furthermore, HSSAE also surpasses deep learning models, including Convolutional Neural Networks (CNN), Multilayer Perceptrons (MLP), and Recurrent Neural Networks (RNN), establishing its superiority in sparse data prediction tasks. The ability of the proposed HSSAE algorithm to perform effective feature selection makes the model robust and suitable for sparse data applications, especially sensitive domains such as healthcare and cybersecurity that require high prediction accuracy.
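One plausible reading of the hybrid objective above is an α-weighted mix of L1 and L2 reconstruction penalties combined with binary cross-entropy on the classifier output; the exact HSSAE formulation is not reproduced here, so treat the NumPy sketch below as an assumed interpretation.

```python
import numpy as np

def hybrid_loss(x, x_hat, y_true, y_prob, alpha=0.5, lam=1e-2, eps=1e-12):
    """Illustrative hybrid loss: alpha*L1 + (1-alpha)*L2 on the reconstruction,
    plus binary cross-entropy on the predicted class probability.
    This is an assumed interpretation, not the authors' exact HSSAE objective."""
    l1 = np.mean(np.abs(x - x_hat))                 # L1 reconstruction term
    l2 = np.mean((x - x_hat) ** 2)                  # L2 reconstruction term
    bce = -np.mean(y_true * np.log(y_prob + eps)
                   + (1 - y_true) * np.log(1 - y_prob + eps))
    return lam * (alpha * l1 + (1 - alpha) * l2) + bce

# Toy usage with random data.
x = np.random.rand(8, 20)                           # sparse-ish tabular batch
x_hat = x + 0.05 * np.random.randn(8, 20)           # toy reconstruction
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
y_prob = np.clip(y_true * 0.8 + 0.1, 0.0, 1.0)      # toy predicted probabilities
print(hybrid_loss(x, x_hat, y_true, y_prob))
```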
Citations: 0
Prompt design for medical question answering with Large Language Models
IF 4.9 | Pub Date: 2025-10-16 | DOI: 10.1016/j.mlwa.2025.100758
Leonid Kuligin, Jacqueline Lammert, Aleksandr Ostapenko, Keno Bressem, Martin Boeker, Maximilian Tschochohei
The combination of prompting technique and the choice of foundation model determines end-to-end workflow performance on a given task. We aim to provide comprehensive guidance on the best-performing prompting techniques across a variety of LLMs for medical question answering. We evaluated 15 large LLMs (incl. Claude 3.5 Sonnet, Gemini Pro, Llama, Mistral, OpenAI GPT-4o and 4.1) and 6 smaller models (incl. Gemma, Mistral Nemo, Llama 3.1, Gemini Flash) across five prompting techniques on neuro-oncology exam questions. Using the established MedQA dataset and a novel neuro-oncology question set, we compared basic prompting, chain-of-thought reasoning, and more complex agent-based methods incorporating external search capabilities. Results showed that the Reasoning and Acting (ReAct) approach combined with giving the LLM access to Google Search performed best on large models such as Claude 3.5 Sonnet (81.7% accuracy, and 85.5% for v2). We also showed that large models significantly outperformed smaller ones on the MedQA dataset (79.3% vs. 51.2% accuracy) and that complex agentic patterns like Language Agent Tree Search provided minimal benefits despite 5x higher latency. We recommend that practitioners experiment with various techniques for their specific use case and foundation model, and favor simple prompting patterns with large models, as they offer the best balance of accuracy and efficiency.
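For readers unfamiliar with the ReAct pattern, the sketch below shows the generic thought, action, observation loop with a stubbed model call and a stubbed search tool; the prompt wording, model, and search backend are placeholders rather than the authors' setup.

```python
# Minimal ReAct-style loop with stubbed LLM and search tool (placeholders only).
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted model via an API.
    return "Thought: I should check the guideline.\nAction: search[glioma grading criteria]"

def search_tool(query: str) -> str:
    # Placeholder for an external search backend (e.g. a web search API).
    return "WHO CNS5 grading summary ..."

def react_answer(question: str, max_steps: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        transcript += reply + "\n"
        if "Action: search[" in reply:
            query = reply.split("Action: search[", 1)[1].rstrip("]")
            transcript += f"Observation: {search_tool(query)}\n"   # feed tool output back
        else:
            break                                                  # model produced a final answer
    return transcript

print(react_answer("Which WHO grade is a typical glioblastoma?"))
```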
Citations: 0
DICOMP: Deep Reinforcement Learning for Integer Compression
IF 4.9 | Pub Date: 2025-10-15 | DOI: 10.1016/j.mlwa.2025.100756
Mohamad Khalil Farhat, Ji Zhang, Xiaohui Tao, Tianning Li
This paper presents DICOMP (Deep Reinforcement Learning for Integer Compression), a novel approach that employs Deep Reinforcement Learning (DRL) to optimize integer compression. DICOMP is the first known approach to apply reinforcement learning specifically to integer compression, filling a significant gap in current research. Unlike traditional methods based on statistical or dictionary techniques, DICOMP formulates compression as a sequential decision-making problem. The core innovation is a DRL agent that explores various mathematical operations to minimize an integer's memory size. The discovered optimal strategy divides the integer by a set of four prime factors, effectively transforming its representation into a compact base-4 encoding. This process enables lossless size reduction without relying on hand-crafted strategies. Experiments on diverse datasets show that the learned strategy achieves a size reduction of more than 80%, outperforming both traditional and other learning-based methods. Despite its learning-based nature, DICOMP maintains competitive speed and decompression efficiency, making it practical for resource-constrained environments. DICOMP thus represents a significant advancement in intelligent, efficient, and flexible compression techniques.
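As a hedged illustration of the base-4 aspect described above (the learned DICOMP policy and its prime-factor operations are not reproduced here), the snippet below re-encodes an integer in base 4 and compares a packed 2-bits-per-digit size against storing the number as decimal text.

```python
def to_base4(n: int) -> list:
    """Return the base-4 digits of a non-negative integer (most significant first)."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, 4)
        digits.append(r)
    return digits[::-1]

n = 982_451_653                        # arbitrary example integer
digits = to_base4(n)
packed_bits = 2 * len(digits)          # each base-4 digit fits in 2 bits
decimal_text_bits = 8 * len(str(n))    # 1 byte per ASCII decimal character
print(f"n = {n}")
print(f"base-4 digits: {digits}")
print(f"packed base-4 size : {packed_bits} bits")
print(f"decimal text size  : {decimal_text_bits} bits")
print(f"raw binary size    : {n.bit_length()} bits")
```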
Citations: 0
A novel explainable AI-based design optimization framework to estimate sustainability and economic impacts of reinforced concrete structures
IF 4.9 | Pub Date: 2025-10-15 | DOI: 10.1016/j.mlwa.2025.100760
Nadeem Iqbal, Khurram Shabbir, Mohamed Noureldin
Commonly, structures are designed with a focus on safety and serviceability, while structural sustainability is often overlooked at the preliminary design stage. Optimizing a design that accounts for environmental, economic, and structural factors early in the process requires substantial time and resources. This paper introduces an innovative Explainable Artificial Intelligence (XAI) approach to optimize the environmental and economic impacts of reinforced concrete building designs at an early stage. First, machine learning (ML) models are developed to predict carbon emissions, embodied energy, and life cycle costs based on materials and basic construction information. Then, XAI techniques such as SHAP, PDP, ICE, and LIME are used to identify the key input features that influence Life Cycle Assessment (LCA) and Life Cycle Cost Assessment (LCCA). Finally, the counterfactual (CF) technique optimizes the design by modifying these key features. The results show that XGBoost is the best-performing model (R² = 0.99) on the dataset. XAI analysis identifies material quantity as the most influential variable, with other significant factors including concrete strength, distance to construction and disposal sites, vehicle capacity, and the daily volume of concrete poured. Using these insights, CF optimization reduces both LCA and LCCA by 10–20%, in line with the predefined target outcomes. This study demonstrates the potential of XAI and ML to optimize the design process at the preliminary stage, balancing sustainability and economic efficiency.
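A minimal sketch of the general pattern described above, a gradient-boosted regressor plus SHAP attributions, fitted on synthetic data; the feature names, model settings, and data are illustrative assumptions, not the study's dataset or trained model.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
# Hypothetical design features (illustrative only): material quantity,
# concrete strength, haul distance, vehicle capacity, daily pour volume.
X = rng.uniform(0, 1, size=(500, 5))
y = 3 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])   # per-feature attributions for 10 designs
print(shap_values.shape)
```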
Citations: 0
Longitudinal abuse and sentiment analysis of Hollywood movie dialogues using language models
IF 4.9 | Pub Date: 2025-10-14 | DOI: 10.1016/j.mlwa.2025.100749
Rohitash Chandra, Guoxiang Ren
Over the past decades, there has been an increase in the prevalence of abusive and violent content in Hollywood movies. In this study, we use language models to conduct a longitudinal abuse and sentiment analysis of Hollywood Oscar and blockbuster movie dialogues from 1950 to 2024. We provide an analysis of subtitles for over a thousand movies, which are categorised into four genres. We employ fine-tuned language models to examine the trends and shifts in emotional and abusive content over the past seven decades. Findings reveal significant temporal changes in movie dialogues, which reflect broader social and cultural influences. Overall, the emotional tendencies in the films are diverse, and the detected level of abusive content also exhibits significant fluctuations. The results show a gradual rise in abusive content in recent decades, reflecting changes in social norms and regulatory policy. Genres such as thrillers still present a higher frequency of abusive content, emphasising the ongoing narrative role of violence and conflict. At the same time, underlying positive emotions such as humour and optimism remain prevalent in most of the movies. Furthermore, the gradual increase of abusive content in movie dialogues has been significant over the last two decades, during which Oscar-nominated movies overtook the top ten blockbusters.
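For orientation, the sketch below shows one generic way to score dialogue lines with an off-the-shelf sentiment pipeline and aggregate by decade; the checkpoint and the toy data are placeholders, and the paper's fine-tuned abuse and sentiment models are not reproduced here.

```python
from collections import defaultdict
from transformers import pipeline

# Placeholder data: (release_year, dialogue line). The study itself uses movie subtitles.
lines = [
    (1955, "I have never been happier in my life."),
    (1998, "Get out of my way or you'll regret it."),
    (2019, "Everything is going to be fine, I promise."),
]

clf = pipeline("sentiment-analysis")   # downloads a default English sentiment model

by_decade = defaultdict(list)
for year, text in lines:
    label = clf(text)[0]["label"]      # e.g. "POSITIVE" / "NEGATIVE"
    by_decade[(year // 10) * 10].append(label)

for decade in sorted(by_decade):
    labels = by_decade[decade]
    share_neg = labels.count("NEGATIVE") / len(labels)
    print(f"{decade}s: {len(labels)} lines, negative share = {share_neg:.2f}")
```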
Citations: 0
Real-time detection of acoustic anomalies in drone servo motors using edge-based machine learning
IF 4.9 | Pub Date: 2025-10-14 | DOI: 10.1016/j.mlwa.2025.100755
Tal Kfir, Sahar Tuvyahu, Boaz Ben Moshe, Or Haim Anidjar
The growing demand for Unmanned Aerial Vehicles (UAVs) has led to a significant increase in their variety and usage, emphasizing the need for resilient and autonomous onboard monitoring systems. To address this, we present a lightweight, scalable solution for real-time anomaly detection focused on the mechanical servos that control UAV flight dynamics. While conventional deep learning methods offer high accuracy, they often require substantial computational and memory resources, making them unsuitable for the constrained environments of small aircraft.
In this study, we introduce a real-time anomaly detection framework that combines edge computing and Internet of Things (IoT) principles to analyze acoustic signals from UAV servo motors. Our system leverages Tiny Machine Learning (TinyML) techniques to perform local data processing and inference directly on embedded hardware, minimizing latency and energy consumption.
The proposed method uses a compact neural network deployed on an ultra-lightweight microcontroller (under 100 grams) to classify servo conditions. Acoustic data collected under multiple fault scenarios were minimally preprocessed and fed into the model. Experimental evaluation shows promising performance with 86% accuracy, 86% recall, and 87% precision. This edge-based AI approach supports distributed deployment across UAV fleets, reduces reliance on external infrastructure, and enhances both safety and maintenance efficiency in diverse operational environments.
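A hedged sketch of the general TinyML workflow this paragraph describes: a small dense network over simple acoustic features, followed by TensorFlow Lite conversion for a microcontroller target. The feature representation, layer sizes, and data are assumptions, not the authors' model.

```python
import numpy as np
import tensorflow as tf

# Placeholder features: e.g. magnitude-spectrum bins averaged per audio frame.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 32)).astype("float32")   # 400 frames, 32 spectral features
y = rng.integers(0, 3, size=400)                        # 3 servo conditions (toy labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Convert to a compact TFLite model suitable for a microcontroller runtime.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
print(f"TFLite model size: {len(tflite_bytes)} bytes")
```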
Citations: 0
A comprehensive survey on deep reinforcement learning in object tracking
IF 4.9 | Pub Date: 2025-10-10 | DOI: 10.1016/j.mlwa.2025.100745
Hy Nguyen, Srikanth Thudumu, Hung Du, Rajesh Vasa, Kon Mouzakis
The exploration of Deep Reinforcement Learning (DRL) in Object Tracking (OT) represents an emerging paradigm and is gaining traction as an alternative to conventional CNN-based methods. DRL's ability to integrate spatial and temporal context and to learn from interactions makes it particularly suited to the sequential decision-making required in OT. This survey reviews a range of DRL-based methods for OT, systematically collating and analyzing existing research to highlight trends and challenges. It also evaluates different DRL algorithms, categorizing them based on their performance in various dynamic environments. Additionally, we analyze existing evaluation benchmarks and simulators, along with the challenges, potential solutions, and trends in DRL-based OT methods. This paper aims to bridge the fragmented literature on DRL applications in OT, providing a unified view that identifies common approaches, challenges, and potential synergies.
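To make the tracking-as-sequential-decision framing concrete, here is a minimal sketch of one common formulation (not tied to any specific surveyed method): the state is the current bounding box, actions shift it by a fixed step, and the reward is the intersection-over-union (IoU) with the ground-truth box.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Discrete action set: shift the box by a fixed step (one common choice).
ACTIONS = {"left": (-5, 0), "right": (5, 0), "up": (0, -5), "down": (0, 5), "stop": (0, 0)}

def step(box, action, gt_box):
    dx, dy = ACTIONS[action]
    new_box = (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)
    reward = iou(new_box, gt_box)      # reward = overlap with the ground-truth box
    return new_box, reward

box, gt = (10, 10, 50, 50), (18, 12, 58, 52)
box, r = step(box, "right", gt)
print(box, round(r, 3))
```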
Citations: 0
Text-enhanced multimodal deep learning models for predicting chloride transport in concrete
IF 4.9 | Pub Date: 2025-10-10 | DOI: 10.1016/j.mlwa.2025.100753
Bingbing Guo, Yujie Jiao, Yan Wang, Fengling Zhang, Yuanfei Guo, Qinghao Guan
Reinforced concrete (RC) structures are widely used in civil engineering, and accurate prediction of chloride transport is essential for durability design and service life estimation. Existing machine learning models for predicting chloride transport in concrete have primarily relied on researchers' expertise for feature construction. However, the factors affecting chloride transport are numerous and highly complex, making manual feature engineering inefficient and labor-intensive. This study developed text-enhanced multimodal models that integrate natural language processing (NLP) with deep neural networks (DNN) to automatically extract features from textual information, including properties of raw materials, experimental methods, chloride attack mechanisms, and comments. The results demonstrate that the developed multimodal models learn prior knowledge, which enables them to achieve significantly higher accuracy than numerical-data-only DNN models. Among these models, the multi-head self-attention model performs best by capturing features from multiple angles and enabling parallel computation. Crucially, the text-enhanced multimodal models maintain high accuracy even with limited numerical data.
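A minimal sketch of the general fusion idea: apply multi-head self-attention over token embeddings of the textual description, pool the result, and concatenate it with numerical mix features before a regression head. The dimensions and the upstream text encoder are placeholders; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class TextNumericRegressor(nn.Module):
    """Toy fusion model: multi-head self-attention over token embeddings,
    mean-pooled and concatenated with numeric features (illustrative only)."""
    def __init__(self, d_text=64, n_heads=4, d_num=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_text, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_text + d_num, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, token_emb, numeric):
        attended, _ = self.attn(token_emb, token_emb, token_emb)   # self-attention over tokens
        pooled = attended.mean(dim=1)                              # (batch, d_text)
        return self.head(torch.cat([pooled, numeric], dim=-1))     # predicted target value

token_emb = torch.randn(4, 20, 64)   # batch of 4 texts, 20 tokens, placeholder embeddings
numeric = torch.randn(4, 8)          # 8 numerical features (e.g. mix proportions)
print(TextNumericRegressor()(token_emb, numeric).shape)            # torch.Size([4, 1])
```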
Citations: 0
Predictive modeling for quality prediction in multi-stage manufacturing systems using artificial intelligence
IF 4.9 | Pub Date: 2025-10-10 | DOI: 10.1016/j.mlwa.2025.100754
Luis Fernando Agredano Gonzalez, Soumaya Yacout
Predicting quality characteristics in multi-stage manufacturing systems (MMSs) poses challenges due to the propagation of variation across stages. In MMSs, any variation introduced at an earlier stage can be amplified in subsequent stages. Many industries rely on in-process quality inspections to monitor and adjust manufacturing processes. Based on inspection outcomes, workers often make process adjustments to maintain product specifications. These adjustments are frequently guided by individual experience rather than systematic methods. This reliance on subjective judgment introduces variability in quality outcomes, as worker evaluations may differ. Moreover, unnecessary adjustments can inadvertently increase variation, further destabilizing the process.
This study reviews the literature on machine learning algorithms used for quality prediction in MMSs. The selected methods include partial least squares regression, principal component regression, support vector machines with linear and radial basis function kernels, random forest, k-nearest neighbors, XGBoost, and a feed-forward neural network.
We applied these techniques to an MMS that produces aircraft engine parts. The process involves intermediate inspections using coordinate measuring machines (CMM). Our predictions rely solely on in-process inspection data, without incorporating process parameters or sensor readings. Historical quality characteristic (QC) data guides the predictions for subsequent stages, including final inspections. This enables proactive quality control and production flow optimization.
The results demonstrate that the chosen models can predict the QCs’ values for both consecutive and advanced stages in the MMS. Limitations and future directions are discussed.
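As a hedged illustration of the prediction setup described above (not the study's data or final model choice), the sketch below trains one of the listed methods, a random forest, to predict a final-stage quality characteristic from earlier-stage measurements using synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 6 upstream QC measurements per part; final QC depends on them.
X = rng.normal(size=(600, 6))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.2 * X[:, 5] + 0.1 * rng.normal(size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out parts: {r2_score(y_te, model.predict(X_te)):.3f}")
```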
Citations: 0