
Latest articles from Machine Learning with Applications

Machine learning based adaptive soft error mitigation efficiency
IF 4.9 Pub Date: 2025-11-25 DOI: 10.1016/j.mlwa.2025.100797
Nicholas Maurer, Mohammed Abdallah
This work presents a novel adaptive framework for soft error mitigation in space-based systems, designed to resolve the fundamental conflict between system performance and radiation protection. By leveraging a Long Short-Term Memory (LSTM) model to predict real-time solar particle flux, our approach dynamically enables or disables software-based mitigation techniques. This contrasts with the static, "always-on" methods of existing systems, offering a significant improvement in computational efficiency. The proposed LSTM model was trained on NASA solar particle flux data, achieving a mean average error of 7.65e-6, demonstrating its high accuracy in predicting nonlinear particle events. Our simulation, which applies this predictive model to a tiered system of redundant processing, checkpointing, and watchdog timers, shows a substantial reduction in overhead. During the 18,414-second test period, the combined adaptive mitigation methods introduced only 20.75–51.6 s of overhead, representing a 99.4 % reduction in overhead compared to continuous, static mitigation. This research's primary contribution is a demonstrated proof-of-concept for an intelligent, self-adaptive system that can maintain high reliability while drastically improving performance. This approach provides a pathway for utilizing more cost-effective commercial-off-the-shelf (COTS) processors in radiation-intensive environments.
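The core of the framework is a gating decision: a flux forecast determines, interval by interval, whether the software mitigation (and its time overhead) is switched on. A minimal sketch of that idea follows; the threshold and per-interval overhead figures are illustrative placeholders, not values from the paper.

```python
# Sketch of the adaptive gating idea: a flux forecast decides, per interval,
# whether software mitigation (with its time overhead) is worth enabling.
# Threshold and overhead figures are hypothetical, not from the paper.

FLUX_THRESHOLD = 1e-4    # hypothetical particle-flux trigger level
MITIGATION_COST_S = 0.5  # hypothetical overhead per protected interval

def mitigation_schedule(predicted_flux):
    """Return per-interval on/off decisions from a list of flux forecasts."""
    return [flux >= FLUX_THRESHOLD for flux in predicted_flux]

def total_overhead(schedule, cost_per_interval=MITIGATION_COST_S):
    """Overhead actually incurred vs. the static always-on baseline."""
    adaptive = sum(schedule) * cost_per_interval
    static = len(schedule) * cost_per_interval
    return adaptive, static

forecast = [2e-6, 5e-6, 3e-4, 8e-4, 9e-6]  # toy stand-in for LSTM output
sched = mitigation_schedule(forecast)
adaptive, static = total_overhead(sched)
```

With the toy forecast above, mitigation runs in only two of five intervals, which is exactly the overhead-reduction mechanism the abstract quantifies against continuous mitigation.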
Citations: 0
Comparing model-specific and model-agnostic features importance methods using machine learning with technical indicators: A NASDAQ sector-based study
IF 4.9 Pub Date: 2025-11-25 DOI: 10.1016/j.mlwa.2025.100799
Jeonghoe Lee, Lin Cai
Predicting stock prices is crucial for making informed investment decisions as stock markets significantly influence the global economy. Although previous studies have explored feature importance methods for stock price prediction, comprehensive comparisons of those methods have been limited. This study aims to provide a detailed comparison of different feature importance methods for selecting technical indicators to predict stock prices. Specifically, this research analyzed financial data from the 11 sectors of the NASDAQ. A moving window forecasting framework was implemented to dynamically capture the evolving patterns in financial markets over time. Model-specific feature importance methods were compared with model-agnostic approaches. Multiple machine learning algorithms, including Random Forest (RF) and Multi-layer Neural Networks (MNNs), were employed to forecast stock prices. Additionally, extensive hyperparameter tuning was conducted to improve model explainability, contributing to the field of Explainable Artificial Intelligence (XAI). The results highlight the predictive effectiveness of different feature importance methods in selecting optimal technical indicators, thereby offering valuable insights for enhancing stock price forecasting accuracy and model transparency. In summary, this research offers a comprehensive comparison of feature importance methods, emphasizing their application in the selection of technical indicators in a dynamic, rolling prediction setting.
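The moving-window framework the abstract describes amounts to rolling-origin splits over a time-ordered series. A minimal sketch of such a splitter is below; window sizes and step are illustrative, since the paper's exact configuration is not stated here.

```python
# Minimal moving-window (rolling-origin) splitter of the kind the study's
# forecasting framework implies; window sizes here are illustrative.

def moving_window_splits(n_samples, train_size, test_size, step=1):
    """Yield (train_indices, test_indices) pairs that roll forward in time."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += step

splits = list(moving_window_splits(n_samples=10, train_size=5,
                                   test_size=2, step=2))
```

Each split trains on a fixed-length recent window and tests strictly on later observations, so feature importances can be recomputed per window as market regimes shift.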
Citations: 0
Rethinking data-efficient artificial intelligence for low-resource settings
IF 4.9 Pub Date: 2025-11-19 DOI: 10.1016/j.mlwa.2025.100796
Ronald Katende
Recent advances in AI have been driven by data abundance and computational scale, assumptions that rarely hold in low-resource environments. We examine how constraints in data, compute, connectivity, and institutional capacity reshape what effective AI should be. Using a structured mixed-methods review and PRISMA-inspired protocol over 300+ studies, we compare data-efficient approaches, physics-informed models, few-shot and self-supervised learning, parameter-efficient fine-tuning, TinyML, and federated learning, and evaluate them across deployment axes (data needs, compute footprint, latency, robustness, interpretability, and maintenance). Across health, agriculture, climate, and education, we show that lean, operator-informed, and locally validated methods often outperform conventional large-scale models under real constraints. We argue that data-efficient AI is not a stopgap but a foundational paradigm for equitable and sustainable innovation, and we provide a decision matrix and research-policy agenda to guide practitioners and funders in low-resource settings.
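The decision matrix the authors propose can be operationalised as a weighted score over the deployment axes they name. The sketch below shows one such scoring scheme; the axis weights, candidate methods, and scores are made-up placeholders, not values from the review.

```python
# One way to operationalise the paper's decision matrix: score candidate
# methods on the named deployment axes and rank them. All weights and
# scores below are illustrative placeholders.

AXES = ["data_needs", "compute", "latency", "robustness",
        "interpretability", "maintenance"]

def rank_methods(scores, weights):
    """scores: {method: {axis: value in 0..1, higher is better}}."""
    totals = {
        method: sum(weights[a] * axis_scores[a] for a in AXES)
        for method, axis_scores in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

weights = {a: 1.0 for a in AXES}
scores = {
    "tinyml":    dict(zip(AXES, [0.9, 0.9, 0.9, 0.6, 0.5, 0.7])),
    "large_llm": dict(zip(AXES, [0.2, 0.1, 0.3, 0.8, 0.4, 0.5])),
}
ranking = rank_methods(scores, weights)
```

A funder or practitioner would adjust the weights to reflect local constraints, e.g. upweighting connectivity-sensitive axes where bandwidth is scarce.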
Citations: 0
Evaluating the effectiveness of LLMs for explainable deep reinforcement learning
IF 4.9 Pub Date: 2025-11-14 DOI: 10.1016/j.mlwa.2025.100795
Ayoub Belouadah, Marcelo Luis Ruiz-Rodríguez, Sylvain Kubler, Yves Le Traon
Understanding the decision-making of reinforcement learning (RL) agents is essential for real-world deployment. Existing eXplainable RL (XRL) techniques, such as feature attribution and policy visualization, provide insight but remain inaccessible to non-experts. Large Language Models (LLMs) offer a natural-language alternative, yet often lack logical consistency and alignment with agent goals. This study benchmarks three explanation generation methods: Chain-of-Thought (CoT) prompting as the standard baseline used in prior work, Monte Carlo Tree Search (MCTS) augmentation, and supervised fine-tuning (SFT) across various models. Evaluations using Soundness and Fidelity show that CoT frequently produces reasoning errors, whereas MCTS improves quality for larger models (avg. +23% Soundness, +17% Fidelity), while SFT yields greater and more consistent gains for smaller ones (+58% Soundness, +52% Fidelity), underscoring the need to align methods with model capacity. An LLM-as-a-Judge framework further validates these findings, showing strong agreement with human assessments (weighted Cohen’s κ=0.77, Spearman ρ=0.88), supporting scalable and reliable assessment of textual explanations.
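The weighted Cohen's kappa the study reports (κ = 0.77 between the LLM judge and human raters) is a standard agreement statistic; the following stdlib computation shows the linearly weighted form on toy ordinal ratings, which is an assumption about the weighting scheme since the abstract does not specify it.

```python
# Stdlib computation of linearly weighted Cohen's kappa, the agreement
# statistic reported between the LLM judge and human raters.
# Ratings here are toy data on an ordinal 0..k-1 scale.

def weighted_kappa(rater_a, rater_b, k):
    """Linearly weighted kappa: disagreement weight |i-j|/(k-1)."""
    n = len(rater_a)
    w = lambda i, j: abs(i - j) / (k - 1)
    # observed disagreement over matched rating pairs
    obs = sum(w(a, b) for a, b in zip(rater_a, rater_b)) / n
    # expected disagreement from the two raters' marginal distributions
    pa = [rater_a.count(i) / n for i in range(k)]
    pb = [rater_b.count(j) / n for j in range(k)]
    exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1 - obs / exp

a = [0, 1, 2, 2, 1, 0]   # toy human ratings
b = [0, 1, 2, 1, 1, 0]   # toy LLM-judge ratings
kappa = weighted_kappa(a, b, k=3)
```

Weighting matters for ordinal scales: a judge that is off by one level is penalised less than one that is off by two, which is why weighted kappa (rather than plain agreement) is the appropriate check for an LLM-as-a-Judge setup.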
Citations: 0
Model-based multispectral texture inpainting and denoising
IF 4.9 Pub Date: 2025-11-13 DOI: 10.1016/j.mlwa.2025.100772
Michal Haindl, Vojtěch Havlíček, Pavel Žid
Visual texture inpainting and denoising aim not necessarily to recover the exact pixel-wise correspondence of the original, often unobservable, texture, but rather to reconstruct a texture that is visually indistinguishable from the original. This objective differs from standard image restoration goals and therefore may require fundamentally different restoration techniques. This work presents two multispectral texture restoration methods capable of simultaneously reducing additive Gaussian or Poisson noise and inpainting missing textural regions without visible seams or repetitions. Both methods rely on descriptive three-dimensional statistical spatial models. The first method employs a complex three-dimensional spatial Gaussian mixture model, particularly suited for regular or near-regular textures. The second method uses a causal simultaneous autoregressive model, which is more appropriate for random textures or scenarios with limited training data. Importantly, both models are inherently multispectral, enabling the restoration of even hyperspectral textures. As such, they avoid the spectral quality compromises typically encountered in many alternative approaches. The Gaussian and Poisson noise reduction achieved by the proposed method is compared with four alternative approaches, showing an average improvement of 1%–16% across the spectral range while avoiding the blurring artifacts observed in some of the other methods.
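The causal autoregressive idea, predicting each sample from already-seen neighbours, can be illustrated in one dimension. The sketch below fits a lag-1 coefficient and extrapolates into a missing run; this is only the skeleton of the approach, since the paper's model is three-dimensional, multispectral, and uses a full causal neighbourhood rather than a single lag.

```python
# A 1-D simplification of the causal autoregressive idea: estimate the
# lag-1 coefficient from observed samples, then predict each sample from
# its causal (already-seen) neighbour, e.g. to fill a missing run.

def fit_ar1(samples):
    """Closed-form least-squares estimate of x[t] ~ a * x[t-1]."""
    num = sum(x1 * x0 for x0, x1 in zip(samples, samples[1:]))
    den = sum(x0 * x0 for x0 in samples[:-1])
    return num / den

def inpaint(samples, missing_from):
    """Replace samples[missing_from:] with causal AR(1) predictions."""
    a = fit_ar1(samples[:missing_from])
    out = list(samples[:missing_from])
    for _ in range(len(samples) - missing_from):
        out.append(a * out[-1])
    return out

signal = [1.0, 0.5, 0.25, 0.125, 0.0625, None, None]  # last two missing
restored = inpaint(signal, missing_from=5)
```

Because the predictor only ever looks backwards, the fill-in is seam-free at the boundary of the missing region, which is the property the paper exploits for inpainting without visible seams.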
Citations: 0
Robust anomaly detection through multi-modal autoencoder fusion for small vehicle damage detection
IF 4.9 Pub Date: 2025-11-12 DOI: 10.1016/j.mlwa.2025.100794
Sara Khan , Mehmed Yüksel , Frank Kirchner
Wear and tear detection in fleet and shared vehicle systems is a critical challenge, particularly in rental and car-sharing services, where minor damage, such as dents, scratches, and underbody impacts, often goes unnoticed or is detected too late. Currently, manual inspection methods are the default approach, but are labour-intensive and prone to human error. In contrast, state-of-the-art image-based methods are less reliable when the vehicle is moving, and they cannot effectively capture underbody damage due to limited visual access and spatial coverage. This work introduces a novel multi-modal architecture based on anomaly detection to address these issues. Sensors such as Inertial Measurement Units (IMUs) and microphones are integrated into a compact device mounted on the vehicle’s windshield. This approach supports real-time damage detection while avoiding the need for highly resource-intensive sensors. We developed multiple variants of multi-modal autoencoder-based architectures and evaluated them against unimodal and state-of-the-art methods. Our multi-modal ensemble model with pooling achieved the highest performance, with a Receiver Operating Characteristic-Area Under Curve (ROC-AUC) of 92%, demonstrating its effectiveness in real-world applications. This approach can also be extended to other applications, such as improving automotive safety. It can integrate with airbag systems for efficient deployment and help autonomous vehicles by complementing other sensors in collision detection.
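The ensemble-with-pooling step the abstract credits with the best ROC-AUC amounts to late fusion of per-modality anomaly scores. The sketch below shows that fusion on two scalar scores standing in for IMU and microphone autoencoder reconstruction errors; the pooling rules and threshold are illustrative, not the paper's tuned values.

```python
# Sketch of late fusion over per-modality anomaly scores (e.g. autoencoder
# reconstruction errors from the IMU and microphone streams). Pooling rule
# and threshold are illustrative stand-ins for the paper's ensemble.

def fuse_scores(per_modality_scores, pooling="mean"):
    """Combine one anomaly score per modality into a single score."""
    if pooling == "mean":
        return sum(per_modality_scores) / len(per_modality_scores)
    if pooling == "max":  # most-alarmed modality wins
        return max(per_modality_scores)
    raise ValueError(f"unknown pooling: {pooling}")

def is_damage(imu_error, mic_error, threshold=0.5, pooling="mean"):
    """Flag a damage event when the fused anomaly score crosses threshold."""
    return fuse_scores([imu_error, mic_error], pooling) >= threshold

flag_mean = is_damage(0.9, 0.2)                  # loud IMU, quiet mic
flag_max = is_damage(0.9, 0.2, pooling="max")
flag_quiet = is_damage(0.1, 0.2)                 # both modalities calm
```

Mean pooling requires corroboration across modalities, while max pooling lets a single modality raise the alarm; which is preferable depends on whether false positives or missed underbody impacts are costlier.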
Citations: 0
Lightweight deep learning system for infant-cry recognition with real-time notification in resource-constrained environments
IF 4.9 Pub Date: 2025-11-10 DOI: 10.1016/j.mlwa.2025.100791
Héritier Nsenge Mpia , Muyisa Mumbere Kavalami , Grâce Kasereka Lusenge , Kakule Pascal Ushindi , Dieu-Donné Kambale Kyalengekania , Olivier Muzembe Ciswaka
Ensuring infant safety is a major challenge, especially when constant supervision is not possible. Crying is the main acoustic cue that reveals an infant’s needs. However, most baby monitors perform poorly in noisy or low-resource environments. The authors propose a lightweight deep-learning system that links YAMNet transfer embeddings with a compact convolutional neural network (CNN). A Flask microservice connects the model to WhatsApp, sending alerts to caregivers in real time. The framework runs smoothly on a Raspberry Pi 4B and was trained on 9000 audio clips drawn from Kaggle and home recordings. The CNN reached 95.2 % accuracy, 0.93 F1-score, and 0.96 ROC-AUC, surpassing both MLP and Random Forest models. Latency from audio capture to message delivery stays below 0.8 s, even with background noise. By combining deep-audio transfer learning, IoT-based communication, and instant messaging, this work delivers a novel, reproducible, and low-cost intelligent monitoring solution for infant-cry detection in resource-limited settings.
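The alert path described (classify a buffered clip, notify only on a confident cry) has a simple control-flow shape. In the sketch below, the classifier and message sender are stubs standing in for the YAMNet-plus-CNN model and the WhatsApp microservice; the confidence threshold is a hypothetical value.

```python
# End-to-end shape of the alert path: embed a clip, classify, and notify
# only on a confident cry. The classifier and sender here are stubs; the
# real system uses YAMNet embeddings, a compact CNN, and a WhatsApp API.

CRY_THRESHOLD = 0.8  # hypothetical confidence cut-off

def monitor(clips, model, send_alert, threshold=CRY_THRESHOLD):
    """Run once over buffered clips; return the clips that triggered alerts.

    model: any callable mapping a clip to P(cry).
    send_alert: any callable taking a message string.
    """
    sent = []
    for clip in clips:
        p_cry = model(clip)
        if p_cry >= threshold:
            send_alert(f"Cry detected (p={p_cry:.2f})")
            sent.append(clip)
    return sent

toy_model = lambda clip: clip["energy"]  # stand-in for the trained CNN
outbox = []                              # stand-in for the WhatsApp channel
alerts = monitor([{"energy": 0.95}, {"energy": 0.30}],
                 toy_model, outbox.append)
```

Keeping the model callable and the sender callable as parameters is what lets the same loop run unchanged on a Raspberry Pi with the real CNN and messaging backend swapped in.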
Citations: 0
BCLSA: Advancing Bangla sentiment analysis with concept-level reasoning and efficiency
IF 4.9 Pub Date: 2025-11-08 DOI: 10.1016/j.mlwa.2025.100793
Mohammad Aman Ullah
Accurate sentiment analysis in Bangla remains a significant research challenge due to limited annotated corpora, complex morphology, insufficient linguistic resources, and the absence of interpretable concept-level knowledge bases. Existing approaches often struggle to capture context-dependent sentiment, idiomatic expressions, and domain adaptability, further constrained by the low-resource nature of the language. To address these limitations, this study proposes the Bangla Concept-Level Sentiment Analysis (BCLSA) framework, introducing two dedicated algorithms: a Bangla-specific concept extraction method and the Concept-Level Sentiment Analysis for Bangla (CLSA-Bn) weighted scoring algorithm. The first extracts sentiment-bearing concepts through syntactic pattern recognition, multiword expression detection, and affective lexicon mapping, while the second refines polarity estimation via negation handling, modifier scaling, and weighted aggregation for interpretable classification. To mitigate data scarcity and morphological variation, BCLSA applies language-specific preprocessing, including Unicode normalization, phonetic correction, and lemmatization. Evaluations on 10,243 formal news articles and 12,084 informal social media comments show that CLSA-Bn outperforms the Bi-LSTM and SVM baselines, achieving 90.2 % Accuracy, 90 % Macro-F1, 85 % Matthews Correlation Coefficient (MCC), and 94 % Area Under the Curve (AUC) for formal text, and 86.8 % Accuracy, 86 % Macro-F1, and 91 % AUC for informal text. The proposed Concept-Level Polarity Accuracy (CLPA) metric confirmed semantic fidelity above 88 %. Efficiency analysis revealed that CLSA-Bn requires only 30 s initialization, 5 ms inference, and a 50 MB model. Error rate analysis further confirmed robustness with the lowest misclassification ratios (9.8 % formal, 13.2 % informal), demonstrating balanced improvement in performance and error minimization.
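The weighted scoring step (lexicon polarities refined by negation flipping and modifier scaling, then aggregated) can be sketched compactly. The lexicon, negators, and intensifier weights below are illustrative English placeholders, not the paper's Bangla resources or the CLSA-Bn algorithm itself.

```python
# Toy version of the weighted scoring step: lexicon polarities, negation
# flipping, and intensifier scaling, aggregated into one sentence score.
# Lexicon, negators, and weights are illustrative, not the paper's.

LEXICON = {"good": 1.0, "bad": -1.0, "excellent": 1.5}
NEGATORS = {"not"}
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}

def score_sentence(tokens):
    """Average polarity over sentiment-bearing tokens, with negation
    flipping and intensifier scaling applied to the next polar term."""
    total, polar_terms = 0.0, 0
    negate, scale = False, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
        elif tok in INTENSIFIERS:
            scale *= INTENSIFIERS[tok]
        elif tok in LEXICON:
            value = LEXICON[tok] * scale
            total += -value if negate else value
            polar_terms += 1
            negate, scale = False, 1.0  # reset after each polar term
    return total / polar_terms if polar_terms else 0.0

pos = score_sentence("the food was very good".split())
neg = score_sentence("the service was not good".split())
```

Resetting the negation and scaling state after each polar term keeps modifiers scoped to the nearest sentiment word, a common simplification that the full CLSA-Bn algorithm presumably handles with richer syntactic patterns.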
引用次数: 0
Cloud-native causal AI for supply chain KPI monitoring: A GCP framework to diagnose out-of-stock events
IF 4.9 Pub Date : 2025-11-08 DOI: 10.1016/j.mlwa.2025.100765
Tarique Ameer, Omid Fatahi Valilai
Effective supply chain management (SCM) is essential for ensuring operational efficiency, cost optimization, and customer satisfaction. In an increasingly dynamic business environment, the integration of cloud computing and Causal Artificial Intelligence (AI) presents new opportunities for value creation and intelligent decision-making. This study proposes a framework that leverages the capabilities of Google Cloud Platform (GCP) for real-time Key Performance Indicator (KPI) monitoring and supply chain analytics. By incorporating Causal AI, the framework enables the identification of underlying causes of out-of-stock (OOS) situations, rather than merely observing correlations. The research presents an end-to-end architecture combining data pipelines, real-time dashboards, and causal inference models to proactively detect and address OOS risks. This integrated, data-driven approach aims to improve inventory accuracy, enhance forecasting, and ultimately strengthen supply chain resilience and responsiveness.
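The distinction the abstract draws between observing correlations and identifying underlying causes can be illustrated with a minimal backdoor adjustment over one confounder. The field names ("promo", "late_delivery", "oos"), the toy records, and the helper function are hypothetical illustrations, not part of the paper's GCP framework.

```python
# Minimal sketch of causal (rather than correlational) OOS diagnosis:
# a backdoor adjustment over a single discrete confounder.
from collections import defaultdict

def adjusted_effect(records, treatment, outcome, confounder):
    """Backdoor estimate of P(Y=1 | do(T=1)) - P(Y=1 | do(T=0))."""
    strata = defaultdict(list)
    for r in records:
        strata[r[confounder]].append(r)
    n = len(records)
    effect = 0.0
    for rows in strata.values():
        weight = len(rows) / n  # P(confounder takes this value)
        for t, sign in ((1, 1.0), (0, -1.0)):
            group = [r for r in rows if r[treatment] == t]
            if group:  # strata without support are skipped in this sketch
                p = sum(r[outcome] for r in group) / len(group)
                effect += sign * weight * p
    return effect

records = [
    # no-promo stratum: late deliveries raise the OOS rate from 0 to 0.5
    {"promo": 0, "late_delivery": 1, "oos": 1},
    {"promo": 0, "late_delivery": 1, "oos": 0},
    {"promo": 0, "late_delivery": 0, "oos": 0},
    {"promo": 0, "late_delivery": 0, "oos": 0},
    # promo stratum: higher baseline OOS, lateness still adds risk
    {"promo": 1, "late_delivery": 1, "oos": 1},
    {"promo": 1, "late_delivery": 1, "oos": 1},
    {"promo": 1, "late_delivery": 0, "oos": 1},
    {"promo": 1, "late_delivery": 0, "oos": 0},
]
effect = adjusted_effect(records, "late_delivery", "oos", "promo")  # 0.5
```

Stratifying on the confounder before averaging is what separates this estimate from a raw correlation between lateness and stock-outs, which the promotion variable would otherwise distort.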
{"title":"Cloud-native causal AI for supply chain KPI monitoring: A GCP framework to diagnose out-of-stock events","authors":"Tarique Ameer,&nbsp;Omid Fatahi Valilai","doi":"10.1016/j.mlwa.2025.100765","DOIUrl":"10.1016/j.mlwa.2025.100765","url":null,"abstract":"<div><div>Effective supply chain management (SCM) is essential for ensuring operational efficiency, cost optimization, and customer satisfaction. In an increasingly dynamic business environment, the integration of cloud computing and Causal Artificial Intelligence (AI) presents new opportunities for value creation and intelligent decision-making. This study proposes a framework that leverages the capabilities of Google© Cloud Platform (GCP) for real-time Key Performance Indicator (KPI) monitoring and supply chain analytics. By incorporating Causal AI, the framework enables the identification of underlying causes of out-of-stock (OOS) situations, rather than merely observing correlations. The research presents an end-to-end architecture combining data pipelines, real-time dashboards, and causal inference models to proactively detect and address OOS risks. This integrated, data-driven approach aims to improve inventory accuracy, enhance forecasting, and ultimately strengthen supply chain resilience and responsiveness.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"22 ","pages":"Article 100765"},"PeriodicalIF":4.9,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Machine learning approaches to traffic accident severity prediction: Addressing class imbalance
IF 4.9 Pub Date : 2025-11-07 DOI: 10.1016/j.mlwa.2025.100792
Mohammad Amin Amiri , Saeid Afshari , Ali Soltani
Road traffic injuries continue to pose a significant public health challenge in Australia, with pedestrians representing one of the most vulnerable road user groups. Accurate prediction of injury severity, particularly fatal outcomes, is essential for improving road safety interventions and resource allocation. This study applies advanced machine learning techniques to predict pedestrian crash severity using national hospitalization and mortality data collected from 2011 to 2021. The analysis focuses on addressing class imbalance, a common issue in injury data, by evaluating the impact of several data balancing methods, including SMOTE, ADASYN, Random Oversampling (ROS), and Threshold Moving. We implement and compare four supervised learning algorithms: Logistic Regression, Support Vector Machine (SVM), Decision Tree, and XGBoost. Model performance is assessed using F1-score and macro-accuracy, with a focus on the minority (fatality) class. Results show that XGBoost combined with Threshold Moving achieves the highest performance, yielding an F1-score of 72% for fatality classification and a macro-accuracy of 84%. Additionally, feature importance analysis using SHAP values reveals age, gender, road user type, and crash location as key predictors of injury severity. The study highlights the critical role of data balancing strategies in enhancing predictive accuracy for rare but high-impact outcomes. These findings provide actionable insights for transport authorities and policymakers seeking to develop data-driven, targeted safety measures to protect pedestrians and reduce the severity of crash outcomes.
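Of the balancing methods the abstract lists, Threshold Moving is the simplest to sketch: the classifier is left unchanged, and the decision cut-off on its predicted probabilities is shifted to favor the rare (fatal) class. The scores, labels, and helper names below are illustrative assumptions, not outputs of the paper's XGBoost pipeline.

```python
# Sketch of the "Threshold Moving" strategy: pick the probability
# cut-off that maximizes F1 for the minority class on validation data.

def f1_at_threshold(scores, labels, threshold):
    """F1 for the positive class when predicting 1 iff score >= threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    """Pick the candidate cut-off that maximizes minority-class F1."""
    return max(sorted(set(scores)),
               key=lambda t: f1_at_threshold(scores, labels, t))

# Toy validation scores: only 3 of 8 cases are fatal (label 1), so the
# default 0.5 cut-off would catch one fatality and miss two.
scores = [0.95, 0.40, 0.35, 0.30, 0.20, 0.10, 0.08, 0.05]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
threshold = best_threshold(scores, labels)  # 0.30, well below 0.5
```

Because it only post-processes scores, threshold moving avoids the synthetic-sample artifacts that oversamplers such as SMOTE or ADASYN can introduce, which may be one reason it paired well with XGBoost in the reported results.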
{"title":"Machine learning approaches to traffic accident severity prediction: Addressing class imbalance","authors":"Mohammad Amin Amiri ,&nbsp;Saeid Afshari ,&nbsp;Ali Soltani","doi":"10.1016/j.mlwa.2025.100792","DOIUrl":"10.1016/j.mlwa.2025.100792","url":null,"abstract":"<div><div>Road traffic injuries continue to pose a significant public health challenge in Australia, with pedestrians representing one of the most vulnerable road user groups. Accurate prediction of injury severity, particularly fatal outcomes, is essential for improving road safety interventions and resource allocation. This study applies advanced machine learning techniques to predict pedestrian crash severity using national hospitalization and mortality data collected from 2011 to 2021. The analysis focuses on addressing class imbalance, a common issue in injury data by evaluating the impact of several data balancing methods, including SMOTE, ADASYN, Random Oversampling (ROS), and Threshold Moving. We implement and compare four supervised learning algorithms: Logistic Regression, Support Vector Machine (SVM), Decision Tree, and XGBoost. Model performance is assessed using F1-score and macro-accuracy, with a focus on the minority (fatality) class. Results show that XGBoost combined with Threshold Moving achieves the highest performance, yielding an F1-score of 72% for fatality classification and a macro-accuracy of 84%. Additionally, feature importance analysis using SHAP values reveals age, gender, road user type, and crash location as key predictors of injury severity. The study highlights the critical role of data balancing strategies in enhancing predictive accuracy for rare but high-impact outcomes. 
These findings provide actionable insights for transport authorities and policymakers seeking to develop data-driven, targeted safety measures to protect pedestrians and reduce the severity of crash outcomes.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"22 ","pages":"Article 100792"},"PeriodicalIF":4.9,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0