
Latest articles in Frontiers in Artificial Intelligence

Artificial intelligence and machine learning applications for cultured meat.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-24 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1424012
Michael E Todhunter, Sheikh Jubair, Ruchika Verma, Rikard Saqe, Kevin Shen, Breanna Duffy

Cultured meat has the potential to provide a complementary meat industry with reduced environmental, ethical, and health impacts. However, major technological challenges remain that require time- and resource-intensive research and development efforts. Machine learning has the potential to accelerate cultured meat technology by streamlining experiments, predicting optimal results, and reducing experimentation time and resources. However, the use of machine learning in cultured meat is in its infancy. This review covers the work available to date on the use of machine learning in cultured meat and explores future possibilities. We address four major areas of cultured meat research and development: establishing cell lines, cell culture media design, microscopy and image analysis, and bioprocessing and food processing optimization. In addition, we have included a survey of datasets relevant to cultured meat research. This review aims to provide the foundation necessary for both cultured meat and machine learning scientists to identify research opportunities at the intersection between cultured meat and machine learning.

Citations: 0
Towards enhanced creativity in fashion: integrating generative models with hybrid intelligence.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1460217
Alexander Ryjov, Vagan Kazaryan, Andrey Golub, Alina Egorova

Introduction: This study explores the role and potential of large language models (LLMs) and generative intelligence in the fashion industry. These technologies are reshaping traditional methods of design, production, and retail, leading to innovation, product personalization, and enhanced customer interaction.

Methods: Our research analyzes the current applications and limitations of LLMs in fashion, identifying challenges such as the need for better spatial understanding and design detail processing. We propose a hybrid intelligence approach to address these issues.

Results: We find that while LLMs offer significant potential, their integration into fashion workflows requires improvements in understanding spatial parameters and creating tools for iterative design.

Discussion: Future research should focus on overcoming these limitations and developing hybrid intelligence solutions to maximize the potential of LLMs in the fashion industry.

Citations: 0
Image restoration in frequency space using complex-valued CNNs.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1353873
Zafran Hussain Shah, Marcel Müller, Wolfgang Hübner, Henning Ortkrass, Barbara Hammer, Thomas Huser, Wolfram Schenck

Real-valued convolutional neural networks (RV-CNNs) in the spatial domain have outperformed classical approaches in many image restoration tasks such as image denoising and super-resolution. Fourier analysis of the results produced by these spatial domain models reveals the limitations of these models in properly processing the full frequency spectrum. This lack of complete spectral information can result in missing textural and structural elements. To address this limitation, we explore the potential of complex-valued convolutional neural networks (CV-CNNs) for image restoration tasks. CV-CNNs have shown remarkable performance in tasks such as image classification and segmentation. However, CV-CNNs for image restoration problems in the frequency domain have not been fully investigated to address the aforementioned issues. Here, we propose several novel CV-CNN-based models equipped with complex-valued attention gates for image denoising and super-resolution in the frequency domain. We also show that our CV-CNN-based models outperform their real-valued counterparts when denoising super-resolution structured illumination microscopy (SR-SIM) and conventional image datasets. Furthermore, the experimental results show that our proposed CV-CNN-based models preserve the frequency spectrum better than their real-valued counterparts in the denoising task. Based on these findings, we conclude that CV-CNN-based methods provide a plausible and beneficial deep learning approach for image restoration in the frequency domain.
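To illustrate the core idea (this is a minimal sketch, not the authors' architecture), the simplest complex-valued operation in frequency space is an elementwise multiplication of the image spectrum by a complex weight mask; by the convolution theorem this is equivalent to a spatial convolution with the inverse transform of the mask. The function name and the fixed weight mask are illustrative stand-ins for a learnable parameter:

```python
import numpy as np

def frequency_domain_filter(image, complex_weights):
    """Apply a complex-valued elementwise filter in frequency space:
    FFT the image, multiply by a complex weight mask (the simplest
    complex-valued layer), and transform back to the spatial domain."""
    spectrum = np.fft.fft2(image)          # complex-valued spectrum
    filtered = spectrum * complex_weights  # complex-valued operation
    return np.real(np.fft.ifft2(filtered))

# an all-ones mask is the identity filter and reconstructs the input
img = np.arange(16, dtype=float).reshape(4, 4)
out = frequency_domain_filter(img, np.ones((4, 4), dtype=complex))
```

A trained CV-CNN would replace the fixed mask with learned complex parameters and stack such layers with complex nonlinearities.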

Citations: 0
A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1381921
Ephrem Tibebe Mekonnen, Luca Longo, Pierpaolo Dondio

Time series classification is a challenging research area where machine learning and deep learning techniques have shown remarkable performance. However, often, these are seen as black boxes due to their minimal interpretability. On the one hand, there is a plethora of eXplainable AI (XAI) methods designed to elucidate the functioning of models trained on image and tabular data. On the other hand, adapting these methods to explain deep learning-based time series classifiers may not be straightforward due to the temporal nature of time series data. This research proposes a novel global post-hoc explainable method for unearthing the key time steps behind the inferences made by deep learning-based time series classifiers. This novel approach generates a decision tree graph (a specific set of rules) that can be seen as an explanation, potentially enhancing interpretability. The methodology involves two major phases: (1) training and evaluating deep-learning-based time series classification models, and (2) extracting parameterized primitive events, such as increasing, decreasing, local max and local min, from each instance of the evaluation set and clustering such events to extract prototypical ones. These prototypical primitive events are then used as input to a decision-tree classifier trained to fit the model predictions of the test set rather than the ground truth data. Experiments were conducted on diverse real-world datasets sourced from the UCR archive, employing metrics such as accuracy, fidelity, robustness, number of nodes, and depth of the extracted rules. The findings indicate that this global post-hoc method can improve the global interpretability of complex time series classification models.
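The first extraction phase can be pictured with a small sketch (my own simplification, not the paper's code): scan a univariate series and emit the parameterized primitive events named in the abstract, i.e. increasing/decreasing steps with their magnitude and interior local extrema with their position and value:

```python
def extract_primitive_events(series):
    """Emit parameterized primitive events from a univariate series:
    ('increasing'|'decreasing', start, end, delta) for each step, and
    ('local_max'|'local_min', index, value) for interior extrema."""
    events = []
    n = len(series)
    # pairwise trend events with their magnitude as the parameter
    for i in range(n - 1):
        delta = series[i + 1] - series[i]
        if delta > 0:
            events.append(("increasing", i, i + 1, delta))
        elif delta < 0:
            events.append(("decreasing", i, i + 1, delta))
    # interior local extrema
    for i in range(1, n - 1):
        if series[i] > series[i - 1] and series[i] > series[i + 1]:
            events.append(("local_max", i, series[i]))
        elif series[i] < series[i - 1] and series[i] < series[i + 1]:
            events.append(("local_min", i, series[i]))
    return events

evts = extract_primitive_events([1, 3, 2, 4])
```

In the full method, such events would then be clustered into prototypes and fed to a decision-tree classifier fit to the black-box model's predictions.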

Citations: 0
MLGCN: an ultra efficient graph convolutional neural model for 3D point cloud analysis.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1439340
Mohammad Khodadad, Ali Shiraee Kasmaee, Hamidreza Mahyar, Morteza Rezanejad

With the rapid advancement of 3D acquisition technologies, 3D sensors such as LiDARs, 3D scanners, and RGB-D cameras have become increasingly accessible and cost-effective. These sensors generate 3D point cloud data that require efficient algorithms for tasks such as 3D model classification and segmentation. While deep learning techniques have proven effective in these areas, existing models often rely on complex architectures, leading to high computational costs that are impractical for real-time applications like augmented reality and robotics. In this work, we propose the Multi-level Graph Convolutional Neural Network (MLGCN), an ultra-efficient model for 3D point cloud analysis. The MLGCN model utilizes shallow Graph Neural Network (GNN) blocks to extract features at various spatial locality levels, leveraging precomputed KNN graphs shared across GCN blocks. This approach significantly reduces computational overhead and memory usage, making the model well-suited for deployment on low-memory and low-CPU devices. Despite its efficiency, MLGCN achieves competitive performance in object classification and part segmentation tasks, demonstrating results comparable to state-of-the-art models while requiring up to a thousand times fewer floating-point operations and significantly less storage. The contributions of this paper include the introduction of a lightweight, multi-branch graph-based network for 3D shape analysis, the demonstration of the model's efficiency in both computation and storage, and a thorough theoretical and experimental evaluation of the model's performance. We also conduct ablation studies to assess the impact of different branches within the model, providing valuable insights into the role of specific components.
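Precomputed KNN graphs shared across GCN blocks are central to MLGCN's efficiency. As a hedged illustration (a brute-force NumPy sketch under my own naming, not the authors' implementation), such a graph can be built once per point cloud and reused:

```python
import numpy as np

def knn_graph(points, k):
    """Precompute a k-nearest-neighbor graph over an (N, 3) point cloud:
    returns an (N, k) index array of each point's k nearest neighbors
    (self excluded), from brute-force pairwise Euclidean distances."""
    diff = points[:, None, :] - points[None, :, :]  # (N, N, 3) offsets
    dist = np.linalg.norm(diff, axis=-1)            # (N, N) distances
    np.fill_diagonal(dist, np.inf)                  # exclude self-edges
    return np.argsort(dist, axis=1)[:, :k]          # (N, k) neighbor ids

pts = np.array([[0, 0, 0], [1, 0, 0], [3, 0, 0], [10, 0, 0]], dtype=float)
graph = knn_graph(pts, 2)
```

Each GNN block can then gather neighbor features through this shared index array instead of recomputing the graph, which is where the reported savings in computation come from.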

Citations: 0
David vs. Goliath: comparing conventional machine learning and a large language model for assessing students' concept use in a physics problem.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1408817
Fabian Kieser, Paul Tschisgale, Sophia Rauh, Xiaoyu Bai, Holger Maus, Stefan Petersen, Manfred Stede, Knut Neumann, Peter Wulff

Large language models have been shown to excel in many different tasks across disciplines and research sites. They provide novel opportunities to enhance educational research and instruction in different ways such as assessment. However, these methods have also been shown to have fundamental limitations. These relate, among others, to hallucinating knowledge, explainability of model decisions, and resource expenditure. As such, more conventional machine learning algorithms might be more convenient for specific research problems because they allow researchers more control over their research. Yet, the circumstances in which either conventional machine learning or large language models are preferable choices are not well understood. This study seeks to answer the question to what extent either conventional machine learning algorithms or a recently advanced large language model performs better in assessing students' concept use in a physics problem-solving task. We found that conventional machine learning algorithms in combination outperformed the large language model. Model decisions were then analyzed via closer examination of the models' classifications. We conclude that in specific contexts, conventional machine learning can supplement large language models, especially when labeled data is available.

Citations: 0
Investigating the contribution of image time series observations to cauliflower harvest-readiness prediction.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1416323
Jana Kierdorf, Timo Tjarden Stomberg, Lukas Drees, Uwe Rascher, Ribana Roscher

Cauliflower cultivation is subject to high-quality control criteria during sales, which underlines the importance of accurate harvest timing. Using time series data for plant phenotyping can provide insights into the dynamic development of cauliflower and allow more accurate predictions of when the crop is ready for harvest than single-time observations. However, data acquisition on a daily or weekly basis is resource-intensive, making the selection of acquisition days highly important. We investigate which data acquisition days and development stages positively affect the model accuracy to get insights into prediction-relevant observation days and aid future data acquisition planning. We analyze harvest-readiness using the cauliflower image time series of the GrowliFlower dataset. We use an adjusted ResNet18 classification model, including positional encoding of the data acquisition dates to add implicit information about development. The explainable machine learning approach GroupSHAP analyzes time points' contributions. Time points with the lowest mean absolute contribution are excluded from the time series to determine their effect on model accuracy. Using image time series rather than single time points, we achieve an increase in accuracy of 4%. GroupSHAP allows the selection of time points that positively affect the model accuracy. By using seven selected time points instead of all 11, the accuracy improves by an additional 4%, resulting in an overall accuracy of 89.3%. The selection of time points may therefore lead to a reduction in data collection in the future.
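One ingredient above, positional encoding of acquisition dates, can be sketched as follows, assuming the standard transformer-style sinusoidal formulation; the function name and day-offset inputs are illustrative, not taken from the paper:

```python
import numpy as np

def date_positional_encoding(day_offsets, dim):
    """Sinusoidal positional encoding of acquisition-day offsets:
    pe[t, 2i]   = sin(t / 10000^(2i/dim))
    pe[t, 2i+1] = cos(t / 10000^(2i/dim))
    Returns a (T, dim) array, one row per acquisition day."""
    t = np.asarray(day_offsets, dtype=float)[:, None]        # (T, 1)
    freqs = np.power(10000.0, -np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = t * freqs                                       # (T, dim/2)
    pe = np.zeros((len(day_offsets), dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# e.g. three acquisitions at days 0, 7 and 14 after planting
pe = date_positional_encoding([0, 7, 14], 8)
```

Adding such an encoding lets the classifier distinguish an image taken early in development from a visually similar one taken later.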

Citations: 0
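The GroupSHAP-based pruning described in the cauliflower abstract above reduces to a simple ranking step: score each acquisition day by its mean absolute contribution and keep only the top-scoring days. A minimal sketch of that selection step (the contribution scores and the choice of seven days are illustrative stand-ins, not values from the paper):

```python
import numpy as np

# Hypothetical per-timepoint mean |SHAP| contributions for 11 acquisition days
# (values are made up for illustration only).
contrib = np.array([0.02, 0.11, 0.05, 0.14, 0.01, 0.09, 0.13, 0.03, 0.10, 0.12, 0.08])

def select_timepoints(scores, keep):
    """Return the indices of the `keep` highest-contribution time points, in order."""
    order = np.argsort(scores)[::-1][:keep]  # highest contributions first
    return np.sort(order)                    # restore chronological order

kept = select_timepoints(contrib, keep=7)
print(kept.tolist())  # → [1, 3, 5, 6, 8, 9, 10]
```

The retained indices can then drive future acquisition planning, skipping the low-contribution days.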
Uncertainty quantification in multi-class image classification using chest X-ray images of COVID-19 and pneumonia.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-18 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1410841
Albert Whata, Katlego Dibeco, Kudakwashe Madzima, Ibidun Obagbuwa

This paper investigates uncertainty quantification (UQ) techniques in multi-class classification of chest X-ray images (COVID-19, Pneumonia, and Normal). We evaluate Bayesian Neural Networks (BNN) and Deep Neural Network with UQ (DNN with UQ) techniques, including Monte Carlo dropout, Ensemble Bayesian Neural Network (EBNN), and Ensemble Monte Carlo (EMC) dropout, across different evaluation metrics. Our analysis reveals that DNNs with UQ, especially EBNN and EMC dropout, consistently outperform BNNs. For example, in Class 0 vs. All, EBNN achieved a UAcc of 92.6%, a UAUC-ROC of 95.0%, and a Brier Score of 0.157, significantly surpassing BNN's performance. Similarly, EMC dropout excelled in Class 1 vs. All with a UAcc of 83.5%, a UAUC-ROC of 95.8%, and a Brier Score of 0.165. These advanced models demonstrated higher accuracy, better discriminative capability, and more accurate probabilistic predictions. Our findings highlight the efficacy of DNNs with UQ in enhancing model reliability and interpretability, making them highly suitable for critical healthcare applications like chest X-ray image classification.
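The quantities this abstract reports — a predictive mean over stochastic passes and a multi-class Brier score — can be sketched with plain NumPy. This is an illustrative toy, not the authors' pipeline: the simulated softmax outputs stand in for T Monte Carlo dropout passes of a real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for T stochastic forward passes of a dropout network:
# a (T, n_samples, n_classes) tensor of class probabilities.
T, n, k = 50, 4, 3
logits = rng.normal(size=(T, n, k)) + np.array([2.0, 0.0, 0.0])  # bias toward class 0
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

p_mean = probs.mean(axis=0)                        # predictive mean over the T passes
entropy = -(p_mean * np.log(p_mean)).sum(axis=-1)  # predictive uncertainty per sample

y_true = np.zeros(n, dtype=int)  # suppose every sample is truly class 0
onehot = np.eye(k)[y_true]
# Multi-class Brier score: mean squared distance between the predictive
# distribution and the one-hot ground truth (0 = perfect, 2 = worst).
brier = float(np.mean(((p_mean - onehot) ** 2).sum(axis=-1)))

print(p_mean.shape, round(brier, 3))
```

Lower entropy and a lower Brier score together indicate confident, well-calibrated predictions, which is the behaviour the paper attributes to the ensemble variants.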

{"title":"Uncertainty quantification in multi-class image classification using chest X-ray images of COVID-19 and pneumonia.","authors":"Albert Whata, Katlego Dibeco, Kudakwashe Madzima, Ibidun Obagbuwa","doi":"10.3389/frai.2024.1410841","DOIUrl":"10.3389/frai.2024.1410841","url":null,"abstract":"<p><p>This paper investigates uncertainty quantification (UQ) techniques in multi-class classification of chest X-ray images (COVID-19, Pneumonia, and Normal). We evaluate Bayesian Neural Networks (BNN) and the Deep Neural Network with UQ (DNN with UQ) techniques, including Monte Carlo dropout, Ensemble Bayesian Neural Network (EBNN), Ensemble Monte Carlo (EMC) dropout, across different evaluation metrics. Our analysis reveals that DNN with UQ, especially EBNN and EMC dropout, consistently outperform BNNs. For example, in Class 0 vs. All, EBNN achieved a <i>U</i>Acc of 92.6%, <i>U</i>AUC-ROC of 95.0%, and a Brier Score of 0.157, significantly surpassing BNN's performance. Similarly, EMC Dropout excelled in Class 1 vs. All with a <i>U</i>Acc of 83.5%, <i>U</i>AUC-ROC of 95.8%, and a Brier Score of 0.165. These advanced models demonstrated higher accuracy, better discriminative capability, and more accurate probabilistic predictions. 
Our findings highlight the efficacy of DNN with UQ in enhancing model reliability and interpretability, making them highly suitable for critical healthcare applications like chest X-ray image classification.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1410841"},"PeriodicalIF":3.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11445153/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evolving intellectual property landscape for AI-driven innovations in the biomedical sector: opportunities in stable IP regime for shared success.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-17 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1372161
Abhijit Poddar, S R Rao

Artificial Intelligence (AI) has revolutionized the biomedical sector in advanced diagnosis, treatment, and personalized medicine. While these AI-driven innovations promise vast benefits for patients and service providers, they also raise complex intellectual property (IP) challenges due to the inherent nature of AI technology. In this review, we discuss the multifaceted impact of AI on IP within the biomedical sector, exploring implications in areas like drug research and discovery, personalized medicine, and medical diagnostics. We dissect critical issues surrounding AI inventorship, patent and copyright protection for AI-generated works, data ownership, and licensing. To provide context, we analyze the current IP legislative landscape in the United States, EU, China, and India, highlighting convergences, divergences, and precedent-setting cases relevant to the biomedical sector. Recognizing the need for harmonization, we review current developments and discuss a way forward. We advocate for a collaborative approach, convening policymakers, clinicians, researchers, industry players, legal professionals, and patient advocates to navigate this dynamic landscape. Such collaboration will create a stable IP regime and unlock the full potential of AI for enhanced healthcare delivery and improved patient outcomes.

{"title":"Evolving intellectual property landscape for AI-driven innovations in the biomedical sector: opportunities in stable IP regime for shared success.","authors":"Abhijit Poddar, S R Rao","doi":"10.3389/frai.2024.1372161","DOIUrl":"10.3389/frai.2024.1372161","url":null,"abstract":"<p><p>Artificial Intelligence (AI) has revolutionized the biomedical sector in advanced diagnosis, treatment, and personalized medicine. While these AI-driven innovations promise vast benefits for patients and service providers, they also raise complex intellectual property (IP) challenges due to the inherent nature of AI technology. In this review, we discussed the multifaceted impact of AI on IP within the biomedical sector, exploring implications in areas like drug research and discovery, personalized medicine, and medical diagnostics. We dissect critical issues surrounding AI inventorship, patent and copyright protection for AI-generated works, data ownership, and licensing. To provide context, we analyzed the current IP legislative landscape in the United States, EU, China, and India, highlighting convergences, divergences, and precedent-setting cases relevant to the biomedical sector. Recognizing the need for harmonization, we reviewed current developments and discussed a way forward. We advocate for a collaborative approach, convening policymakers, clinicians, researchers, industry players, legal professionals, and patient advocates to navigate this dynamic landscape. 
It will create a stable IP regime and unlock the full potential of AI for enhanced healthcare delivery and improved patient outcomes.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1372161"},"PeriodicalIF":3.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442499/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142362210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing emotions in ChatGPT answers and human answers to the coding questions on Stack Overflow.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-16 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1393903
Somayeh Fatahi, Julita Vassileva, Chanchal K Roy

Introduction: Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and providing guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students.

Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses.

Results: Our analysis revealed that ChatGPT's answers are generally more positive compared to human responses. In contrast, human answers often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expressions between ChatGPT and human responses, particularly in the emotions of anger, disgust, and joy. Human responses displayed a broader emotional spectrum compared to ChatGPT, suggesting greater emotional variability among humans.

Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a more uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.
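The core comparison in the study above — per-emotion frequencies and the breadth of the emotional spectrum in each answer set — reduces to simple label counting once an emotion classifier has tagged each answer. A minimal sketch with invented labels (not the study's data or its classifier):

```python
from collections import Counter

# Illustrative emotion labels for matched answer pairs (made up for this sketch).
chatgpt = ["joy", "joy", "neutral", "joy", "neutral", "joy"]
human   = ["anger", "joy", "disgust", "neutral", "anger", "joy"]

def emotion_profile(labels):
    """Relative frequency of each emotion in a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return {emo: n / total for emo, n in counts.items()}

gpt_profile = emotion_profile(chatgpt)
hum_profile = emotion_profile(human)

# Breadth of the emotional spectrum = number of distinct emotions expressed.
print(len(gpt_profile), len(hum_profile))  # → 2 4
```

In this toy example the human answers span twice as many distinct emotions as the ChatGPT answers, mirroring the wider emotional range the paper reports for human responses.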

{"title":"Comparing emotions in ChatGPT answers and human answers to the coding questions on Stack Overflow.","authors":"Somayeh Fatahi, Julita Vassileva, Chanchal K Roy","doi":"10.3389/frai.2024.1393903","DOIUrl":"10.3389/frai.2024.1393903","url":null,"abstract":"<p><strong>Introduction: </strong>Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and providing guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students.</p><p><strong>Methods: </strong>To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses.</p><p><strong>Results: </strong>Our analysis revealed that ChatGPT's answers are generally more positive compared to human responses. In contrast, human answers often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expressions between ChatGPT and human responses, particularly in the emotions of anger, disgust, and joy. 
Human responses displayed a broader emotional spectrum compared to ChatGPT, suggesting greater emotional variability among humans.</p><p><strong>Discussion: </strong>The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a more uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1393903"},"PeriodicalIF":3.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11439875/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
Frontiers in Artificial Intelligence
Book学术
Literature exchange · Smart journal selection · Latest literature · Exchange guidelines · Contact us: info@booksci.cn
Book学术 provides a free academic resource search service that helps scholars in China and abroad retrieve Chinese- and English-language literature, and is committed to delivering the most convenient, high-quality user experience.
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 京ICP备2023020795号-1