
Latest publications in Ai Magazine

An actionable framework for AI-ready data
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1002/aaai.70054
Neil Majithia, Thomas Carey-Wilson, Elena Simperl, Nigel Shadbolt

Data is the foundation of AI. Poor-quality data drive up costs and can lead to hidden problems for AI models, especially in complex fields such as healthcare and manufacturing. Meanwhile, biased data negatively affect the performance of AI models, and untested evaluation datasets can result in false positives or overestimates of model accuracy. For data publishers to realize their true potential in supporting the AI ecosystem and its impacts, they should take measures to ensure that their datasets support AI practitioners' needs; in other words, their data should be made AI-ready. In this article, we present a framework for data publishers to follow to make their datasets AI-ready. The framework provides specific, actionable guidance based on previous work and experience at the Open Data Institute and augmented with insights from literature and discussions with a range of experts. We first define AI-ready data before briefly discussing a selection of frameworks in the literature and where they are insufficient. We then provide a visual snapshot of our framework for AI-ready data, and a subsequent in-depth discussion of its criteria. Finally, we demonstrate the usage of our framework with a number of example datasets. We conclude by discussing the further steps that should be taken for the entire open data ecosystem to be made AI-ready in order to realize its true potential in supporting an innovative future.
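A back-of-the-envelope sketch of what an "actionable" readiness check might look like in code. The criterion names below are invented placeholders, since the abstract does not enumerate the framework's actual criteria:

```python
from dataclasses import dataclass, field

# Hypothetical criteria for illustration only -- the article's actual
# framework criteria are not listed in the abstract.
CRITERIA = [
    "documented provenance",
    "machine-readable metadata",
    "declared licence",
    "published bias assessment",
]

@dataclass
class Dataset:
    name: str
    satisfied: set = field(default_factory=set)  # criteria this dataset meets

def readiness_report(ds: Dataset) -> dict:
    """Score a dataset against the checklist and list what is still missing."""
    passed = [c for c in CRITERIA if c in ds.satisfied]
    return {
        "dataset": ds.name,
        "passed": passed,
        "missing": [c for c in CRITERIA if c not in ds.satisfied],
        "score": len(passed) / len(CRITERIA),
    }
```

The point of such a structure is that a publisher gets a concrete list of gaps to close rather than an abstract exhortation to be "AI-ready".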

Citations: 0
Artificial intelligence for web development: Perspectives from the industry
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-08 · DOI: 10.1002/aaai.70051
Pyry Pohjalainen, Juho Vepsäläinen

As a field, web development is roughly 30 years old, and during this period, it has been transformed several times already as it has moved from static websites to dynamic web applications. Now, with the introduction of Artificial Intelligence (AI), the field is again at the cusp of a transformation as the latest AI tools might change how to develop for the web yet again. The objective of this study is to look into this phenomenon and understand how AI is changing web development. To achieve this task, we chose to use the sequential qualitative–quantitative design method that combines interviews with a survey to validate and expand our findings from the interviews. We found that AI is used by web developers to increase their development efficiency, as even the current tools are easy to use and access, although they come with several minor downsides, including AI not being able to understand complex logic, the need for validation of AI output, and suggested code that could potentially lead to security issues. While there are clear benefits to using AI tools for web development and AI proficiency is a vital skill for web developers, there are still open questions related to the quality of code produced by AI tools.

Citations: 0
AI-driven perception management and political soft power: Insights from expert interviews
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-08 · DOI: 10.1002/aaai.70052
Özkul Haraç, Ayhan Dolunay

This study explores the role of artificial intelligence (AI) in perception management as an emerging tool of political soft power. Drawing on the theoretical frameworks of social psychology, strategic communication, and political communication, the research investigates how AI-assisted strategies influence public perception, image, and trust in the context of modern statecraft. The study adopts a qualitative design based on semi-structured interviews with 16 experts—eight from psychology and eight from communication fields—selected through snowball sampling. Data were analyzed using qualitative content analysis to identify recurring patterns and thematic structures. The findings reveal four central themes: (1) AI enhances efficiency and precision in perception campaigns, (2) trust and credibility remain critical yet vulnerable dimensions, (3) ethical and governance dilemmas emerge in AI-mediated communication, and (4) human oversight continues to be essential for maintaining legitimacy. The results suggest that while AI strengthens states’ capacity for strategic influence, overreliance without transparency may undermine the very trust it seeks to build. The study contributes to soft power and communication scholarship by providing expert-based evidence on the psychological and strategic mechanisms of AI-driven perception management. Policy recommendations are offered to promote transparency, accountability, and ethical oversight in AI-enabled diplomatic practices.

Citations: 0
Datasheets for machine learning sensors
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-31 · DOI: 10.1002/aaai.70050
Matthew Stewart, Yuke Zhang, Pete Warden, Yasmine Omri, Shvetank Prakash, Jacob Huckelberry, Joao Henrique Santos, Shawn Hymel, Benjamin Yeager Brown, Jim MacArthur, Nat Jeffries, Emanuel Moss, Mona Sloane, Brian Plancher, Vijay Janapa Reddi

Machine learning (ML) is becoming prevalent in embedded AI sensing systems. These “ML sensors” enable context-sensitive, real-time data collection and decision-making across diverse applications ranging from anomaly detection in industrial settings to wildlife tracking for conservation efforts. As such, there is a need to provide transparency in the operation of such ML-enabled sensing systems through comprehensive documentation. This is needed to enable their reproducibility, to address new compliance and auditing regimes mandated in regulation and industry-specific policy, and to verify and validate the responsible nature of their operation. To address this gap, we introduce the datasheet for ML sensors framework. We provide a comprehensive template, collaboratively developed in academia—industry partnerships, that captures the distinct attributes of ML sensors, including hardware specifications, ML model and dataset characteristics, end-to-end performance metrics, and environmental impacts. Our framework addresses the continuous streaming nature of sensor data, real-time processing requirements, and embeds benchmarking methodologies that reflect real-world deployment conditions, ensuring practical viability. Aligned with the FAIR principles (Findability, Accessibility, Interoperability, and Reusability), our approach enhances the transparency and reusability of ML sensor documentation across academic, industrial, and regulatory domains. To show the application of our approach, we present two datasheets: the first for an open-source ML sensor designed in-house and the second for a commercial ML sensor developed by industry collaborators, both performing computer vision-based person detection.
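The datasheet idea lends itself to a structured record. The sketch below groups fields roughly along the attribute categories the abstract names (hardware, model and dataset, end-to-end performance, environmental impact); all field names are invented for illustration and are not the paper's actual template:

```python
from dataclasses import dataclass, asdict

@dataclass
class MLSensorDatasheet:
    """Toy datasheet record; field groupings follow the abstract's
    categories, but the concrete keys inside each dict are hypothetical."""
    sensor_name: str
    hardware: dict      # e.g. MCU, camera module, power draw
    model: dict         # e.g. architecture, training-data summary
    performance: dict   # e.g. end-to-end latency, detection accuracy
    environment: dict   # e.g. estimated energy per inference

    def to_record(self) -> dict:
        """Flatten to a plain dict suitable for publishing with the sensor."""
        return asdict(self)

sheet = MLSensorDatasheet(
    sensor_name="person-detector",
    hardware={"mcu": "Cortex-M7", "power_mw": 120},
    model={"arch": "MobileNetV2", "train_set": "in-house, 50k images"},
    performance={"latency_ms": 95, "accuracy": 0.91},
    environment={"mj_per_inference": 2.5},
)
```

A fixed schema like this is what makes end-to-end metrics comparable across sensors, which is the transparency goal the abstract describes.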

Citations: 0
Training robots with natural and lightweight human feedback
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-09 · DOI: 10.1002/aaai.70037
Erdem Bıyık

Generalist robot models promise broad applicability across domains but currently require extensive expert demonstrations for task specialization, which is a costly and impractical barrier for real-world deployment. In this article, which summarizes the author's presentation in the New Faculty Highlights Track of the 39th annual AAAI Conference on Artificial Intelligence, we present algorithms that enable non-expert users to adapt and continually improve robot policies through natural and lightweight feedback modalities, such as preference comparisons, rankings, ratings, natural language, and users' own demonstrations, combining them with active learning strategies to maximize data-efficiency. We further introduce methods for leveraging real-time human interventions as rich training signals, modeling both their timing and absence to refine policies continually. Our approaches achieve substantial gains in sample-efficiency, adaptability, and user-friendliness, demonstrated across simulated and real-world robotic tasks. By aligning robot learning with how humans naturally teach, we hope to move toward autonomous systems that are more personalized, capable, and deployable in everyday environments.
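Preference comparisons, one of the feedback modalities listed above, are commonly modeled with a Bradley–Terry likelihood over a learned reward. The following generic sketch (a standard formulation in preference-based reward learning, not the author's specific algorithm) takes one gradient step on a linear reward from a single "A preferred over B" label:

```python
import math

def preference_grad(w, feat_a, feat_b):
    """Gradient of the Bradley-Terry log-likelihood that trajectory A is
    preferred over B, for a linear reward r(x) = w . phi(x)."""
    ra = sum(wi * fi for wi, fi in zip(w, feat_a))
    rb = sum(wi * fi for wi, fi in zip(w, feat_b))
    p_a = 1.0 / (1.0 + math.exp(rb - ra))  # P(A preferred | w)
    # d/dw log P(A) = (1 - p_a) * (phi_a - phi_b)
    return [(1.0 - p_a) * (fa - fb) for fa, fb in zip(feat_a, feat_b)]

def update(w, feat_a, feat_b, lr=0.1):
    """One gradient-ascent step on the preference log-likelihood."""
    g = preference_grad(w, feat_a, feat_b)
    return [wi + lr * gi for wi, gi in zip(w, g)]
```

Active-learning variants of this idea pick the comparison query whose answer is most informative about `w`, which is how such methods stay data-efficient.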

Citations: 0
Knowledge Engineering for Open Science: Building and Deploying Knowledge Bases for Metadata Standards
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-31 · DOI: 10.1002/aaai.70048
Mark A. Musen, Martin J. O'Connor, Josef Hardi, Marcos Martínez-Romero

For more than a decade, scientists have been striving to make their datasets available in open repositories, with the goal that they be findable, accessible, interoperable, and reusable (FAIR). Although it is hard for most investigators to remember all the “guiding principles” associated with FAIR data, there is one overarching requirement: The data need to be annotated with “rich,” discipline-specific, standardized metadata that can enable third parties to understand who performed the experiment, who or what the subjects were, what the experimental conditions were, and what the results appear to show. Most areas of science lack standards for such metadata and, when such standards exist, it can be difficult for investigators or data curators to apply them. The Center for Expanded Data Annotation and Retrieval (CEDAR) builds technology that enables scientists to encode descriptive metadata standards as templates that enumerate the attributes of different kinds of experiments and that link those attributes to ontologies or value sets that may supply controlled values for those attributes. These metadata templates capture the preferences of groups of investigators regarding how their data should be described and what a third party needs to know to make sense of their datasets. CEDAR templates describing community metadata preferences have been used to standardize metadata for a variety of scientific consortia. They have been used as the basis for data-annotation systems that acquire metadata through Web forms or through spreadsheets, and they can help correct metadata to ensure adherence to standards. Like the declarative knowledge bases that underpinned intelligent systems decades ago, CEDAR templates capture the knowledge of a community of practice in symbolic form, and they allow that knowledge to be applied in a variety of settings. They provide a mechanism for scientific communities to create shared metadata standards and to encode their preferences for the application of those standards, and for deploying those standards in a range of intelligent systems to promote open science.
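The template idea can be pictured with a toy validator: a template enumerates an experiment's attributes and ties some of them to controlled value sets, and a metadata record either conforms or does not. The attributes and values below are invented for illustration; real CEDAR templates are richer JSON documents linked to ontologies:

```python
# Toy template: attribute -> spec, where "controlled" is either a set of
# permitted values or None for free text. All names here are hypothetical.
TEMPLATE = {
    "organism": {"controlled": {"human", "mouse", "zebrafish"}},
    "assay_type": {"controlled": {"RNA-seq", "ChIP-seq"}},
    "tissue": {"controlled": None},  # free text allowed
}

def validate(metadata: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for attr, spec in TEMPLATE.items():
        if attr not in metadata:
            problems.append(f"missing attribute: {attr}")
        elif spec["controlled"] and metadata[attr] not in spec["controlled"]:
            problems.append(f"{attr}: '{metadata[attr]}' not in controlled set")
    return problems
```

Because the template is data rather than code, the same community standard can drive Web forms, spreadsheet checkers, and repository-side validation alike.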

Citations: 0
AI for social science: A sociology PhD candidate's autoethnography on how LLMs are changing research work
IF 3.2 · CAS Tier 4, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-16 · DOI: 10.1002/aaai.70046
Shuo Wang

Will AI replace social scientists? The real issue concerns reshaping rather than replacement. Confronting the integration of large language models (LLMs) into academic training establishes “prompt engineering” as the core interface for collaboration, defining it as a method to translate sociological thinking into precise instructions. LLMs are becoming essential partners across the research spectrum. They transform qualitative analysis from a solitary craft into a dialogical coding process and assist in theoretical localization and the construction of localized measurement scales. Beyond text analysis, they provide a low-cost virtual testbed for experimental design through “silicon samples” and enable the deduction of complex social interactions via “generative agents.” In the quantitative realm, they act as translators connecting research intentions with statistical code. Ultimately, the core challenge facing researchers is not technical. It lies in proactively cultivating a critical “literacy for human-AI collaboration” to master this paradigm shift.
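What the essay calls "prompt engineering" — translating sociological thinking into precise instructions — can be pictured as a reusable template that turns a qualitative coding task into an explicit instruction. The wording below is a hypothetical example, not drawn from the article:

```python
# Hypothetical prompt template for dialogical qualitative coding with an
# LLM; the phrasing is illustrative, not taken from the source.
CODING_PROMPT = (
    "You are assisting with qualitative coding of interview data.\n"
    "Codebook: {codebook}\n"
    "Assign exactly one code from the codebook to the excerpt below, "
    "and quote the phrase that justifies your choice.\n"
    "Excerpt: {excerpt}\n"
)

def build_prompt(codebook: list, excerpt: str) -> str:
    """Fill the template with a concrete codebook and interview excerpt."""
    return CODING_PROMPT.format(codebook=", ".join(codebook), excerpt=excerpt)
```

Keeping the codebook explicit in the prompt is what makes the exchange "dialogical": the researcher's coding scheme, not the model's priors, frames each decision.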

Citations: 0
The ETHICAL Protocol for Responsible Use of Generative AI for Research Purposes in Higher Education
IF 3.2 CAS Quartile 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-12 DOI: 10.1002/aaai.70047
Ahmed Alduais, Saba Qadhi, Youmen Chaaban, Majeda Khraisheh

Generative AI's growing use in higher education research requires strong protocols for responsible use. This need arises from the potential for misuse and the current uncertainty around ethical concerns and intellectual property. The lack of clear rules about openness in AI use, along with the “black box” nature of many AI systems, raises worries about reproducibility and the possibility of biased or fake results. This paper focuses specifically on generative AI tools (e.g., LLMs like ChatGPT, research-specific platforms like Elicit/SciSpace). The paper presents the ETHICAL protocol (i.e., Establish your purpose, Thoroughly explore options, Harness the appropriate tool, Inspect and verify output, Cite and reference accurately, Acknowledge AI usage transparently, and Look over publisher's guidelines), a detailed guide designed to direct researchers in the ethical and responsible inclusion of generative AI in their work. The protocol was created through a multi-step process, including a scientometric review of current trends, a systematic review of researcher experiences, and a policy analysis of 74 documents from various stakeholders (authorities, universities, publishers, and publication manuals). This analysis shaped the creation of a seven-heading, nine-item checklist covering key aspects of responsible AI use, from setting clear research goals to checking outputs and openly acknowledging AI help. The ETHICAL protocol gives practical examples and detailed explanations for each item, highlighting the importance of AI literacy and careful choice of suitable tools. It also stresses the vital need for checking AI-generated content to lessen the risk of errors and made-up information (“hallucinations”). The resulting protocol offers a practical and easy-to-use guide for researchers, encouraging responsible AI practices and upholding academic integrity. The ETHICAL protocol offers a helpful tool for managing the complex area of AI in research, ultimately leading to more open, reliable, and ethically sound scholarly work. Its broad acceptance could greatly improve the responsible use of AI in higher education, building trust and furthering knowledge growth.
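The protocol's seven steps lend themselves to a simple checklist encoding. A minimal Python sketch follows; the `ChecklistItem` structure, field names, and confirmation questions are illustrative assumptions of mine, not part of the published protocol (only the seven step names come from the abstract):

```python
# A machine-readable sketch of the seven ETHICAL steps summarized in the
# abstract above. The ChecklistItem structure and the question wording are
# illustrative assumptions, not part of the published protocol.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    step: str          # the ETHICAL step this item belongs to
    question: str      # what the researcher should confirm
    done: bool = False

ETHICAL_STEPS = [
    ChecklistItem("Establish your purpose", "Is the research goal for using AI clearly stated?"),
    ChecklistItem("Thoroughly explore options", "Were alternative tools compared before choosing one?"),
    ChecklistItem("Harness the appropriate tool", "Is the chosen tool suited to this task?"),
    ChecklistItem("Inspect and verify output", "Has AI output been checked for errors and hallucinations?"),
    ChecklistItem("Cite and reference accurately", "Are AI-suggested citations verified and correctly formatted?"),
    ChecklistItem("Acknowledge AI usage transparently", "Is AI assistance disclosed in the manuscript?"),
    ChecklistItem("Look over publisher's guidelines", "Does the disclosure meet the target venue's AI policy?"),
]

def outstanding(items):
    """Return the steps not yet confirmed before submission."""
    return [item.step for item in items if not item.done]

# The step initials spell out the protocol's name.
acronym = "".join(item.step[0] for item in ETHICAL_STEPS)
print(acronym)  # ETHICAL
```

A reviewer or submission script could call `outstanding(ETHICAL_STEPS)` to list which steps still lack confirmation before a manuscript goes out.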

Citations: 0
How Proportional Representation Can Shape Artificial Intelligence
IF 3.2 CAS Quartile 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-03 DOI: 10.1002/aaai.70044
Evi Micha

Proportional representation is a foundational principle in social choice theory, ensuring that groups influence collective decisions in proportion to their size. While it has traditionally been studied in the context of political elections, recent work in computational social choice has broadened its scope to a variety of voting frameworks. This article showcases how proportional representation can be formalized and applied beyond these frameworks, spotlighting AI domains where it naturally takes shape. In particular, we focus on two such domains: clustering and AI alignment. In clustering, proportionality ensures that sufficiently large and cohesive groups of data points or agents are adequately represented in the selection of cluster centers or group assignments, in both centroid-based and noncentroid-based paradigms. In AI alignment, particularly in reinforcement learning from human feedback (RLHF), proportionality provides a principled framework for aggregating heterogeneous preferences by designing committees of reward functions that reflect annotators' viewpoints in proportion to their prevalence. We also discuss additional promising applications, including client selection in federated learning and forming committees of pre-trained models in meta-learning, and argue that incorporating proportional representation into AI systems provides a mathematically rigorous foundation for aligning algorithmic outcomes with the breadth of human viewpoints.
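The clustering notion of proportionality can be audited directly: under one standard formalization from the proportionally fair clustering literature, a clustering with k centers over n points violates proportionality if some coalition of at least ⌈n/k⌉ points would all be strictly closer to a single alternative candidate center than to every current center. The brute-force sketch below illustrates that check; the function names and toy data are my own, and this formalization is not necessarily the exact definition used in the article:

```python
# Brute-force audit of proportional fairness in centroid clustering.
# A clustering with k centers violates proportionality if a coalition of
# at least ceil(n/k) points would ALL be strictly closer to one alternative
# candidate center than to every current center. Illustrative sketch only.
import math

def blocking_coalition(points, centers, candidates):
    """Return (candidate, coalition) witnessing a proportionality violation,
    or None if no candidate attracts a large enough blocking coalition."""
    n, k = len(points), len(centers)
    threshold = math.ceil(n / k)  # minimum coalition size with a claim to a center
    for y in candidates:
        # Points strictly closer to y than to every current center.
        coalition = [p for p in points
                     if math.dist(p, y) < min(math.dist(p, c) for c in centers)]
        if len(coalition) >= threshold:
            return y, coalition
    return None

# Toy example: six points in two tight groups.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]

# Both centers serve the left group, so the right group (3 >= ceil(6/2)
# points) can block by proposing a center of its own.
bad_centers = [(0, 0), (0, 1)]
print(blocking_coalition(points, bad_centers, candidates=points) is not None)  # True

# One center per group: no blocking coalition exists.
good_centers = [(0, 0), (10, 10)]
print(blocking_coalition(points, good_centers, candidates=points) is None)  # True
```

Restricting `candidates` to the data points themselves, as above, is a common simplification; richer candidate sets make the audit stricter but more expensive.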

Citations: 0
Human at the Center: A Framework for Human-Driven AI Development
IF 3.2 CAS Quartile 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-03 DOI: 10.1002/aaai.70043
Danniell Hu, Diana Acosta Navas, Susanne Gaube, Hussein Mozannar, Matthew E. Taylor, Krishnamurthy Dvijotham, Elizabeth Bondi-Kelly

Artificial Intelligence (AI) systems increasingly shape many aspects of daily life, influencing our jobs, finances, healthcare, and online content. This expansion has led to the rise of human–AI systems, where humans communicate, collaborate, or otherwise interact with AI, such as using AI outputs to make decisions. While these systems have shown potential to enhance human capabilities and improve performance on benchmarks, evidence suggests that they often underperform compared to AI-only or human-only approaches in experiments and real-world applications. Here, we argue that human–AI systems should be developed with a greater emphasis on human-centered factors—such as usability, fairness, trust, and user autonomy—within the algorithmic design and evaluation process. We advocate for integrating human-centered principles into AI development through human-centered algorithmic design and contextual evaluation with real users. Drawing on interdisciplinary research and our tutorial at two major AI conferences, we highlight examples and strategies for AI researchers and practitioners to embed these principles effectively. This work offers a systematic synthesis that integrates technical, practical, and ethical insights into a unified framework. Additionally, we highlight critical ethical considerations, including fairness, labor, privacy, and human agency to ensure that systems meet performance goals while serving broader societal interests. Through this work, we aim to inspire the field to embrace a truly human-centered approach to algorithmic design and deployment.

Citations: 0