首页 > 最新文献

Applied AI letters最新文献

英文 中文
ChemQuery: A Natural Language Query-Driven Service for Comprehensive Exploration of Chemistry Patent Literature ChemQuery:用于化学专利文献综合检索的自然语言查询驱动服务
Pub Date : 2025-05-20 DOI: 10.1002/ail2.124
Shubham Gupta, Rafael Teixeira de Lima, Lokesh Mishra, Cesar Berrospi, Panagiotis Vagenas, Nikolaos Livathinos, Christoph Auer, Michele Dolfi, Peter Staar

Patents are integral to our shared scientific knowledge, requiring companies and inventors to stay informed about them to conduct research, find licensing opportunities, and manage legal risks. However, the rising rate of filings has made this task increasingly challenging over the years. To address this issue, we introduce ChemQuery, a tool for easily exploring chemistry-related patents using natural language questions. Traditional systems rely on simplistic keyword-based searches to find patents that might be relevant to a user's request. In contrast, ChemQuery uses up-to-date information to return specific answers, along with their sources. It also offers a more comprehensive search experience to the users, thanks to capabilities like extracting molecules from diagrams, integrating information from PubChem, and allowing complex queries about molecular structures. We conduct a thorough empirical evaluation of ChemQuery and compare it with several baseline approaches. The results highlight the practical utility and limitations of our tool.

专利是我们共享的科学知识中不可或缺的一部分,要求公司和发明者随时了解它们,以便进行研究,寻找许可机会,并管理法律风险。然而,多年来,申请率的上升使得这项任务越来越具有挑战性。为了解决这个问题,我们介绍了ChemQuery,一个使用自然语言问题轻松探索化学相关专利的工具。传统的系统依赖于简单的基于关键字的搜索来查找可能与用户请求相关的专利。相比之下,ChemQuery使用最新的信息来返回特定的答案及其来源。它还为用户提供了更全面的搜索体验,这要归功于从图表中提取分子、从PubChem中整合信息以及允许对分子结构进行复杂查询等功能。我们对ChemQuery进行了彻底的实证评估,并将其与几种基线方法进行了比较。结果突出了我们的工具的实用性和局限性。
{"title":"ChemQuery: A Natural Language Query-Driven Service for Comprehensive Exploration of Chemistry Patent Literature","authors":"Shubham Gupta,&nbsp;Rafael Teixeira de Lima,&nbsp;Lokesh Mishra,&nbsp;Cesar Berrospi,&nbsp;Panagiotis Vagenas,&nbsp;Nikolaos Livathinos,&nbsp;Christoph Auer,&nbsp;Michele Dolfi,&nbsp;Peter Staar","doi":"10.1002/ail2.124","DOIUrl":"https://doi.org/10.1002/ail2.124","url":null,"abstract":"<p>Patents are integral to our shared scientific knowledge, requiring companies and inventors to stay informed about them to conduct research, find licensing opportunities, and manage legal risks. However, the rising rate of filings has made this task increasingly challenging over the years. To address this issue, we introduce <span>ChemQuery</span>, a tool for easily exploring chemistry-related patents using natural language questions. Traditional systems rely on simplistic keyword-based searches to find patents that <i>might be</i> relevant to a user's request. In contrast, <span>ChemQuery</span> uses up-to-date information to return specific answers, along with their sources. It also offers a more comprehensive search experience to the users, thanks to capabilities like extracting molecules from diagrams, integrating information from PubChem, and allowing complex queries about molecular structures. We conduct a thorough empirical evaluation of <span>ChemQuery</span> and compare it with several baseline approaches. The results highlight the practical utility and limitations of our tool.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.124","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Developing an Intelligent Resume Screening Tool With AI-Driven Analysis and Recommendation Features
Pub Date : 2025-05-14 DOI: 10.1002/ail2.116
K. L. Abhishek, M. Niranjanamurthy, Shonit Aric, Syed Immamul Ansarullah, Anurag Sinha, G. Tejani, Mohd Asif Shah

Current resume screening relies on manual review, causing delays and errors in evaluating large volumes of resumes. Lack of automation and data extraction leads to inefficiencies and potential biases. Recruiters face challenges in identifying qualified candidates due to oversight and time constraints. Inconsistent evaluation criteria hinder decision-making. These issues result in prolonged hiring processes, missed opportunities, and potential bias in candidate selection. The goal of this project is to develop an AI-powered Resume Analysis and Recommendation Tool, catering to the trend of recruiters spending less than 2 min on each CV. The tool will rapidly analyze all resume components while providing personalized predictions and recommendations to applicants for improving their CVs. It will present user-friendly data for recruiters, facilitating export to CSV for integration into their recruitment processes. Additionally, the tool will offer insights and analytics on popular roles and skills within the job market. Its user section will enable applicants to continually test and track their resumes, encouraging repeat usage and driving traffic. Colleges can benefit from gaining insights into students' resumes before placements. Overall, this AI-powered tool aims to enhance the resume evaluation process, benefiting both job seekers and employers. The primary aim of this project is to develop a Resume Analyzer using Python, incorporating advanced libraries such as Pyresparser, NLTK (Natural Language Toolkit), and MySQL. This automated system offers an efficient solution for parsing, analyzing, and extracting essential information from resumes. The user-friendly interface, developed using Streamlit, allows for seamless resume uploading, insightful data visualization, and analytics. The Resume Analyzer significantly streamlines the resume screening process, providing recruiters with valuable insights and enhancing their decision-making capabilities.

目前的简历筛选依赖于人工审查,导致在评估大量简历时出现延迟和错误。缺乏自动化和数据提取会导致效率低下和潜在的偏见。由于监管和时间限制,招聘人员在识别合格候选人方面面临挑战。不一致的评价标准妨碍决策。这些问题会导致招聘过程延长,错失机会,以及在候选人选择中存在潜在的偏见。该项目的目标是开发一个人工智能驱动的简历分析和推荐工具,以迎合招聘人员在每份简历上花费不到2分钟的趋势。该工具将快速分析简历的所有组成部分,同时为申请人提供个性化的预测和建议,以改进他们的简历。它将为招聘人员提供用户友好的数据,便于导出到CSV,以便集成到招聘流程中。此外,该工具还将提供就业市场中热门职位和技能的见解和分析。它的用户区将允许申请人不断测试和跟踪他们的简历,鼓励重复使用并增加流量。大学可以通过在实习前了解学生的简历而受益。总的来说,这个人工智能工具旨在加强简历评估过程,使求职者和雇主都受益。该项目的主要目标是使用Python开发一个简历分析器,并结合Pyresparser、NLTK(自然语言工具包)和MySQL等高级库。这个自动化系统为解析、分析和从简历中提取重要信息提供了一个有效的解决方案。使用Streamlit开发的用户友好界面允许无缝上传简历,有洞察力的数据可视化和分析。简历分析器显著简化了简历筛选过程,为招聘人员提供了有价值的见解,提高了他们的决策能力。
{"title":"Developing an Intelligent Resume Screening Tool With AI-Driven Analysis and Recommendation Features","authors":"K. L. Abhishek,&nbsp;M. Niranjanamurthy,&nbsp;Shonit Aric,&nbsp;Syed Immamul Ansarullah,&nbsp;Anurag Sinha,&nbsp;G. Tejani,&nbsp;Mohd Asif Shah","doi":"10.1002/ail2.116","DOIUrl":"https://doi.org/10.1002/ail2.116","url":null,"abstract":"<p>Current resume screening relies on manual review, causing delays and errors in evaluating large volumes of resumes. Lack of automation and data extraction leads to inefficiencies and potential biases. Recruiters face challenges in identifying qualified candidates due to oversight and time constraints. Inconsistent evaluation criteria hinder decision-making. These issues result in prolonged hiring processes, missed opportunities, and potential bias in candidate selection. The goal of this project is to develop an AI-powered Resume Analysis and Recommendation Tool, catering to the trend of recruiters spending less than 2 min on each CV. The tool will rapidly analyze all resume components while providing personalized predictions and recommendations to applicants for improving their CVs. It will present user-friendly data for recruiters, facilitating export to CSV for integration into their recruitment processes. Additionally, the tool will offer insights and analytics on popular roles and skills within the job market. Its user section will enable applicants to continually test and track their resumes, encouraging repeat usage and driving traffic. Colleges can benefit from gaining insights into students' resumes before placements. Overall, this AI-powered tool aims to enhance the resume evaluation process, benefiting both job seekers and employers. The primary aim of this project is to develop a Resume Analyzer using Python, incorporating advanced libraries such as Pyresparser, NLTK (Natural Language Toolkit), and MySQL. This automated system offers an efficient solution for parsing, analyzing, and extracting essential information from resumes. The user-friendly interface, developed using Streamlit, allows for seamless resume uploading, insightful data visualization, and analytics. The Resume Analyzer significantly streamlines the resume screening process, providing recruiters with valuable insights and enhancing their decision-making capabilities.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143944817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Few-Shot Learning Approach for a Multilingual Agro-Information Question Answering System 多语言农业信息问答系统的单次学习方法
Pub Date : 2025-04-30 DOI: 10.1002/ail2.122
Fiskani Ella Banda, Vukosi Marivate, Joyce Nakatumba-Nabende

Across numerous households in Sub-Saharan Africa, agriculture plays a crucial role. One solution that can effectively bridge the support gap for farmers in the local community is a question–answer system based on agricultural expertise and agro-information. The more recent advancements in question answering research involve the use of large language models that are trained on an extensive amount of data. Due to this, conventional fine-tuning approaches have demonstrated a significant decline in performance when using a significantly smaller amount of data. One proposed alternative to address this decline is to use prompt-based fine-tuning, which allows the model to be fine-tuned with only a few examples, thus addressing the disparities between the objectives of pretraining and fine-tuning. Extensive research has been done on these methods, specifically on text classification and not question answering. In this research, our objective was to study the feasibility of recent few-shot learning approaches such as FewshotQA and Null-prompting for domain-specific agricultural data in four South African languages. We first explored creating a cross-lingual domain-specific extractive question answering dataset through an automated approach using the GPT model. Through exploratory data analysis, the GPT model was able to create a dataset, which requires minor improvements. We then evaluated the overall performance of the different approaches and investigated the effects of adapting these approaches to suit the new dataset. Results show these methods effectively capture semantic relationships and domain-specific terminology but exhibit limitations, including potential biases in automated annotation and plateauing F1 scores. This highlights the need for hybrid approaches that combine artificial intelligence and human supervision. Beyond academic insights, this study has practical significance for industry, demonstrating how prompt-based methods can help tailor AI models to specific use cases in low-resource settings.

在撒哈拉以南非洲的许多家庭中,农业发挥着至关重要的作用。一个能够有效弥合当地社区对农民支持差距的解决方案是基于农业专业知识和农业信息的问答系统。问答研究的最新进展涉及使用基于大量数据训练的大型语言模型。由于这个原因,传统的微调方法在使用更少的数据量时表现出明显的性能下降。解决这种下降的一种替代方案是使用基于提示的微调,它允许模型仅使用少数示例进行微调,从而解决预训练和微调目标之间的差异。对这些方法进行了广泛的研究,特别是在文本分类和非问答方面。在这项研究中,我们的目标是研究最近的几次学习方法的可行性,如在四种南非语言中对特定领域的农业数据的几次学习方法,如few-shot qa和null -prompt。我们首先探索了通过使用GPT模型的自动化方法创建跨语言特定领域的抽取问答数据集。通过探索性数据分析,GPT模型能够创建一个数据集,这需要微小的改进。然后,我们评估了不同方法的总体性能,并研究了调整这些方法以适应新数据集的效果。结果表明,这些方法可以有效地捕获语义关系和特定于领域的术语,但也存在局限性,包括自动注释中的潜在偏差和F1分数的稳定。这凸显了将人工智能和人类监督结合起来的混合方法的必要性。除了学术见解之外,这项研究对行业也具有实际意义,它展示了基于提示的方法如何帮助人工智能模型适应低资源环境下的特定用例。
{"title":"A Few-Shot Learning Approach for a Multilingual Agro-Information Question Answering System","authors":"Fiskani Ella Banda,&nbsp;Vukosi Marivate,&nbsp;Joyce Nakatumba-Nabende","doi":"10.1002/ail2.122","DOIUrl":"https://doi.org/10.1002/ail2.122","url":null,"abstract":"<p>Across numerous households in Sub-Saharan Africa, agriculture plays a crucial role. One solution that can effectively bridge the support gap for farmers in the local community is a question–answer system based on agricultural expertise and agro-information. The more recent advancements in question answering research involve the use of large language models that are trained on an extensive amount of data. Due to this, conventional fine-tuning approaches have demonstrated a significant decline in performance when using a significantly smaller amount of data. One proposed alternative to address this decline is to use prompt-based fine-tuning, which allows the model to be fine-tuned with only a few examples, thus addressing the disparities between the objectives of pretraining and fine-tuning. Extensive research has been done on these methods, specifically on text classification and not question answering. In this research, our objective was to study the feasibility of recent few-shot learning approaches such as FewshotQA and Null-prompting for domain-specific agricultural data in four South African languages. We first explored creating a cross-lingual domain-specific extractive question answering dataset through an automated approach using the GPT model. Through exploratory data analysis, the GPT model was able to create a dataset, which requires minor improvements. We then evaluated the overall performance of the different approaches and investigated the effects of adapting these approaches to suit the new dataset. Results show these methods effectively capture semantic relationships and domain-specific terminology but exhibit limitations, including potential biases in automated annotation and plateauing F1 scores. This highlights the need for hybrid approaches that combine artificial intelligence and human supervision. Beyond academic insights, this study has practical significance for industry, demonstrating how prompt-based methods can help tailor AI models to specific use cases in low-resource settings.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.122","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Practical Recommendations for Artificial Intelligence and Machine Learning in Antimicrobial Stewardship for Africa 非洲抗菌剂管理中人工智能和机器学习的实用建议
Pub Date : 2025-04-28 DOI: 10.1002/ail2.123
Tafadzwa Dzinamarira, Elliot Mbunge, Claire Steiner, Enos Moyo, Adewale Akinjeji, Kaunda Yamba, Loveday Mwila, Claude Mambo Muvunyi

The challenge of antimicrobial resistance (AMR) represents one of the most pressing global health crises, particularly, in resource-constrained settings like Africa. In this paper, we explore artificial intelligence (AI) and machine learning (ML) potential in transforming the potential for antimicrobial stewardship (AMS) to improve precision, efficiency, and effectiveness of antibiotic use. The deployment of AI-driven solutions presents unprecedented opportunities for optimizing treatment regimens, predicting resistance patterns, and improving clinical workflows. However, successfully integrating these technologies into Africa's health systems faces considerable obstacles, including limited human capacity and expertise, widespread public distrust, insufficient funding, inadequate infrastructure, fragmented data sources, and weak regulatory and policy enforcement. To harness the full potential of AI and ML in AMS, there is a need to first address these foundational barriers. Capacity-building initiatives are essential to equip healthcare professionals with the skills needed to leverage AI technologies effectively. Public trust must be cultivated through community engagement and transparent communication about the benefits and limitations of AI. Furthermore, technological solutions should be tailored to the unique constraints of resource-limited settings, with a focus on developing low-computational, explainable models that can operate with minimal infrastructure. Financial investment is critical to scaling successful pilot projects and integrating them into national health systems. Effective policy development is equally essential to establishing regulatory frameworks that ensure data security, algorithmic fairness, and ethical AI use. This comprehensive approach will not only improve the deployment of AI systems but also address the underlying issues that exacerbate AMR, such as unauthorized antibiotic sales and inadequate enforcement of guidelines. To effectively and sustainably combat AMR, a concerted effort involving governments, health organizations, communities, and technology developers is essential. Through collaborations and sharing a common goal, we can build resilient and effective AMS programs in Africa.

抗菌素耐药性的挑战是最紧迫的全球卫生危机之一,特别是在非洲等资源有限的环境中。在本文中,我们探讨了人工智能(AI)和机器学习(ML)在改变抗菌素管理(AMS)潜力方面的潜力,以提高抗生素使用的精度、效率和有效性。人工智能驱动解决方案的部署为优化治疗方案、预测耐药模式和改善临床工作流程提供了前所未有的机会。然而,将这些技术成功地纳入非洲卫生系统面临着相当大的障碍,包括人力能力和专业知识有限、公众普遍不信任、资金不足、基础设施不足、数据来源分散以及监管和政策执行不力。为了充分利用人工智能和机器学习在AMS中的潜力,需要首先解决这些基本障碍。能力建设举措对于使卫生保健专业人员掌握有效利用人工智能技术所需的技能至关重要。必须通过社区参与和关于人工智能的好处和局限性的透明沟通来培养公众信任。此外,技术解决方案应根据资源有限环境的独特限制进行调整,重点是开发可在最小基础设施下运行的低计算、可解释的模型。财政投资对于扩大成功的试点项目并将其纳入国家卫生系统至关重要。有效的政策制定对于建立确保数据安全、算法公平和合乎道德的人工智能使用的监管框架同样重要。这种全面的方法不仅可以改善人工智能系统的部署,还可以解决加剧抗生素耐药性的潜在问题,例如未经授权的抗生素销售和指南执行不力。为了有效和可持续地防治抗生素耐药性,各国政府、卫生组织、社区和技术开发人员必须共同努力。通过合作和共享共同目标,我们可以在非洲建立有弹性和有效的辅助医疗项目。
{"title":"Practical Recommendations for Artificial Intelligence and Machine Learning in Antimicrobial Stewardship for Africa","authors":"Tafadzwa Dzinamarira,&nbsp;Elliot Mbunge,&nbsp;Claire Steiner,&nbsp;Enos Moyo,&nbsp;Adewale Akinjeji,&nbsp;Kaunda Yamba,&nbsp;Loveday Mwila,&nbsp;Claude Mambo Muvunyi","doi":"10.1002/ail2.123","DOIUrl":"https://doi.org/10.1002/ail2.123","url":null,"abstract":"<p>The challenge of antimicrobial resistance (AMR) represents one of the most pressing global health crises, particularly, in resource-constrained settings like Africa. In this paper, we explore artificial intelligence (AI) and machine learning (ML) potential in transforming the potential for antimicrobial stewardship (AMS) to improve precision, efficiency, and effectiveness of antibiotic use. The deployment of AI-driven solutions presents unprecedented opportunities for optimizing treatment regimens, predicting resistance patterns, and improving clinical workflows. However, successfully integrating these technologies into Africa's health systems faces considerable obstacles, including limited human capacity and expertise, widespread public distrust, insufficient funding, inadequate infrastructure, fragmented data sources, and weak regulatory and policy enforcement. To harness the full potential of AI and ML in AMS, there is a need to first address these foundational barriers. Capacity-building initiatives are essential to equip healthcare professionals with the skills needed to leverage AI technologies effectively. Public trust must be cultivated through community engagement and transparent communication about the benefits and limitations of AI. Furthermore, technological solutions should be tailored to the unique constraints of resource-limited settings, with a focus on developing low-computational, explainable models that can operate with minimal infrastructure. Financial investment is critical to scaling successful pilot projects and integrating them into national health systems. Effective policy development is equally essential to establishing regulatory frameworks that ensure data security, algorithmic fairness, and ethical AI use. This comprehensive approach will not only improve the deployment of AI systems but also address the underlying issues that exacerbate AMR, such as unauthorized antibiotic sales and inadequate enforcement of guidelines. To effectively and sustainably combat AMR, a concerted effort involving governments, health organizations, communities, and technology developers is essential. Through collaborations and sharing a common goal, we can build resilient and effective AMS programs in Africa.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Building Text-to-Speech Models for Low-Resourced Languages From Crowdsourced Data 从众包数据中构建低资源语言的文本到语音模型
Pub Date : 2025-04-28 DOI: 10.1002/ail2.117
Andrew Katumba, Sulaiman Kagumire, Joyce Nakatumba-Nabende, John Quinn, Sudi Murindanyi

Text-to-speech (TTS) models have expanded the scope of digital inclusivity by becoming a basis for assistive communication technologies for visually impaired people, facilitating language learning, and allowing for digital textual content consumption in audio form across various sectors. Despite these benefits, the full potential of TTS models is often not realized for the majority of low-resourced African languages because they have traditionally required large amounts of high-quality single-speaker recordings, which are financially costly and time-consuming to obtain. In this paper, we demonstrate that crowdsourced recordings can help overcome the lack of single-speaker data by compensating with data from other speakers of similar intonation (how the voice rises and falls in speech). We fine-tuned an English variational inference with adversarial learning for an end-to-end text-to-speech (VITS) model on over 10 h of speech from six female common voice (CV) speech data speakers for Luganda and Kiswahili. A human mean opinion score evaluation on 100 test sentences shows that the model trained on six speakers sounds more natural than the benchmark models trained on two speakers and a single speaker for both languages. In addition to careful data curation, this approach shows promise for advancing speech synthesis in the context of low-resourced African languages. Our final models for Luganda and Kiswahili are available at https://huggingface.co/marconilab/VITS-commonvoice-females.

文本转语音(TTS)模式通过成为视障人士辅助通信技术的基础,促进语言学习,并允许各行业以音频形式消费数字文本内容,扩大了数字包容性的范围。尽管有这些好处,但对于大多数资源匮乏的非洲语言来说,TTS模式的全部潜力往往没有实现,因为它们传统上需要大量高质量的单语录音,而这些录音在经济上昂贵且耗时。在本文中,我们证明了众包录音可以通过补偿具有相似语调的其他说话者的数据(语音如何上升和下降)来帮助克服单个说话者数据的缺乏。我们对一个端到端文本到语音(VITS)模型的英语变分推理进行了对抗性学习,该模型使用了卢干达语和斯瓦希里语的六名女性通用语音(CV)语音数据说话者超过10小时的语音。对100个测试句子的人类平均意见评分评估表明,在六个说话者身上训练的模型听起来比在两个说话者和一个说话者身上训练的基准模型听起来更自然。除了仔细的数据管理之外,这种方法还显示出在资源匮乏的非洲语言背景下推进语音合成的希望。卢甘达语和斯瓦希里语的最终模型可在https://huggingface.co/marconilab/VITS-commonvoice-females上找到。
{"title":"Building Text-to-Speech Models for Low-Resourced Languages From Crowdsourced Data","authors":"Andrew Katumba,&nbsp;Sulaiman Kagumire,&nbsp;Joyce Nakatumba-Nabende,&nbsp;John Quinn,&nbsp;Sudi Murindanyi","doi":"10.1002/ail2.117","DOIUrl":"https://doi.org/10.1002/ail2.117","url":null,"abstract":"<p>Text-to-speech (TTS) models have expanded the scope of digital inclusivity by becoming a basis for assistive communication technologies for visually impaired people, facilitating language learning, and allowing for digital textual content consumption in audio form across various sectors. Despite these benefits, the full potential of TTS models is often not realized for the majority of low-resourced African languages because they have traditionally required large amounts of high-quality single-speaker recordings, which are financially costly and time-consuming to obtain. In this paper, we demonstrate that crowdsourced recordings can help overcome the lack of single-speaker data by compensating with data from other speakers of similar intonation (how the voice rises and falls in speech). We fine-tuned an English variational inference with adversarial learning for an end-to-end text-to-speech (VITS) model on over 10 h of speech from six female common voice (CV) speech data speakers for Luganda and Kiswahili. A human mean opinion score evaluation on 100 test sentences shows that the model trained on six speakers sounds more natural than the benchmark models trained on two speakers and a single speaker for both languages. In addition to careful data curation, this approach shows promise for advancing speech synthesis in the context of low-resourced African languages. Our final models for Luganda and Kiswahili are available at https://huggingface.co/marconilab/VITS-commonvoice-females.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Empowering Public Health: AI-Powered Security Solutions for AI-Driven Challenges 增强公共卫生能力:人工智能驱动的安全解决方案,应对人工智能驱动的挑战
Pub Date : 2025-04-20 DOI: 10.1002/ail2.119
Shahrukh Mushtaq, Qurra-Tul-Ain Hameeda

The escalating integration of artificial intelligence (AI) in public healthcare has raised a critical concern: the vast amounts of data being generated and utilised by AI language models are not adequately connected to privacy and security considerations. This study addresses the problem by exploring how AI language models can be used to enhance digital security in public healthcare while addressing challenges related to privacy and ethics. The research adopts a three-phase methodology: a bibliometric analysis of literature from the Scopus database to identify research trends, the generation of AI-driven scenarios refined by healthcare professionals and analysing AI responses using grounded theory. Two scenarios, focused on AI-driven clinical decision support systems and AI-powered telemedicine platforms, were validated by healthcare experts and tested using ChatGPT-4 and Gemini, two prominent AI models. While ChatGPT-4 produced contextually specific and diverse responses, Gemini's outputs were inconsistent and repetitive, highlighting discrepancies in their performance. These discrepancies are linked to the data used to train these models, implying that incorporating more specialised healthcare data could enhance performance; however, such data usage must align with ethical guidelines. The analysis found that human, organizational, and technological dimensions are critical for addressing security issues and promoting trust in healthcare systems utilising AI. While AI-generated scenarios are a valuable starting point, they must be mediated by medical professionals to ensure practical applicability. The findings provide a theoretical framework for handling AI-generated issues related to privacy and security concerns, which can be used for future empirical research to enhance digital security in public healthcare.

人工智能(AI)在公共医疗领域的不断整合引发了一个关键问题:人工智能语言模型生成和使用的大量数据没有充分考虑隐私和安全因素。本研究通过探索如何使用人工智能语言模型来增强公共医疗保健中的数字安全,同时解决与隐私和道德相关的挑战,解决了这个问题。该研究采用了三个阶段的方法:对Scopus数据库中的文献进行文献计量分析,以确定研究趋势,生成由医疗保健专业人员改进的人工智能驱动的场景,并使用扎根理论分析人工智能的反应。医疗专家验证了两个场景,重点是人工智能驱动的临床决策支持系统和人工智能驱动的远程医疗平台,并使用ChatGPT-4和Gemini这两种著名的人工智能模型进行了测试。虽然ChatGPT-4产生了上下文特定和多样化的反应,但Gemini的输出是不一致和重复的,突出了他们的表现差异。这些差异与用于训练这些模型的数据有关,这意味着纳入更专业的医疗保健数据可以提高性能;然而,这样的数据使用必须符合道德准则。分析发现,人员、组织和技术维度对于解决安全问题和促进对利用人工智能的医疗系统的信任至关重要。虽然人工智能生成的场景是一个有价值的起点,但它们必须由医疗专业人员调解,以确保实际适用性。研究结果为处理人工智能产生的与隐私和安全问题相关的问题提供了理论框架,可用于未来的实证研究,以增强公共医疗保健中的数字安全。
{"title":"Empowering Public Health: AI-Powered Security Solutions for AI-Driven Challenges","authors":"Shahrukh Mushtaq,&nbsp;Qurra-Tul-Ain Hameeda","doi":"10.1002/ail2.119","DOIUrl":"https://doi.org/10.1002/ail2.119","url":null,"abstract":"<p>The escalating integration of artificial intelligence (AI) in public healthcare has raised a critical concern: the vast amounts of data being generated and utilised by AI language models are not adequately connected to privacy and security considerations. This study addresses the problem by exploring how AI language models can be used to enhance digital security in public healthcare while addressing challenges related to privacy and ethics. The research adopts a three-phase methodology: a bibliometric analysis of literature from the Scopus database to identify research trends, the generation of AI-driven scenarios refined by healthcare professionals and analysing AI responses using grounded theory. Two scenarios, focused on AI-driven clinical decision support systems and AI-powered telemedicine platforms, were validated by healthcare experts and tested using ChatGPT-4 and Gemini, two prominent AI models. While ChatGPT-4 produced contextually specific and diverse responses, Gemini's outputs were inconsistent and repetitive, highlighting discrepancies in their performance. These discrepancies are linked to the data used to train these models, implying that incorporating more specialised healthcare data could enhance performance; however, such data usage must align with ethical guidelines. The analysis found that human, organizational, and technological dimensions are critical for addressing security issues and promoting trust in healthcare systems utilising AI. While AI-generated scenarios are a valuable starting point, they must be mediated by medical professionals to ensure practical applicability. The findings provide a theoretical framework for handling AI-generated issues related to privacy and security concerns, which can be used for future empirical research to enhance digital security in public healthcare.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.119","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Evaluating Adversarial Attacks Against Artificial Intelligence Systems in Application Deployments 评估应用程序部署中针对人工智能系统的对抗性攻击
Pub Date : 2025-04-20 DOI: 10.1002/ail2.121
Lera Leonteva

Businesses have invested billions into artificial intelligence (AI) applications, leading to a sharp rise in the number of AI applications being released to customers. Taking into account previous approaches to attacking machine learning models, we conduct a comparative analysis of adversarial attacks, contrasting large language models (LLMs) being deployed through application programming interfaces (APIs) with the same attacks against locally deployed models to evaluate the significance of security controls in production deployments on attack success in black-box environments. The article puts forward adversarial attacks that are adapted for remote model endpoints in order to create a threat model that can be used by security organizations to prioritize controls when deploying AI systems through APIs. This paper contributes: (1) a public repository of adversarial attacks adapted to handle remote models on https://github.com/l3ra/adversarial-ai, (2) benchmarking results of remote attacks comparing the effectiveness of attacks on remote models with those on local models, and (3) a framework for assessing future AI system deployment controls. By providing a practical framework for benchmarking the security of remote AI systems, this study contributes to the understanding of adversarial attacks in the context of natural language processing models deployed by production applications.

企业已向人工智能(AI)应用投入数十亿美元,导致向客户发布的人工智能应用数量急剧上升。考虑到以往攻击机器学习模型的方法,我们对对抗性攻击进行了对比分析,将通过应用编程接口(API)部署的大型语言模型(LLM)与针对本地部署模型的相同攻击进行对比,以评估生产部署中的安全控制对黑盒环境中攻击成功率的影响。文章提出了适用于远程模型端点的对抗性攻击,以创建一个威胁模型,供安全机构在通过 API 部署人工智能系统时优先考虑控制措施。本文的贡献包括:(1)在 https://github.com/l3ra/adversarial-ai 上提供了适用于远程模型的对抗性攻击的公共资源库;(2)远程攻击的基准测试结果,比较了远程模型攻击与本地模型攻击的有效性;以及(3)用于评估未来人工智能系统部署控制的框架。本研究为远程人工智能系统的安全性基准测试提供了一个实用框架,有助于人们了解在生产应用部署的自然语言处理模型背景下的对抗性攻击。
{"title":"Evaluating Adversarial Attacks Against Artificial Intelligence Systems in Application Deployments","authors":"Lera Leonteva","doi":"10.1002/ail2.121","DOIUrl":"https://doi.org/10.1002/ail2.121","url":null,"abstract":"<p>Businesses have invested billions into artificial intelligence (AI) applications, leading to a sharp rise in the number of AI applications being released to customers. Taking into account previous approaches to attacking machine learning models, we conduct a comparative analysis of adversarial attacks, contrasting large language models (LLMs) being deployed through application programming interfaces (APIs) with the same attacks against locally deployed models to evaluate the significance of security controls in production deployments on attack success in black-box environments. The article puts forward adversarial attacks that are adapted for remote model endpoints in order to create a threat model that can be used by security organizations to prioritize controls when deploying AI systems through APIs. This paper contributes: (1) a public repository of adversarial attacks adapted to handle remote models on https://github.com/l3ra/adversarial-ai, (2) benchmarking results of remote attacks comparing the effectiveness of attacks on remote models with those on local models, and (3) a framework for assessing future AI system deployment controls. By providing a practical framework for benchmarking the security of remote AI systems, this study contributes to the understanding of adversarial attacks in the context of natural language processing models deployed by production applications.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.121","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Improving Machine Learning Workflows Using the “Normative-Descriptive-Prescriptive” Decision Framework 使用“规范-描述-规范”决策框架改进机器学习工作流程
Pub Date : 2025-04-10 DOI: 10.1002/ail2.118
Naveen Gudigantala, Manaranjan Pradhan, Naga Vemprala

To maximize business value from artificial intelligence and machine learning (ML) systems, understanding what leads to the effective development and deployment of ML systems is crucial. While prior research primarily focused on technical aspects, important issues related to improving decision-making across ML workflows have been overlooked. This paper introduces a “normative-descriptive-prescriptive” decision framework to address this gap. Normative guidelines outline best practices, descriptive dimensions describe actual decision-making, and prescriptive elements provide recommendations to bridge gaps. The three-step framework analyzes decision-making in key ML pipeline phases, identifying gaps and offering prescriptions for improved model building. Key descriptive findings include rushed problem-solving with convenient data, use of inaccurate success metrics, underestimation of downstream impacts, limited roles of subject matter experts, use of non-representative data samples, prioritization of prediction over explanation, lack of formal verification processes, and challenges in monitoring production models. The paper highlights biases, incentive issues, and systematic disconnects in decision-making across the ML pipeline as contributors to descriptive shortcomings. Practitioners can use the framework to pinpoint gaps, develop prescriptive interventions, and build higher quality, ethical, and legally compliant ML systems.

为了最大限度地提高人工智能和机器学习(ML)系统的商业价值,了解导致ML系统有效开发和部署的因素至关重要。虽然之前的研究主要集中在技术方面,但与改进机器学习工作流程决策相关的重要问题被忽视了。本文引入了一个“规范-描述-规定”的决策框架来解决这一差距。规范性指南概述了最佳实践,描述性维度描述了实际的决策,而规定性要素提供了弥合差距的建议。该三步框架分析了关键ML管道阶段的决策,确定了差距,并为改进模型构建提供了处方。关键的描述性发现包括使用方便的数据快速解决问题,使用不准确的成功度量,低估下游影响,主题专家的作用有限,使用非代表性数据样本,预测优先于解释,缺乏正式的验证过程,以及监测生产模型中的挑战。这篇论文强调了偏见、激励问题和机器学习管道中决策的系统性脱节,这些都是造成描述性缺陷的原因。从业者可以使用该框架来查明差距,制定规范性干预措施,并建立更高质量、道德和法律合规的机器学习系统。
{"title":"Improving Machine Learning Workflows Using the “Normative-Descriptive-Prescriptive” Decision Framework","authors":"Naveen Gudigantala,&nbsp;Manaranjan Pradhan,&nbsp;Naga Vemprala","doi":"10.1002/ail2.118","DOIUrl":"https://doi.org/10.1002/ail2.118","url":null,"abstract":"<p>To maximize business value from artificial intelligence and machine learning (ML) systems, understanding what leads to the effective development and deployment of ML systems is crucial. While prior research primarily focused on technical aspects, important issues related to improving decision-making across ML workflows have been overlooked. This paper introduces a “normative-descriptive-prescriptive” decision framework to address this gap. Normative guidelines outline best practices, descriptive dimensions describe actual decision-making, and prescriptive elements provide recommendations to bridge gaps. The three-step framework analyzes decision-making in key ML pipeline phases, identifying gaps and offering prescriptions for improved model building. Key descriptive findings include rushed problem-solving with convenient data, use of inaccurate success metrics, underestimation of downstream impacts, limited roles of subject matter experts, use of non-representative data samples, prioritization of prediction over explanation, lack of formal verification processes, and challenges in monitoring production models. The paper highlights biases, incentive issues, and systematic disconnects in decision-making across the ML pipeline as contributors to descriptive shortcomings. Practitioners can use the framework to pinpoint gaps, develop prescriptive interventions, and build higher quality, ethical, and legally compliant ML systems.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.118","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143809827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Hybrid Deep Learning Paradigm for Robust Feature Extraction and Classification for Cataracts 一种用于白内障鲁棒特征提取和分类的混合深度学习范式
Pub Date : 2025-03-28 DOI: 10.1002/ail2.113
Akshay Bhuvaneswari Ramakrishnan, Mukunth Madavan, R. Manikandan, Amir H. Gandomi

The study suggests using a hybrid convolutional neural networks-support vector machines architecture to extract reliable characteristics from medical images and classify them as an ensemble using four different models. Manual processing of fundus images for the automated identification of ocular disorders is laborious, error-prone, and time-consuming. This necessitates computer-assisted technologies that can automatically identify different ocular illnesses from fundus images. The interpretation of the photos also plays a massive role in the diagnosis. Automating the diagnosing procedure reduces human mistakes and helps with early cataract detection. The oneDNN library available in the oneAPI Environment provided by Intel has been used to optimize all transfer learning models for better performance. The suggested approach is verified through a range of metrics in experiments using the openly accessible Ocular Disease Intelligent Recognition dataset. The MobileNet Model outperformed other transfer learning techniques with an accuracy of 0.9836.

该研究建议使用混合卷积神经网络-支持向量机架构从医学图像中提取可靠的特征,并使用四种不同的模型将它们分类为一个集合。人工处理眼底图像用于眼部疾病的自动识别是费力、容易出错和耗时的。这就需要能够从眼底图像中自动识别不同眼部疾病的计算机辅助技术。对照片的解读在诊断中也起着重要作用。自动化诊断程序减少了人为错误,并有助于早期发现白内障。英特尔提供的oneAPI环境中提供的oneDNN库已被用于优化所有迁移学习模型,以获得更好的性能。建议的方法通过使用开放访问的眼部疾病智能识别数据集的一系列实验指标进行验证。MobileNet模型以0.9836的准确率优于其他迁移学习技术。
{"title":"A Hybrid Deep Learning Paradigm for Robust Feature Extraction and Classification for Cataracts","authors":"Akshay Bhuvaneswari Ramakrishnan,&nbsp;Mukunth Madavan,&nbsp;R. Manikandan,&nbsp;Amir H. Gandomi","doi":"10.1002/ail2.113","DOIUrl":"https://doi.org/10.1002/ail2.113","url":null,"abstract":"<p>The study suggests using a hybrid convolutional neural networks-support vector machines architecture to extract reliable characteristics from medical images and classify them as an ensemble using four different models. Manual processing of fundus images for the automated identification of ocular disorders is laborious, error-prone, and time-consuming. This necessitates computer-assisted technologies that can automatically identify different ocular illnesses from fundus images. The interpretation of the photos also plays a massive role in the diagnosis. Automating the diagnosing procedure reduces human mistakes and helps with early cataract detection. The oneDNN library available in the oneAPI Environment provided by Intel has been used to optimize all transfer learning models for better performance. The suggested approach is verified through a range of metrics in experiments using the openly accessible Ocular Disease Intelligent Recognition dataset. The MobileNet Model outperformed other transfer learning techniques with an accuracy of 0.9836.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143717423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Technical Language Processing for Telecommunications Specifications 电信规范的技术语言处理
Pub Date : 2025-03-24 DOI: 10.1002/ail2.111
Felipe A. Rodriguez Y

Large Language Models (LLMs) are continuously being evaluated in more diverse contexts. However, they still have challenges when extracting information from highly specific internal technical documentation. One specific area of real-world technical documentation is telecommunications engineering, which could benefit from domain-specific LLMs. In this article, we expand the notion of Technical Language Processing (TLP) to the telecommunications domain by introducing and analyzing the format of technical specifications from a leading telecommunications equipment vendor. Additionally, we highlight the importance of use case definitions by introducing requirement property mapping for maximizing information extraction. Also, we recommend actions to mitigate the effect of the internal specifications format on information extraction, which can lead to LLM-friendly internal specifications. Finally, a PoC is built to showcase the improvements of our proposed framework.

大型语言模型(llm)在更多样化的环境中不断地被评估。然而,在从高度具体的内部技术文档中提取信息时,他们仍然面临挑战。实际技术文档的一个特定领域是电信工程,它可以从领域特定的llm中受益。在本文中,我们通过介绍和分析一家领先的电信设备供应商的技术规范格式,将技术语言处理(TLP)的概念扩展到电信领域。另外,我们通过引入需求属性映射来最大化信息提取来强调用例定义的重要性。此外,我们建议采取措施减轻内部规范格式对信息提取的影响,这可能导致llm友好的内部规范。最后,构建一个PoC来展示我们提出的框架的改进。
{"title":"Technical Language Processing for Telecommunications Specifications","authors":"Felipe A. Rodriguez Y","doi":"10.1002/ail2.111","DOIUrl":"https://doi.org/10.1002/ail2.111","url":null,"abstract":"<p>Large Language Models (LLMs) are continuously being evaluated in more diverse contexts. However, they still have challenges when extracting information from highly specific internal technical documentation. One specific area of real-world technical documentation is telecommunications engineering, which could benefit from domain-specific LLMs. In this article, we expand the notion of Technical Language Processing (TLP) to the telecommunications domain by introducing and analyzing the format of technical specifications from a leading telecommunications equipment vendor. Additionally, we highlight the importance of use case definitions by introducing requirement property mapping for maximizing information extraction. Also, we recommend actions to mitigate the effect of the internal specifications format on information extraction, which can lead to LLM-friendly internal specifications. Finally, a PoC is built to showcase the improvements of our proposed framework.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143689906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
期刊
Applied AI letters
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1