
Big Data and Cognitive Computing: Latest Publications

Advancing Dental Diagnostics: A Review of Artificial Intelligence Applications and Challenges in Dentistry
Pub Date : 2024-06-07 DOI: 10.3390/bdcc8060066
Dhiaa Musleh, Haya Almossaeed, Fay Balhareth, Ghadah Alqahtani, Norah Alobaidan, Jana Altalag, May Issa Aldossary
The rise of artificial intelligence has created and facilitated numerous everyday tasks in a variety of industries, including dentistry. Dentists have utilized X-rays for diagnosing patients' ailments for many years. However, the procedure is typically performed manually, which can be challenging and time-consuming for non-specialist practitioners and carries a significant risk of error. As a result, researchers have turned to machine and deep learning modeling approaches to precisely identify dental disorders from X-ray images. This review is motivated by the need to address these challenges and to explore the potential of AI to enhance diagnostic accuracy, efficiency, and reliability in dental practice. Although artificial intelligence is frequently employed in dentistry, the approaches' outcomes are still influenced by aspects such as dataset availability and quantity, class balance, and data interpretation capability. Consequently, it is critical to work with the research community to address these issues in order to identify the most effective approaches for use in ongoing investigations. This article, which is based on a literature review, provides a concise summary of the diagnosis process using X-ray imaging systems, offers a thorough understanding of the difficulties that dental researchers face, and presents an amalgamative evaluation of the performances and methodologies assessed using publicly available benchmarks.
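The review centers on machine and deep learning classifiers for dental X-ray images and on data issues such as class balance. As a concrete illustration only (not taken from the review), the following minimal sketch trains a small CNN classifier with inverse-frequency class weights to counter class imbalance; the dataset path, class layout, and hyperparameters are assumed.

# Minimal sketch (not from the review): a small CNN classifier for dental
# X-ray images with class weighting to address class imbalance.
# The folder path, class names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
from collections import Counter

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dental_xrays/train", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Inverse-frequency class weights address the class-balance issue the review highlights.
counts = Counter(train_set.targets)
weights = torch.tensor([1.0 / counts[c] for c in range(len(train_set.classes))])

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:  # one illustrative training epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()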
{"title":"Advancing Dental Diagnostics: A Review of Artificial Intelligence Applications and Challenges in Dentistry","authors":"Dhiaa Musleh, Haya Almossaeed, Fay Balhareth, Ghadah Alqahtani, Norah Alobaidan, Jana Altalag, May Issa Aldossary","doi":"10.3390/bdcc8060066","DOIUrl":"https://doi.org/10.3390/bdcc8060066","url":null,"abstract":"The rise of artificial intelligence has created and facilitated numerous everyday tasks in a variety of industries, including dentistry. Dentists have utilized X-rays for diagnosing patients’ ailments for many years. However, the procedure is typically performed manually, which can be challenging and time-consuming for non-specialized specialists and carries a significant risk of error. As a result, researchers have turned to machine and deep learning modeling approaches to precisely identify dental disorders using X-ray pictures. This review is motivated by the need to address these challenges and to explore the potential of AI to enhance diagnostic accuracy, efficiency, and reliability in dental practice. Although artificial intelligence is frequently employed in dentistry, the approaches’ outcomes are still influenced by aspects such as dataset availability and quantity, chapter balance, and data interpretation capability. Consequently, it is critical to work with the research community to address these issues in order to identify the most effective approaches for use in ongoing investigations. This article, which is based on a literature review, provides a concise summary of the diagnosis process using X-ray imaging systems, offers a thorough understanding of the difficulties that dental researchers face, and presents an amalgamative evaluation of the performances and methodologies assessed using publicly available benchmarks.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141371605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LLMs and NLP Models in Cryptocurrency Sentiment Analysis: A Comparative Classification Study
Pub Date : 2024-06-05 DOI: 10.3390/bdcc8060063
Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos
Cryptocurrencies are becoming increasingly prominent in financial investments, with more investors diversifying their portfolios and individuals drawn to their ease of use and decentralized financial opportunities. However, this accessibility also brings significant risks and rewards, often influenced by news and the sentiments of crypto investors, known as crypto signals. This paper explores the capabilities of large language models (LLMs) and natural language processing (NLP) models in analyzing sentiment from cryptocurrency-related news articles. We fine-tune state-of-the-art models such as GPT-4, BERT, and FinBERT for this specific task, evaluating their performance and comparing their effectiveness in sentiment classification. By leveraging these advanced techniques, we aim to enhance the understanding of sentiment dynamics in the cryptocurrency market, providing insights that can inform investment decisions and risk management strategies. The outcomes of this comparative study contribute to the broader discourse on applying advanced NLP models to cryptocurrency sentiment analysis, with implications for both academic research and practical applications in financial markets.
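As a rough illustration of the fine-tuning setup described above (not the authors' code), the sketch below fine-tunes a BERT-family financial sentiment model on crypto news headlines with the Hugging Face Trainer; the model choice (FinBERT), the tiny toy dataset, the label mapping, and the hyperparameters are all assumptions.

# Minimal sketch (not the authors' code): fine-tuning a BERT-family model for
# three-class crypto news sentiment. Dataset, labels, and hyperparameters are assumed.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["Bitcoin ETF approval boosts market confidence", "Exchange hack triggers sell-off"]
labels = [2, 0]  # assumed mapping: 0 = negative, 1 = neutral, 2 = positive

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert", num_labels=3)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finbert-crypto", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()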
{"title":"LLMs and NLP Models in Cryptocurrency Sentiment Analysis: A Comparative Classification Study","authors":"Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos","doi":"10.3390/bdcc8060063","DOIUrl":"https://doi.org/10.3390/bdcc8060063","url":null,"abstract":"Cryptocurrencies are becoming increasingly prominent in financial investments, with more investors diversifying their portfolios and individuals drawn to their ease of use and decentralized financial opportunities. However, this accessibility also brings significant risks and rewards, often influenced by news and the sentiments of crypto investors, known as crypto signals. This paper explores the capabilities of large language models (LLMs) and natural language processing (NLP) models in analyzing sentiment from cryptocurrency-related news articles. We fine-tune state-of-the-art models such as GPT-4, BERT, and FinBERT for this specific task, evaluating their performance and comparing their effectiveness in sentiment classification. By leveraging these advanced techniques, we aim to enhance the understanding of sentiment dynamics in the cryptocurrency market, providing insights that can inform investment decisions and risk management strategies. The outcomes of this comparative study contribute to the broader discourse on applying advanced NLP models to cryptocurrency sentiment analysis, with implications for both academic research and practical applications in financial markets.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141384032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating OLAP with NoSQL Databases in Big Data Environments: Systematic Mapping
Pub Date : 2024-06-05 DOI: 10.3390/bdcc8060064
Diana Martínez-Mosquera, Rosa Navarrete, Sergio Luján-Mora, Lorena Recalde, Andres Andrade-Cabrera
The growing importance of data analytics is leading to a shift in data management strategy at many companies, moving away from simple data storage towards adopting Online Analytical Processing (OLAP) query analysis. Concurrently, NoSQL databases are gaining ground as the preferred choice for storing and querying analytical data. This article presents a comprehensive, systematic mapping, aiming to consolidate research efforts related to the integration of OLAP with NoSQL databases in Big Data environments. After identifying 1646 initial research studies from scientific digital repositories, a thorough examination of their content resulted in the acceptance of 22 studies. Utilizing the snowballing technique, an additional three studies were selected, culminating in a final corpus of twenty-five relevant articles. This review addresses the growing importance of leveraging NoSQL databases for OLAP query analysis in response to increasing data analytics demands. By identifying the most commonly used NoSQL databases with OLAP, such as column-oriented and document-oriented, prevalent OLAP modeling methods, such as Relational Online Analytical Processing (ROLAP) and Multidimensional Online Analytical Processing (MOLAP), and suggested models for batch and real-time processing, among other results, this research provides a roadmap for organizations navigating the integration of OLAP with NoSQL. Additionally, exploring computational resource requirements and performance benchmarks facilitates informed decision making and promotes advancements in Big Data analytics. The main findings of this review provide valuable insights and updated information regarding the integration of OLAP cubes with NoSQL databases to benefit future research, industry practitioners, and academia alike. This consolidation of research efforts not only promotes innovative solutions but also promises reduced operational costs compared to traditional database systems.
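For a sense of what OLAP-style querying looks like on a document-oriented NoSQL store, here is a minimal sketch of a roll-up (total sales by region and month) expressed as a MongoDB aggregation pipeline; the connection string, collection, and field names are invented for illustration and do not come from the mapped studies.

# Minimal sketch: an OLAP-style roll-up (sum of sales by region and month) on a
# document-oriented NoSQL store via a MongoDB aggregation pipeline.
# Connection string, collection, and field names are illustrative assumptions;
# dates are assumed to be stored as ISO strings.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
sales = client["warehouse"]["sales"]

pipeline = [
    # dice: restrict to one year before aggregating
    {"$match": {"date": {"$gte": "2024-01-01", "$lt": "2025-01-01"}}},
    # roll-up: group facts by two dimensions and aggregate the measure
    {"$group": {
        "_id": {"region": "$region", "month": {"$substr": ["$date", 0, 7]}},
        "total_amount": {"$sum": "$amount"},
        "order_count": {"$sum": 1},
    }},
    {"$sort": {"_id.region": 1, "_id.month": 1}},
]

for row in sales.aggregate(pipeline):
    print(row["_id"]["region"], row["_id"]["month"], row["total_amount"], row["order_count"])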
{"title":"Integrating OLAP with NoSQL Databases in Big Data Environments: Systematic Mapping","authors":"Diana Martínez-Mosquera, Rosa Navarrete, Sergio Luján-Mora, Lorena Recalde, Andres Andrade-Cabrera","doi":"10.3390/bdcc8060064","DOIUrl":"https://doi.org/10.3390/bdcc8060064","url":null,"abstract":"The growing importance of data analytics is leading to a shift in data management strategy at many companies, moving away from simple data storage towards adopting Online Analytical Processing (OLAP) query analysis. Concurrently, NoSQL databases are gaining ground as the preferred choice for storing and querying analytical data. This article presents a comprehensive, systematic mapping, aiming to consolidate research efforts related to the integration of OLAP with NoSQL databases in Big Data environments. After identifying 1646 initial research studies from scientific digital repositories, a thorough examination of their content resulted in the acceptance of 22 studies. Utilizing the snowballing technique, an additional three studies were selected, culminating in a final corpus of twenty-five relevant articles. This review addresses the growing importance of leveraging NoSQL databases for OLAP query analysis in response to increasing data analytics demands. By identifying the most commonly used NoSQL databases with OLAP, such as column-oriented and document-oriented, prevalent OLAP modeling methods, such as Relational Online Analytical Processing (ROLAP) and Multidimensional Online Analytical Processing (MOLAP), and suggested models for batch and real-time processing, among other results, this research provides a roadmap for organizations navigating the integration of OLAP with NoSQL. Additionally, exploring computational resource requirements and performance benchmarks facilitates informed decision making and promotes advancements in Big Data analytics. The main findings of this review provide valuable insights and updated information regarding the integration of OLAP cubes with NoSQL databases to benefit future research, industry practitioners, and academia alike. This consolidation of research efforts not only promotes innovative solutions but also promises reduced operational costs compared to traditional database systems.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141382518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantifying Variations in Controversial Discussions within Kuwaiti Social Networks
Pub Date : 2024-06-04 DOI: 10.3390/bdcc8060060
Yeonjung Lee, Hana Alostad, Hasan Davulcu
During the COVID-19 pandemic, pro-vaccine and anti-vaccine groups emerged, influencing others to vaccinate or abstain and leading to polarized debates. Due to incomplete user data and the complexity of social network interactions, understanding the dynamics of these discussions is challenging. This study aims to discover and quantify the factors driving the controversy related to vaccine stances across Kuwaiti social networks. To tackle these challenges, a graph convolutional network (GCN) and feature propagation (FP) were utilized to accurately detect users’ stances despite incomplete features, achieving an accuracy of 96%. Additionally, the random walk controversy (RWC) score was employed to quantify polarization points within the social networks. Experiments were conducted using a dataset of vaccine-related retweets and discussions from X (formerly Twitter) during the Kuwait COVID-19 vaccine rollout period. The analysis revealed high polarization periods correlating with specific vaccination rates and governmental announcements. This research provides a novel approach to accurately detecting user stances in low-resource languages like the Kuwaiti dialect without the need for costly annotations, offering valuable insights to help policymakers understand public opinion and address misinformation effectively.
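The stance detector combines a graph convolutional network with feature propagation (FP) to cope with missing user features. Below is a minimal numpy sketch of the FP step alone: unknown node features are repeatedly replaced by the average of their neighbours' values while observed features stay fixed; the toy graph, feature values, and iteration count are illustrative assumptions.

# Minimal sketch of feature propagation (FP) for missing node features:
# unknown entries are iteratively set to the mean of their neighbours' values
# while observed entries are clamped to their known values.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # adjacency matrix of a tiny 4-user graph
X = np.array([[1.0], [0.0], [0.0], [0.0]])     # one feature per user (e.g., a stance signal)
known = np.array([True, False, False, True])   # users 0 and 3 have observed features
X0 = X.copy()                                  # keep the observed values for resetting

P = A / A.sum(axis=1, keepdims=True)           # row-normalised propagation matrix

for _ in range(40):                            # iterate to (approximate) convergence
    X = P @ X                                  # each node takes its neighbours' average
    X[known] = X0[known]                       # clamp observed features to their true values

print(X.round(3))                              # rows 1 and 2 now hold imputed features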
{"title":"Quantifying Variations in Controversial Discussions within Kuwaiti Social Networks","authors":"Yeonjung Lee, Hana Alostad, Hasan Davulcu","doi":"10.3390/bdcc8060060","DOIUrl":"https://doi.org/10.3390/bdcc8060060","url":null,"abstract":"During the COVID-19 pandemic, pro-vaccine and anti-vaccine groups emerged, influencing others to vaccinate or abstain and leading to polarized debates. Due to incomplete user data and the complexity of social network interactions, understanding the dynamics of these discussions is challenging. This study aims to discover and quantify the factors driving the controversy related to vaccine stances across Kuwaiti social networks. To tackle these challenges, a graph convolutional network (GCN) and feature propagation (FP) were utilized to accurately detect users’ stances despite incomplete features, achieving an accuracy of 96%. Additionally, the random walk controversy (RWC) score was employed to quantify polarization points within the social networks. Experiments were conducted using a dataset of vaccine-related retweets and discussions from X (formerly Twitter) during the Kuwait COVID-19 vaccine rollout period. The analysis revealed high polarization periods correlating with specific vaccination rates and governmental announcements. This research provides a novel approach to accurately detecting user stances in low-resource languages like the Kuwaiti dialect without the need for costly annotations, offering valuable insights to help policymakers understand public opinion and address misinformation effectively.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
XplAInable: Explainable AI Smoke Detection at the Edge
Pub Date : 2024-05-17 DOI: 10.3390/bdcc8050050
Alexander Lehnert, Falko Gawantka, Jonas During, Franz Just, Marc Reichenbach
Wild and forest fires pose a threat to forests and thereby, by extension, to wildlife and humanity. Recent history shows an increase in devastating damage caused by fires. Traditional fire detection systems, such as video surveillance, fail in the early stages of a rural forest fire. Such systems would see the fire only when the damage is already immense. Novel low-power smoke detection units based on gas sensors can detect smoke fumes in the early development stages of fires. The required proximity is only achieved using a distributed network of sensors interconnected via 5G. In the context of battery-powered sensor nodes, energy efficiency becomes a key metric. Using AI classification combined with XAI enables improved confidence regarding measurements. In this work, we present both a low-power gas sensor for smoke detection and a system elaboration regarding energy-efficient communication schemes and XAI-based evaluation. We show that leveraging edge processing in a smart way combined with buffered data samples in a 5G communication network yields optimal energy efficiency and rating results.
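As a stand-alone illustration of pairing a lightweight smoke classifier with a simple explainability measure (the paper's specific XAI technique is not reproduced here), the sketch below fits a small model on synthetic gas-sensor channels and reports permutation importance per channel; the channel names and data are assumptions.

# Minimal sketch: a lightweight classifier over gas-sensor channels with
# permutation importance as a simple, model-agnostic explainability measure.
# The synthetic data and channel names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
features = ["CO", "H2", "VOC", "humidity"]          # hypothetical sensor channels
X = rng.normal(size=(n, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)  # "smoke" label

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Which channels drive the smoke decision? Permutation importance gives a
# per-feature attribution that is cheap enough for small edge-style models.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")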
{"title":"XplAInable: Explainable AI Smoke Detection at the Edge","authors":"Alexander Lehnert, Falko Gawantka, Jonas During, Franz Just, Marc Reichenbach","doi":"10.3390/bdcc8050050","DOIUrl":"https://doi.org/10.3390/bdcc8050050","url":null,"abstract":"Wild and forest fires pose a threat to forests and thereby, in extension, to wild life and humanity. Recent history shows an increase in devastating damages caused by fires. Traditional fire detection systems, such as video surveillance, fail in the early stages of a rural forest fire. Such systems would see the fire only when the damage is immense. Novel low-power smoke detection units based on gas sensors can detect smoke fumes in the early development stages of fires. The required proximity is only achieved using a distributed network of sensors interconnected via 5G. In the context of battery-powered sensor nodes, energy efficiency becomes a key metric. Using AI classification combined with XAI enables improved confidence regarding measurements. In this work, we present both a low-power gas sensor for smoke detection and a system elaboration regarding energy-efficient communication schemes and XAI-based evaluation. We show that leveraging edge processing in a smart way combined with buffered data samples in a 5G communication network yields optimal energy efficiency and rating results.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140962701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Runtime Verification-Based Safe MARL for Optimized Safety Policy Generation for Multi-Robot Systems
Pub Date : 2024-05-16 DOI: 10.3390/bdcc8050049
Yang Liu, Jiankun Li
The intelligent warehouse is a modern logistics management system that uses technologies such as the Internet of Things, robots, and artificial intelligence to realize automated management and optimize warehousing operations. The multi-robot system (MRS) is an important vehicle for implementing an intelligent warehouse, completing various tasks in the warehouse through cooperation and coordination between robots. As an extension of reinforcement learning and a form of swarm intelligence, MARL (multi-agent reinforcement learning) can effectively be used to build multi-robot systems in intelligent warehouses. However, MARL-based multi-robot systems in intelligent warehouses face serious safety issues, such as collisions, conflicts, and congestion. To deal with these issues, this paper proposes a safe MARL method based on runtime verification, i.e., an optimized safety policy-generation framework, for multi-robot systems in intelligent warehouses. The framework consists of three stages. In the first stage, a runtime model SCMG (safety-constrained Markov Game) is defined for the multi-robot system at runtime in the intelligent warehouse. In the second stage, rPATL (probabilistic alternating-time temporal logic with rewards) is used to express safety properties, and the SCMG is cyclically verified and refined through runtime verification (RV) to ensure safety. This stage guarantees the safety of the robots' behaviors before training. In the third stage, the verified SCMG guides SCPO (safety-constrained policy optimization) to obtain an optimized safety policy for the robots. Finally, a multi-robot warehouse (RWARE) scenario is used for experimental evaluation. The results show that the policy obtained by our framework is safer than those of existing frameworks and includes a certain degree of optimization.
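The abstract's SCMG/rPATL verification relies on probabilistic model checking and cannot be meaningfully condensed here, but the underlying "check before acting" idea can be illustrated with a toy runtime shield that rejects grid moves leading to collisions; the grid model, move set, and fallback rule are purely illustrative and are not the paper's method.

# Toy sketch of runtime action shielding for warehouse robots on a grid:
# before executing the policy's proposed action, a runtime check rejects any
# move that would collide with another robot and falls back to a safe move.
# This only illustrates the "verify before acting" idea, not SCMG/rPATL.

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}

def next_pos(pos, action):
    dx, dy = MOVES[action]
    return (pos[0] + dx, pos[1] + dy)

def shield(robot_pos, proposed_action, other_positions):
    """Return the proposed action if it is safe, otherwise the first safe fallback."""
    candidates = [proposed_action] + [a for a in MOVES if a != proposed_action]
    for action in candidates:
        if next_pos(robot_pos, action) not in other_positions:
            return action
    return "stay"  # always defined in this toy model

# Example: the policy wants to move right into an occupied cell.
print(shield((2, 2), "right", {(3, 2), (2, 3)}))  # falls back to a collision-free move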
{"title":"Runtime Verification-Based Safe MARL for Optimized Safety Policy Generation for Multi-Robot Systems","authors":"Yang Liu, Jiankun Li","doi":"10.3390/bdcc8050049","DOIUrl":"https://doi.org/10.3390/bdcc8050049","url":null,"abstract":"The intelligent warehouse is a modern logistics management system that uses technologies like the Internet of Things, robots, and artificial intelligence to realize automated management and optimize warehousing operations. The multi-robot system (MRS) is an important carrier for implementing an intelligent warehouse, which completes various tasks in the warehouse through cooperation and coordination between robots. As an extension of reinforcement learning and a kind of swarm intelligence, MARL (multi-agent reinforcement learning) can effectively create the multi-robot systems in intelligent warehouses. However, MARL-based multi-robot systems in intelligent warehouses face serious safety issues, such as collisions, conflicts, and congestion. To deal with these issues, this paper proposes a safe MARL method based on runtime verification, i.e., an optimized safety policy-generation framework, for multi-robot systems in intelligent warehouses. The framework consists of three stages. In the first stage, a runtime model SCMG (safety-constrained Markov Game) is defined for the multi-robot system at runtime in the intelligent warehouse. In the second stage, rPATL (probabilistic alternating-time temporal logic with rewards) is used to express safety properties, and SCMG is cyclically verified and refined through runtime verification (RV) to ensure safety. This stage guarantees the safety of robots’ behaviors before training. In the third stage, the verified SCMG guides SCPO (safety-constrained policy optimization) to obtain an optimized safety policy for robots. Finally, a multi-robot warehouse (RWARE) scenario is used for experimental evaluation. The results show that the policy obtained by our framework is safer than existing frameworks and includes a certain degree of optimization.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140968544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced Linear and Vision Transformer-Based Architectures for Time Series Forecasting
Pub Date : 2024-05-16 DOI: 10.3390/bdcc8050048
Musleh Alharthi, Ausif Mahmood
Time series forecasting has been a challenging area in the field of Artificial Intelligence. Various approaches, such as linear neural networks, recurrent linear neural networks, Convolutional Neural Networks, and recently transformers, have been applied to the time series forecasting domain. Although transformer-based architectures have been outstanding in the Natural Language Processing domain, especially in autoregressive language modeling, the initial attempts to use transformers in the time series arena have met mixed success, with an important recent work indicating that simple linear networks outperform transformer-based designs. We investigate this paradox in detail, comparing linear neural network- and transformer-based designs and providing insights into why a certain approach may be better for a particular type of problem. We also improve upon the recently proposed simple linear neural network-based architecture by using dual pipelines with batch normalization and reversible instance normalization. Our enhanced architecture outperforms all existing architectures for time series forecasting on a majority of the popular benchmarks.
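To make the reversible instance normalization ingredient concrete, here is a minimal PyTorch sketch of a linear forecaster wrapped in RevIN-style normalize/denormalize steps; the tensor layout, lookback/horizon sizes, and the single-pipeline wiring are assumptions rather than the paper's exact dual-pipeline architecture.

# Minimal sketch (dimensions and wiring are assumptions): reversible instance
# normalization around a simple linear forecaster.
import torch
import torch.nn as nn

class RevINLinear(nn.Module):
    def __init__(self, lookback: int, horizon: int, eps: float = 1e-5):
        super().__init__()
        self.linear = nn.Linear(lookback, horizon)   # maps past steps to future steps per channel
        self.eps = eps

    def forward(self, x):                            # x: (batch, lookback, channels)
        mean = x.mean(dim=1, keepdim=True)           # per-instance statistics
        std = x.std(dim=1, keepdim=True) + self.eps
        x_norm = (x - mean) / std                    # normalize ...
        y_norm = self.linear(x_norm.transpose(1, 2)).transpose(1, 2)
        return y_norm * std + mean                   # ... and reverse the normalization

model = RevINLinear(lookback=96, horizon=24)
series = torch.randn(8, 96, 7)                       # 8 series, 96 past steps, 7 channels
print(model(series).shape)                           # torch.Size([8, 24, 7])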
{"title":"Enhanced Linear and Vision Transformer-Based Architectures for Time Series Forecasting","authors":"Musleh Alharthi, Ausif Mahmood","doi":"10.3390/bdcc8050048","DOIUrl":"https://doi.org/10.3390/bdcc8050048","url":null,"abstract":"Time series forecasting has been a challenging area in the field of Artificial Intelligence. Various approaches such as linear neural networks, recurrent linear neural networks, Convolutional Neural Networks, and recently transformers have been attempted for the time series forecasting domain. Although transformer-based architectures have been outstanding in the Natural Language Processing domain, especially in autoregressive language modeling, the initial attempts to use transformers in the time series arena have met mixed success. A recent important work indicating simple linear networks outperform transformer-based designs. We investigate this paradox in detail comparing the linear neural network- and transformer-based designs, providing insights into why a certain approach may be better for a particular type of problem. We also improve upon the recently proposed simple linear neural network-based architecture by using dual pipelines with batch normalization and reversible instance normalization. Our enhanced architecture outperforms all existing architectures for time series forecasting on a majority of the popular benchmarks.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140968682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
International Classification of Diseases Prediction from MIMIC-III Clinical Text Using Pre-Trained ClinicalBERT and NLP Deep Learning Models Achieving State of the Art
Pub Date : 2024-05-10 DOI: 10.3390/bdcc8050047
Ilyas Aden, Christopher H. T. Child, C. Reyes-Aldasoro
The International Classification of Diseases (ICD) serves as a widely employed framework for assigning diagnosis codes to electronic health records of patients. These codes facilitate the encapsulation of diagnoses and procedures conducted during a patient's hospitalisation. This study aims to devise a predictive model for ICD codes based on the MIMIC-III clinical text dataset. Leveraging natural language processing techniques and deep learning architectures, we constructed a pipeline to distill pertinent information from the MIMIC-III dataset: the Medical Information Mart for Intensive Care III (MIMIC-III), a sizable, de-identified, and publicly accessible repository of medical records. Our method entails predicting diagnosis codes from unstructured data, such as discharge summaries and notes encompassing symptoms. We used state-of-the-art deep learning algorithms, such as recurrent neural networks (RNNs), long short-term memory (LSTM) networks, bidirectional LSTM (BiLSTM), and BERT models after tokenizing the clinical text with Bio-ClinicalBERT, a pre-trained model from Hugging Face. To evaluate the efficacy of our approach, we conducted experiments utilizing the discharge dataset within MIMIC-III. Employing the BERT model, our methodology exhibited commendable accuracy in predicting the top 10 and top 50 diagnosis codes within the MIMIC-III dataset, achieving average accuracies of 88% and 80%, respectively. In comparison to recent studies by Biseda and Kerang, as well as Gangavarapu, which reported F1 scores of 0.72 in predicting the top 10 ICD-10 codes, our model demonstrated better performance, with an F1 score of 0.87. Similarly, in predicting the top 50 ICD-10 codes, previous research achieved an F1 score of 0.75, whereas our method attained an F1 score of 0.81. These results underscore the better performance of deep learning models over conventional machine learning approaches in this domain, thus validating our findings. The ability to predict diagnoses early from clinical notes holds promise in assisting doctors or physicians in determining effective treatments, thereby reshaping the conventional paradigm of diagnosis-then-treatment care. Our code is available online.
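A minimal sketch of the tokenization-plus-classification setup (not the authors' pipeline): discharge text is tokenized with Bio-ClinicalBERT and scored by a multi-label head over a handful of ICD codes. MIMIC-III notes are access-restricted, so the note, the code list, and the untrained classification head are placeholders.

# Minimal sketch (not the authors' pipeline): Bio-ClinicalBERT tokenization and
# a multi-label head over a small, illustrative set of ICD codes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

icd_codes = ["401.9", "428.0", "427.31", "414.01"]    # illustrative codes only
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "emilyalsentzer/Bio_ClinicalBERT",
    num_labels=len(icd_codes),
    problem_type="multi_label_classification",        # one note can carry many codes
)

note = "Patient admitted with acute decompensated heart failure and atrial fibrillation."
inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # independent probability per code

for code, p in zip(icd_codes, probs):
    print(f"{code}: {p:.2f}")                         # untrained head, values are placeholders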
{"title":"International Classification of Diseases Prediction from MIMIIC-III Clinical Text Using Pre-Trained ClinicalBERT and NLP Deep Learning Models Achieving State of the Art","authors":"Ilyas Aden, Christopher H. T. Child, C. Reyes-Aldasoro","doi":"10.3390/bdcc8050047","DOIUrl":"https://doi.org/10.3390/bdcc8050047","url":null,"abstract":"The International Classification of Diseases (ICD) serves as a widely employed framework for assigning diagnosis codes to electronic health records of patients. These codes facilitate the encapsulation of diagnoses and procedures conducted during a patient’s hospitalisation. This study aims to devise a predictive model for ICD codes based on the MIMIC-III clinical text dataset. Leveraging natural language processing techniques and deep learning architectures, we constructed a pipeline to distill pertinent information from the MIMIC-III dataset: the Medical Information Mart for Intensive Care III (MIMIC-III), a sizable, de-identified, and publicly accessible repository of medical records. Our method entails predicting diagnosis codes from unstructured data, such as discharge summaries and notes encompassing symptoms. We used state-of-the-art deep learning algorithms, such as recurrent neural networks (RNNs), long short-term memory (LSTM) networks, bidirectional LSTM (BiLSTM) and BERT models after tokenizing the clinical test with Bio-ClinicalBERT, a pre-trained model from Hugging Face. To evaluate the efficacy of our approach, we conducted experiments utilizing the discharge dataset within MIMIC-III. Employing the BERT model, our methodology exhibited commendable accuracy in predicting the top 10 and top 50 diagnosis codes within the MIMIC-III dataset, achieving average accuracies of 88% and 80%, respectively. In comparison to recent studies by Biseda and Kerang, as well as Gangavarapu, which reported F1 scores of 0.72 in predicting the top 10 ICD-10 codes, our model demonstrated better performance, with an F1 score of 0.87. Similarly, in predicting the top 50 ICD-10 codes, previous research achieved an F1 score of 0.75, whereas our method attained an F1 score of 0.81. These results underscore the better performance of deep learning models over conventional machine learning approaches in this domain, thus validating our findings. The ability to predict diagnoses early from clinical notes holds promise in assisting doctors or physicians in determining effective treatments, thereby reshaping the conventional paradigm of diagnosis-then-treatment care. Our code is available online.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140992033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification
Pub Date : 2024-04-18 DOI: 10.3390/bdcc8040043
Jinshuo Liu, Lu Yang
Classification methods based on fine-tuning pre-trained language models often require a large number of labeled samples; therefore, few-shot text classification has attracted considerable attention. Prompt learning is an effective method for addressing few-shot text classification tasks in low-resource settings. The essence of prompt tuning is to insert tokens into the input, thereby converting a text classification task into a masked language modeling problem. However, constructing appropriate prompt templates and verbalizers remains challenging, as manual prompts often require expert knowledge, while auto-constructing prompts is time-consuming. In addition, the extensive knowledge contained in entities and relations should not be ignored. To address these issues, we propose a structured knowledge prompt tuning (SKPT) method, which is a knowledge-enhanced prompt tuning approach. Specifically, SKPT includes three components: prompt template, prompt verbalizer, and training strategies. First, we insert virtual tokens into the prompt template based on open triples to introduce external knowledge. Second, we use an improved knowledgeable verbalizer to expand and filter the label words. Finally, we use structured knowledge constraints during the training phase to optimize the model. Through extensive experiments on few-shot text classification tasks with different settings, the effectiveness of our model has been demonstrated.
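To ground the template/verbalizer terminology, the sketch below runs plain prompt-based classification with a masked language model and a hand-written verbalizer mapping label words to classes; SKPT's knowledge-injected virtual tokens, verbalizer expansion, and structured constraints are not reproduced, and the template, labels, and label words are assumptions.

# Minimal sketch of prompt-based classification with a masked LM and a
# hand-written verbalizer (label -> label word). Template and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

verbalizer = {"business": "business", "sports": "sports", "science": "science"}
text = "The central bank raised interest rates again this quarter."
prompt = f"{text} This topic is about {tokenizer.mask_token}."   # template turns classification into MLM

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    mask_logits = model(**inputs).logits[0, mask_pos]

for label, word in verbalizer.items():
    token_id = tokenizer.convert_tokens_to_ids(word)
    print(label, float(mask_logits[token_id]))   # higher logit for the label word -> more likely class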
{"title":"Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification","authors":"Jinshuo Liu, Lu Yang","doi":"10.3390/bdcc8040043","DOIUrl":"https://doi.org/10.3390/bdcc8040043","url":null,"abstract":"Classification methods based on fine-tuning pre-trained language models often require a large number of labeled samples; therefore, few-shot text classification has attracted considerable attention. Prompt learning is an effective method for addressing few-shot text classification tasks in low-resource settings. The essence of prompt tuning is to insert tokens into the input, thereby converting a text classification task into a masked language modeling problem. However, constructing appropriate prompt templates and verbalizers remains challenging, as manual prompts often require expert knowledge, while auto-constructing prompts is time-consuming. In addition, the extensive knowledge contained in entities and relations should not be ignored. To address these issues, we propose a structured knowledge prompt tuning (SKPT) method, which is a knowledge-enhanced prompt tuning approach. Specifically, SKPT includes three components: prompt template, prompt verbalizer, and training strategies. First, we insert virtual tokens into the prompt template based on open triples to introduce external knowledge. Second, we use an improved knowledgeable verbalizer to expand and filter the label words. Finally, we use structured knowledge constraints during the training phase to optimize the model. Through extensive experiments on few-shot text classification tasks with different settings, the effectiveness of our model has been demonstrated.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140688770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data Sorting Influence on Short Text Manual Labeling Quality for Hierarchical Classification
Pub Date : 2024-04-07 DOI: 10.3390/bdcc8040041
Olga Narushynska, V. Teslyuk, Anastasiya Doroshenko, Maksym Arzubov
The precise categorization of brief texts holds significant importance in various applications within the ever-changing realm of artificial intelligence (AI) and natural language processing (NLP). Short texts are everywhere in the digital world, from social media updates to customer reviews and feedback. Nevertheless, short texts’ limited length and context pose unique challenges for accurate classification. This research article delves into the influence of data sorting methods on the quality of manual labeling in hierarchical classification, with a particular focus on short texts. The study is set against the backdrop of the increasing reliance on manual labeling in AI and NLP, highlighting its significance in the accuracy of hierarchical text classification. Methodologically, the study integrates AI, notably zero-shot learning, with human annotation processes to examine the efficacy of various data-sorting strategies. The results demonstrate how different sorting approaches impact the accuracy and consistency of manual labeling, a critical aspect of creating high-quality datasets for NLP applications. The study’s findings reveal a significant time efficiency improvement in terms of labeling, where ordered manual labeling required 760 min per 1000 samples, compared to 800 min for traditional manual labeling, illustrating the practical benefits of optimized data sorting strategies. Comparatively, ordered manual labeling achieved the highest mean accuracy rates across all hierarchical levels, with figures reaching up to 99% for segments, 95% for families, 92% for classes, and 90% for bricks, underscoring the efficiency of structured data sorting. It offers valuable insights and practical guidelines for improving labeling quality in hierarchical classification tasks, thereby advancing the precision of text analysis in AI-driven research. This abstract encapsulates the article’s background, methods, results, and conclusions, providing a comprehensive yet succinct study overview.
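One plausible way to realise the "ordered manual labeling" idea with zero-shot learning is sketched below (not the study's exact procedure): short texts are pre-classified with a zero-shot model and then presented to annotators grouped by predicted class and sorted by confidence; the candidate labels and example texts are invented.

# Minimal sketch: presorting short texts with zero-shot classification so that
# annotators see items grouped by predicted class and ordered by confidence.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

texts = [
    "Battery drains within two hours of normal use.",
    "Delivery arrived three days earlier than promised.",
    "The mobile app crashes every time I open settings.",
]
candidate_labels = ["product quality", "delivery", "software bug"]

predictions = [classifier(t, candidate_labels) for t in texts]
# Sort by (top predicted label, descending confidence) so similar items are
# presented together and the clearest cases come first.
ordered = sorted(zip(texts, predictions),
                 key=lambda tp: (tp[1]["labels"][0], -tp[1]["scores"][0]))

for text, pred in ordered:
    print(f"{pred['labels'][0]:>15} {pred['scores'][0]:.2f}  {text}")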
{"title":"Data Sorting Influence on Short Text Manual Labeling Quality for Hierarchical Classification","authors":"Olga Narushynska, V. Teslyuk, Anastasiya Doroshenko, Maksym Arzubov","doi":"10.3390/bdcc8040041","DOIUrl":"https://doi.org/10.3390/bdcc8040041","url":null,"abstract":"The precise categorization of brief texts holds significant importance in various applications within the ever-changing realm of artificial intelligence (AI) and natural language processing (NLP). Short texts are everywhere in the digital world, from social media updates to customer reviews and feedback. Nevertheless, short texts’ limited length and context pose unique challenges for accurate classification. This research article delves into the influence of data sorting methods on the quality of manual labeling in hierarchical classification, with a particular focus on short texts. The study is set against the backdrop of the increasing reliance on manual labeling in AI and NLP, highlighting its significance in the accuracy of hierarchical text classification. Methodologically, the study integrates AI, notably zero-shot learning, with human annotation processes to examine the efficacy of various data-sorting strategies. The results demonstrate how different sorting approaches impact the accuracy and consistency of manual labeling, a critical aspect of creating high-quality datasets for NLP applications. The study’s findings reveal a significant time efficiency improvement in terms of labeling, where ordered manual labeling required 760 min per 1000 samples, compared to 800 min for traditional manual labeling, illustrating the practical benefits of optimized data sorting strategies. Comparatively, ordered manual labeling achieved the highest mean accuracy rates across all hierarchical levels, with figures reaching up to 99% for segments, 95% for families, 92% for classes, and 90% for bricks, underscoring the efficiency of structured data sorting. It offers valuable insights and practical guidelines for improving labeling quality in hierarchical classification tasks, thereby advancing the precision of text analysis in AI-driven research. This abstract encapsulates the article’s background, methods, results, and conclusions, providing a comprehensive yet succinct study overview.","PeriodicalId":505155,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140733772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0