
BenchCouncil Transactions on Benchmarks, Standards and Evaluations: Latest Publications

Algorithmic fairness in social context
Pub Date : 2023-09-01 DOI: 10.1016/j.tbench.2023.100137
Yunyou Huang , Wenjing Liu , Wanling Gao , Xiangjiang Lu , Xiaoshuang Liang , Zhengxin Yang , Hongxiao Li , Li Ma , Suqin Tang

Algorithmic fairness research is currently receiving significant attention, aiming to ensure that algorithms do not discriminate between different groups or between individuals with similar characteristics. However, with the popularization of algorithms in all aspects of society, algorithms have changed from mere instruments into social infrastructure. For instance, facial recognition algorithms are widely used to provide user verification services and have become an indispensable part of many social infrastructures such as transportation and health care. As an instrument, an algorithm needs to attend to the fairness of its own behavior. As social infrastructure, however, it needs to pay even more attention to its impact on social fairness. Otherwise, it may exacerbate existing inequities or create new ones. For example, if an algorithm treats all passengers equally and eliminates special seats for pregnant women in the interest of fairness, it will increase the risk pregnant women face on public transport and indirectly damage their right to fair travel. Therefore, algorithms have a responsibility to ensure social fairness, not just fairness within their own operations. It is now time to expand the concept of algorithmic fairness beyond mere behavioral equity, to assess algorithms in a broader societal context, and to examine whether they uphold and promote social fairness. This article analyzes the current status and challenges of algorithmic fairness from three key perspectives: fairness definitions, fairness datasets, and fairness algorithms. Furthermore, potential directions and strategies to promote algorithmic fairness are proposed.
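Group-fairness definitions of the kind the article surveys can be made concrete. Below is a minimal sketch (hypothetical data, standard library only, not from the paper) computing the demographic parity gap: the difference in positive-prediction rates between two groups.

```python
# Demographic parity: compare positive-outcome rates across groups.
# The predictions and group labels below are made up for illustration.

def positive_rate(preds, groups, group):
    """Fraction of positive predictions among members of `group`."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                     # binary model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]     # protected attribute
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A gap of zero means both groups receive positive decisions at the same rate; the article's point is that such behavioral metrics alone do not capture an algorithm's wider effect on social fairness.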

Volume 3, Issue 3, Article 100137.
Citations: 0
Benchmarking, ethical alignment, and evaluation framework for conversational AI: Advancing responsible development of ChatGPT
Pub Date : 2023-09-01 DOI: 10.1016/j.tbench.2023.100136
Partha Pratim Ray

Conversational AI systems like ChatGPT have seen remarkable advancements in recent years, revolutionizing human–computer interactions. However, evaluating the performance and ethical implications of these systems remains a challenge. This paper delves into the creation of rigorous benchmarks, adaptable standards, and an intelligent evaluation methodology tailored specifically for ChatGPT. We meticulously analyze several prominent benchmarks, including GLUE, SuperGLUE, SQuAD, CoQA, Persona-Chat, DSTC, BIG-Bench, HELM, and MMLU, illuminating their strengths and limitations. This paper also scrutinizes the existing standards set by OpenAI, IEEE's Ethically Aligned Design, the Montreal Declaration, and the Partnership on AI's Tenets, investigating their relevance to ChatGPT. Further, we propose adaptive standards that encapsulate ethical considerations, context adaptability, and community involvement. In terms of evaluation, we explore traditional methods such as BLEU, ROUGE, METEOR, precision–recall, F1 score, perplexity, and user feedback, while also proposing a novel evaluation approach that harnesses the power of reinforcement learning. Our proposed evaluation framework is multidimensional, incorporating task-specific, real-world application, and multi-turn dialogue benchmarks. We perform feasibility, SWOT, and adaptability analyses of the proposed framework. The framework highlights the significance of user feedback, integrating it as a core component of evaluation alongside subjective assessments and interactive evaluation sessions. By amalgamating these elements, this paper contributes to the development of a comprehensive evaluation framework that fosters responsible and impactful advancement in the field of conversational AI.
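Of the traditional metrics the abstract lists, precision–recall and F1 have exact, simple definitions. A minimal standard-library sketch with toy labels (illustrative only, not the paper's evaluation code):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy gold labels and predictions for illustration.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```

F1 is the harmonic mean of precision and recall, which is why frameworks like the one proposed pair it with user-facing signals such as feedback rather than relying on it alone.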

Volume 3, Issue 3, Article 100136.
Citations: 4
MetaverseBench: Instantiating and benchmarking metaverse challenges
Pub Date : 2023-09-01 DOI: 10.1016/j.tbench.2023.100138
Hainan Ye , Lei Wang

The rapid evolution of the metaverse has led to the emergence of numerous metaverse technologies and products. From a computer systems perspective, the metaverse system is a complex, large-scale system that integrates various state-of-the-art technologies, including AI, blockchain, big data, and AR/VR. It also spans multiple platforms, such as IoT devices, edge nodes, and data centers, and diverse hardware, including CPUs, GPUs, NPUs, and 3D glasses. Integrating these technologies and components to build a holistic system poses a significant challenge for system designers. The first step towards building the metaverse is to instantiate and evaluate the challenges and provide a comprehensive benchmark suite. However, to the best of our knowledge, no existing benchmark defines the metaverse challenges and evaluates state-of-the-art solutions from a holistic perspective. In this paper, we instantiate metaverse challenges from a system perspective and propose MetaverseBench, a holistic and comprehensive metaverse benchmark suite. Our preliminary experiments indicate that existing system performance falls short of metaverse requirements by two orders of magnitude on average.

Volume 3, Issue 3, Article 100138.
Citations: 0
Mind meets machine: Unravelling GPT-4’s cognitive psychology
Pub Date : 2023-09-01 DOI: 10.1016/j.tbench.2023.100139
Sifatkaur Dhingra , Manmeet Singh , Vaisakh S.B. , Neetiraj Malviya , Sukhpal Singh Gill

Cognitive psychology delves into understanding perception, attention, memory, language, problem-solving, decision-making, and reasoning. Large Language Models (LLMs) are emerging as potent tools increasingly capable of performing human-level tasks. The recent development of the Generative Pre-trained Transformer 4 (GPT-4), and its demonstrated success on tasks that are complex for humans, such as exams and difficult problems, has increased confidence that LLMs can become perfect instruments of intelligence. Although the GPT-4 report has shown its performance on some cognitive psychology tasks, a comprehensive assessment of GPT-4 via existing, well-established datasets is required. In this study, we focus on evaluating GPT-4's performance on a set of cognitive psychology datasets: CommonsenseQA, SuperGLUE, MATH, and HANS. In doing so, we examine how GPT-4 processes and integrates cognitive psychology with contextual information, providing insight into the underlying cognitive processes that enable it to generate its responses. We show that GPT-4 exhibits a high level of accuracy on cognitive psychology tasks relative to prior state-of-the-art models. Our results strengthen the already available assessments of, and confidence in, GPT-4's cognitive psychology abilities. It has significant potential to revolutionise the field of Artificial Intelligence (AI) by enabling machines to bridge the gap between human and machine reasoning.
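The per-dataset accuracy evaluation described here can be sketched generically. In the sketch below, `toy_model` is a hypothetical stand-in for any question-answering model (it is not the paper's GPT-4 harness); the harness simply scores exact-match accuracy per named dataset:

```python
def evaluate(model, datasets):
    """Score a model by exact-match accuracy on each named dataset.

    `model` is any callable question -> answer; `datasets` maps a
    dataset name to a list of (question, gold_answer) pairs.
    """
    scores = {}
    for name, examples in datasets.items():
        correct = sum(1 for q, gold in examples if model(q) == gold)
        scores[name] = correct / len(examples)
    return scores

# Hypothetical stand-in model and tiny datasets, for illustration only.
toy_model = {"2+2?": "4", "capital of France?": "Paris"}.get
datasets = {
    "arithmetic": [("2+2?", "4")],
    "geography":  [("capital of France?", "Paris"), ("capital of Spain?", "Madrid")],
}
print(evaluate(toy_model, datasets))  # {'arithmetic': 1.0, 'geography': 0.5}
```

Real benchmark runs differ mainly in scale and in the answer-matching rule (exact match vs. normalized or multiple-choice scoring), but the accuracy-per-dataset shape is the same.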

Volume 3, Issue 3, Article 100139.
Citations: 0
2023 BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination
Pub Date : 2023-06-01 DOI: 10.1016/S2772-4859(23)00047-9

The BenchCouncil Distinguished Doctoral Dissertation Award recognizes and encourages superior research and writing by doctoral candidates in the broad field of the benchmarking community. This year, the award consists of two tracks: a Computer Architecture track and an Other Areas track. Each track carries a $1,000 honorarium and has its own nomination submission form and award subcommittee. For each track, all candidates are encouraged to submit articles to BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench). Among the submissions in each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the BenchCouncil Bench 2023 conference and to contribute research articles to TBench. Finally, one of the four finalists in each track will receive the award. More information is available from https://www.benchcouncil.org/awards/index.html#DistinguishedDoctoralDissertation

Important Dates:
Nomination deadline: October 15, 2023, at 11:59 PM AoE
Conference date: December 3–5, 2023
Online nomination forms:
Computer Architecture track: https://forms.gle/a2JnWq9A9Vkq5JXXA
Other Areas track: https://forms.gle/pHBDZzWGN4kjwRJu9

Volume 3, Issue 2, Article 100130.
Citations: 0
StreamAD: A cloud platform metrics-oriented benchmark for unsupervised online anomaly detection
Pub Date : 2023-06-01 DOI: 10.1016/j.tbench.2023.100121
Jiahui Xu , Chengxiang Lin , Fengrui Liu , Yang Wang , Wei Xiong , Zhenyu Li , Hongtao Guan , Gaogang Xie

Cloud platforms, serving as fundamental infrastructure, play a significant role in developing modern applications. In recent years, there has been growing interest among researchers in utilizing machine learning algorithms to rapidly detect and diagnose faults within complex cloud platforms, aiming to improve quality of service and optimize system performance. Online anomaly detection on cloud platform metrics is needed to provide timely fault alerts. To assist Site Reliability Engineers (SREs) in selecting suitable anomaly detection algorithms for specific use cases, we introduce a benchmark called StreamAD. This benchmark makes a three-fold contribution: (1) it encompasses eleven unsupervised algorithms with open-source code; (2) it abstracts various common operators for online anomaly detection, which enhances the efficiency of algorithm development; and (3) it provides extensive comparisons of the algorithms under different evaluation methods. With StreamAD, researchers can efficiently conduct comprehensive evaluations of new algorithms, which can further facilitate research in this area. The code of StreamAD is published at https://github.com/Fengrui-Liu/StreamAD.
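The kind of streaming operator such a benchmark abstracts can be illustrated with the simplest possible online detector: a running z-score over a sliding window. This is a generic standard-library sketch, not StreamAD's own API:

```python
from collections import deque
import math

class SlidingZScoreDetector:
    """Score each incoming point by how many standard deviations it lies
    from the mean of the last `window` points; large scores flag anomalies."""

    def __init__(self, window=50):
        self.buf = deque(maxlen=window)  # bounded history of recent values

    def score(self, x):
        if len(self.buf) >= 2:
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var)
            s = abs(x - mean) / std if std > 0 else 0.0
        else:
            s = 0.0  # not enough history yet
        self.buf.append(x)
        return s

det = SlidingZScoreDetector(window=10)
stream = [10.0] * 10 + [10.5, 9.8, 50.0]   # steady metric with a spike at the end
scores = [det.score(x) for x in stream]
print(f"score at the spike: {scores[-1]:.1f}")  # far above a typical 3-sigma threshold
```

Production detectors differ in how they model the stream (forecasting, density, trees), but they share this one-pass, constant-memory `score`-per-point shape, which is what makes a common operator abstraction possible.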

Volume 3, Issue 2, Article 100121.
Citations: 0
DPUBench: An application-driven scalable benchmark suite for comprehensive DPU evaluation
Pub Date : 2023-06-01 DOI: 10.1016/j.tbench.2023.100120
Zheng Wang , Chenxi Wang , Lei Wang

With the development of data centers, network bandwidth has rapidly increased, reaching hundreds of Gbps. However, the improvement in CPU network I/O processing performance has not kept pace with this growth in recent years, leaving CPUs increasingly burdened by network applications in data centers. To address this issue, the Data Processing Unit (DPU) has emerged as a hardware accelerator designed to offload network applications from the CPU. As a new hardware device, the DPU architecture design is still in the exploration stage. Previous DPU benchmarks are neither neutral nor comprehensive, making them unsuitable as general benchmarks. To showcase the advantages of their specific architectural features, DPU vendors tend to provide particular architecture-dependent evaluation programs. Moreover, these fail to provide comprehensive coverage and cannot adequately represent the full range of network applications. To address this gap, we propose an application-driven scalable benchmark suite called DPUBench. DPUBench classifies DPU applications into three typical scenarios: network, storage, and security. It includes a scalable benchmark framework that contains an essential Operator Set for these scenarios and End-to-end Evaluation Programs for real data center scenarios. DPUBench can easily incorporate new operators and end-to-end evaluation programs as DPUs evolve. We present the results of evaluating the NVIDIA BlueField-2 using DPUBench and provide optimization recommendations. DPUBench is publicly available from https://www.benchcouncil.org/DPUBench.

Volume 3, Issue 2, Article 100120.
Citations: 0
CoviDetector: A transfer learning-based semi supervised approach to detect Covid-19 using CXR images
Pub Date : 2023-06-01 DOI: 10.1016/j.tbench.2023.100119
Deepraj Chowdhury , Anik Das , Ajoy Dey , Soham Banerjee , Muhammed Golec , Dimitrios Kollias , Mohit Kumar , Guneet Kaur , Rupinder Kaur , Rajesh Chand Arya , Gurleen Wander , Praneet Wander , Gurpreet Singh Wander , Ajith Kumar Parlikad , Sukhpal Singh Gill , Steve Uhlig

COVID-19 was one of the deadliest and most infectious illnesses of this century. Research has been done to decrease pandemic deaths and slow the spread of the disease. COVID-19 detection studies have applied deep learning techniques to Chest X-ray (CXR) images, owing to their sensitivity in identifying pneumonic alterations. However, CXR images are not publicly available due to users’ privacy concerns, making it challenging to train a highly accurate deep learning model from scratch. Therefore, we proposed CoviDetector, a new semi-supervised approach based on transfer learning and clustering, which delivers improved performance and requires less training data. CXR images are given as input to this model, and individuals are categorised into three classes: (1) COVID-19 positive; (2) Viral pneumonia; and (3) Normal. The performance of CoviDetector has been evaluated on four different datasets, achieving over 99% accuracy on each. Additionally, we generate heatmaps utilising Grad-CAM and overlay them on the CXR images to highlight the areas that were deciding factors in detecting COVID-19. Finally, we developed an Android app to offer a user-friendly interface. We release the code, datasets and results’ scripts of CoviDetector for reproducibility purposes; they are available at: https://github.com/dasanik2001/CoviDetector
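The transfer-learning-plus-clustering idea can be sketched as follows: features extracted by a pretrained backbone (faked here as 2-D points) cluster around class centroids built from a few labeled examples, and unlabeled samples inherit the label of the nearest centroid. This is a hypothetical simplification for illustration, not CoviDetector's actual pipeline; all data and names below are invented.

```python
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical 2-D "features" from a pretrained backbone; real features
# would be high-dimensional embeddings of CXR images.
LABELED = {
    "covid":     [(0.9, 0.8), (1.0, 0.9)],
    "pneumonia": [(0.1, 0.9), (0.2, 1.0)],
    "normal":    [(0.1, 0.1), (0.0, 0.2)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def propagate_labels(unlabeled):
    """Assign each unlabeled feature vector the label of the nearest
    class centroid built from the few labeled examples."""
    centroids = {cls: centroid(pts) for cls, pts in LABELED.items()}
    return [min(centroids, key=lambda cls: dist(x, centroids[cls]))
            for x in unlabeled]

print(propagate_labels([(0.95, 0.85), (0.05, 0.15)]))  # → ['covid', 'normal']
```

This is why the approach needs little labeled data: only the centroids require labels, while the bulk of the training set can remain unlabeled.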

{"title":"CoviDetector: A transfer learning-based semi supervised approach to detect Covid-19 using CXR images","authors":"Deepraj Chowdhury ,&nbsp;Anik Das ,&nbsp;Ajoy Dey ,&nbsp;Soham Banerjee ,&nbsp;Muhammed Golec ,&nbsp;Dimitrios Kollias ,&nbsp;Mohit Kumar ,&nbsp;Guneet Kaur ,&nbsp;Rupinder Kaur ,&nbsp;Rajesh Chand Arya ,&nbsp;Gurleen Wander ,&nbsp;Praneet Wander ,&nbsp;Gurpreet Singh Wander ,&nbsp;Ajith Kumar Parlikad ,&nbsp;Sukhpal Singh Gill ,&nbsp;Steve Uhlig","doi":"10.1016/j.tbench.2023.100119","DOIUrl":"https://doi.org/10.1016/j.tbench.2023.100119","url":null,"abstract":"<div><p>COVID-19 was one of the deadliest and most infectious illnesses of this century. Research has been done to decrease pandemic deaths and slow down its spread. COVID-19 detection investigations have utilised Chest X-ray (CXR) images with deep learning techniques with its sensitivity in identifying pneumonic alterations. However, CXR images are not publicly available due to users’ privacy concerns, resulting in a challenge to train a highly accurate deep learning model from scratch. Therefore, we proposed <strong>CoviDetector</strong>, a new semi-supervised approach based on transfer learning and clustering, which displays improved performance and requires less training data. CXR images are given as input to this model, and individuals are categorised into three classes: (1) COVID-19 positive; (2) Viral pneumonia; and (3) Normal. The performance of CoviDetector has been evaluated on four different datasets, achieving over 99% accuracy on them. Additionally, we generate heatmaps utilising Grad-CAM and overlay them on the CXR images to present the highlighted areas that were deciding factors in detecting COVID-19. Finally, we developed an Android app to offer a user-friendly interface. 
We release the code, datasets and results’ scripts of CoviDetector for reproducibility purposes; they are available at: <span>https://github.com/dasanik2001/CoviDetector</span><svg><path></path></svg></p></div>","PeriodicalId":100155,"journal":{"name":"BenchCouncil Transactions on Benchmarks, Standards and Evaluations","volume":"3 2","pages":"Article 100119"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49716000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
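The Grad-CAM heatmaps mentioned in the abstract follow a standard formula: each channel's weight is the spatially averaged gradient of the class score with respect to that channel's activation map, and the heatmap is the ReLU of the weighted sum of the maps. A dependency-free sketch of that computation on toy 2x2 maps (not the paper's implementation):

```python
def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from one conv layer's activations and
    the gradients of the class score with respect to them.

    activations, gradients: lists of K channel maps, each H x W (nested lists).
    Returns an H x W heatmap: ReLU(sum_k alpha_k * A_k), where alpha_k is
    the global-average-pooled gradient of channel k.
    """
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: mean gradient over each channel's spatial positions
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    heatmap = [[0.0] * w for _ in range(h)]
    for a_k, chan in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += a_k * chan[i][j]
    # ReLU: keep only locations with a positive influence on the class score
    return [[max(v, 0.0) for v in row] for row in heatmap]

acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # → [[1.0, 0.0], [0.0, 2.0]]
```

Upsampled to the input resolution and overlaid on the CXR image, such a heatmap highlights the regions that drove the classification, which is how the paper's visualizations are produced.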
Citation count: 2
The Third BenchCouncil International Symposium on Intelligent Computers, Algorithms, and Applications (IC 2023) Call for Papers
Pub Date : 2023-06-01 DOI: 10.1016/j.tbench.2023.100123

Sponsored and organized by the International Open Benchmark Council (BenchCouncil), the IC conference aims to provide a pioneering technology map by surveying and advancing the state of the art and state of the practice in processors, systems, algorithms, and applications for machine learning, deep learning, spiking neural networks, and other AI techniques across multidisciplinary and interdisciplinary areas. IC 2023 invites manuscripts describing original work in these areas. All accepted papers will be presented at the IC 2023 conference and published by Springer CCIS (indexed by EI). The two earlier editions of the IC conference, held between 2019 and 2022, attracted many paper submissions and participants. IC 2023 will be held on December 4-6, 2023 in Sanya. The conference website is https://www.benchcouncil.org/ic2023/.

Important Dates:
Paper Submission: July 31, 2023, at 11:59 PM AoE
Notification: September 30, 2023, at 11:59 PM AoE
Final Papers Due: October 31, 2023, at 11:59 PM AoE
Conference Date: December 4-6, 2023
Submission Site: https://ic2023.hotcrp.com/

{"title":"The Third BenchCouncil International Symposium on Intelligent Computers, Algorithms, and Applications (IC 2023) Call for Papers","authors":"","doi":"10.1016/j.tbench.2023.100123","DOIUrl":"https://doi.org/10.1016/j.tbench.2023.100123","url":null,"abstract":"<div><p>Sponsored and organized by the International Open Benchmark Council (BenchCouncil), the IC conference is to provide a pioneering technology map through searching and advancing state-of-the-art and state-of-the-practice in processors, systems, algorithms, and applications for machine learning, deep learning, spiking neural network and other AI techniques across multidisciplinary and interdisciplinary areas. IC 2023 invites manuscripts describing original work in the above areas and topics. All accepted papers will be presented at the IC 2023 conference and published by Springer CCIS (Indexed by EI). The IC conferences have been successfully held for two series from 2019 to 2022 and attracted plenty of paper submissions and participants. IC 2023 will be held on December 4-6, 2023 in Sanya and invites manuscripts describing original work in processors, systems, algorithms, and applications for AI techniques across multidisciplinary and interdisciplinary areas. 
The conference website is <span>https://www.benchcouncil.org/ic2023/</span><svg><path></path></svg>.</p><p><strong>Important Dates:</strong> Paper Submission: July 31, 2023, at 11:59 PM AoE Notification: September 30, 2023, at 11:59 PM AoE Final Papers Due: October 31, 2023, at 11:59 PM AoE Conference Date: December 4-6, 2023 Submission Site: <span>https://ic2023.hotcrp.com/</span><svg><path></path></svg></p></div>","PeriodicalId":100155,"journal":{"name":"BenchCouncil Transactions on Benchmarks, Standards and Evaluations","volume":"3 2","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49732679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0
TBench (BenchCouncil Transactions on Benchmarks, Standards and Evaluations) Calls for Papers
Pub Date : 2023-06-01 DOI: 10.1016/S2772-4859(23)00048-0

BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access journal dedicated to advancing the field of benchmarks, data sets, standards, evaluations and optimizations. It is a peer-reviewed, subsidized open-access journal: the International Open Benchmark Council (BenchCouncil) pays the open-access fee, so authors do not pay any publication fee. However, at least one of the authors must register for the BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench) (https://www.benchcouncil.org/bench/) and present the work there. The journal offers fast-track publication with an average turnaround time of one month.

We invite submissions covering a wide range of topics from various disciplines, with a particular emphasis on interdisciplinary research. Whether it pertains to computers, AI, medicine, education, finance, business, psychology, or other social disciplines, all relevant contributions are welcome.

At TBench, we prioritize the reproducibility of research. We strongly encourage authors to ensure that their articles are prepared for open-source or artifact evaluation before submission. The journal website is https://www.benchcouncil.org/tbench.

{"title":"TBench (BenchCouncil Transactions on Benchmarks, Standards and Evaluations) Calls for Papers","authors":"","doi":"10.1016/S2772-4859(23)00048-0","DOIUrl":"https://doi.org/10.1016/S2772-4859(23)00048-0","url":null,"abstract":"<div><p>BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access journal dedicated to advancing the field of benchmarks, data sets, standards, evaluations and optimizations. This journal is a peer-reviewed, subsidized open-access journal where The International Open Benchmark Council (BenchCouncil) pays the open-access fee. Authors do not have to pay any open-access publication fee. However, at least one of the authors must register BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench) (<span>https://www.benchcouncil.org/bench/</span><svg><path></path></svg>) and present their work. It seeks a fast-track publication with an average turnaround time of one month.</p><p>We invite submissions covering a wide range of topics from various disciplines, with a particular emphasis on interdisciplinary research. Whether it pertains to computers, AI, medicine, education, finance, business, psychology, or other social disciplines, all relevant contributions are welcome.</p><p>At TBench, we prioritize the reproducibility of research. We strongly encourage authors to ensure that their articles are prepared for open-source or artifact evaluation before submission. 
The journal website is <span>https://www.benchcouncil.org/tbench</span><svg><path></path></svg>.</p></div>","PeriodicalId":100155,"journal":{"name":"BenchCouncil Transactions on Benchmarks, Standards and Evaluations","volume":"3 2","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49715448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0