
AI Open: Latest Publications

A survey of transformers
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.10.001
Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu

Transformers have achieved great success in many artificial intelligence fields, such as natural language processing, computer vision, and audio processing. They have therefore naturally attracted substantial interest from academic and industrial researchers. Up to the present, a great variety of Transformer variants (a.k.a. X-formers) have been proposed; however, a systematic and comprehensive literature review of these variants is still missing. In this survey, we provide a comprehensive review of various X-formers. We first briefly introduce the vanilla Transformer and then propose a new taxonomy of X-formers. Next, we introduce the various X-formers from three perspectives: architectural modification, pre-training, and applications. Finally, we outline some potential directions for future research.
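As background for the vanilla Transformer that the survey takes as its starting point, the sketch below implements the scaled dot-product attention at its core, softmax(QK^T / sqrt(d_k)) V; the array shapes and helper names are illustrative choices, not code from the survey.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy example: 4 query positions, 6 key/value positions, head dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```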

AI Open, Volume 3 (2022), Pages 111-132. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000146/pdfft?md5=802c180f3454a2e26d638dce462d3dff&pid=1-s2.0-S2666651022000146-main.pdf
Citations: 431
Debiased recommendation with neural stratification
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.005
Quanyu Dai, Zhenhua Dong, Xu Chen

Debiased recommender models have recently attracted increasing attention from the academic and industry communities. Existing models are mostly based on the inverse propensity score (IPS) technique. However, in the recommendation domain, IPS can be hard to estimate given the sparse and noisy nature of the observed user–item exposure data. To alleviate this problem, in this paper we assume that user preference is dominated by a small number of latent factors, and propose to cluster the users so that more accurate IPS can be computed over the increased exposure densities within each cluster. This method is similar in spirit to stratification models in applied statistics. However, unlike previous heuristic stratification strategies, we learn the clustering criterion by representing the users with low-rank embeddings, which are further shared with the user representations in the recommender model. Finally, we find that our model has strong connections with the previous two types of debiased recommender models. We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of the proposed method.
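As a rough illustration of the stratification idea (a simplified sketch under our own assumptions, not the authors' implementation), the code below pools exposures within user clusters to estimate one propensity per (stratum, item) cell and plugs the result into an IPS-weighted loss; the cluster assignments, clipping threshold, and all names are hypothetical.

```python
import numpy as np

def stratified_propensities(strata, exposure, eps=1e-3):
    """Estimate item propensities per user stratum instead of per user.

    strata   : (n_users,) integer cluster id of each user (e.g. from k-means
               on low-dimensional user embeddings)
    exposure : (n_users, n_items) binary matrix, 1 if user u saw item i
    """
    props = np.zeros(exposure.shape, dtype=float)
    for s in np.unique(strata):
        members = strata == s
        # Pooling exposures within a stratum gives denser, lower-variance estimates.
        props[members] = exposure[members].mean(axis=0)
    return np.clip(props, eps, 1.0)  # clip to keep IPS weights bounded

def ips_weighted_mse(pred, rating, observed, props):
    # Inverse-propensity-weighted squared error over the observed entries.
    w = observed / props
    return (w * (pred - rating) ** 2).sum() / observed.sum()

rng = np.random.default_rng(0)
n_users, n_items = 100, 20
strata = rng.integers(0, 5, size=n_users)                 # stand-in for learned clusters
exposure = (rng.random((n_users, n_items)) < 0.1).astype(float)
ratings = rng.random((n_users, n_items))
preds = rng.random((n_users, n_items))
props = stratified_propensities(strata, exposure)
print(ips_weighted_mse(preds, ratings, exposure, props))
```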

AI Open, Volume 3 (2022), Pages 213-217. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000201/pdfft?md5=1244b2c9319c988375fcebe6f3172caa&pid=1-s2.0-S2666651022000201-main.pdf
Citations: 0
BCA: Bilinear Convolutional Neural Networks and Attention Networks for legal question answering
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.002
Haiguang Zhang, Tongyue Zhang, Faxin Cao, Zhizheng Wang, Yuanyu Zhang, Yuanyuan Sun, Mark Anthony Vicente

The National Judicial Examination of China is an essential examination for selecting legal practitioners. In recent years, people have tried to use machine learning algorithms to answer its questions. With the proposal of JEC-QA (Zhong et al., 2020), the judicial examination has become a specific legal task. The examination data contain two types of questions, i.e., Knowledge-Driven questions and Case-Analysis questions. Both require complex reasoning and text comprehension, making it challenging for computers to answer judicial examination questions. In this paper we propose Bilinear Convolutional Neural Networks and Attention Networks (BCA), an improved version of the model our team proposed for the Challenge of AI in Law 2021 judicial examination task. It has two essential modules: the Knowledge-Driven Module (KDM) for local feature extraction and the Case-Analysis Module (CAM) for clarifying the semantic difference between the question stem and the options. We also add a post-processing module to correct the results in the final stage. The experimental results show that our system achieves state-of-the-art performance in the offline test of the judicial examination task.
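The abstract does not spell out the model internals, so the sketch below only illustrates the bilinear-interaction idea suggested by the name: a question-stem vector and each option vector are scored through a learned bilinear form, and a softmax over the options picks the answer. The dimensions, the weight matrix W, and the scoring function are hypothetical, not the published BCA architecture.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def bilinear_score(stem, option, W):
    # Bilinear interaction stem^T W option captures pairwise feature products
    # between the encoded question stem and a candidate option.
    return stem @ W @ option

rng = np.random.default_rng(0)
d = 16
W = rng.normal(scale=0.1, size=(d, d))   # hypothetical learned bilinear weights
stem = rng.normal(size=d)                # encoded question stem
options = rng.normal(size=(4, d))        # encoded options A-D

scores = np.array([bilinear_score(stem, opt, W) for opt in options])
probs = softmax(scores)
print("predicted option:", "ABCD"[int(np.argmax(probs))])
```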

AI Open, Volume 3 (2022), Pages 172-181. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000171/pdfft?md5=7fc8cf53d6ea6be2b3999607b407f336&pid=1-s2.0-S2666651022000171-main.pdf
Citations: 1
Hierarchical label with imbalance and attributed network structure fusion for network embedding
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.07.002
Shu Zhao, Jialin Chen, Jie Chen, Yanping Zhang, Jie Tang

Network embedding (NE) aims to learn low-dimensional vectors for nodes while preserving the network's essential properties (e.g., attributes and structure). Many methods have been proposed to learn node representations, with encouraging results. Recent research has shown that hierarchical labels have potential value for discovering latent hierarchical structures and learning more effective classification information. Nevertheless, most existing network embedding methods either focus on networks without hierarchical labels, or learn the hierarchical structure of the labels separately from the network structure. Learning node embeddings with hierarchical labels faces two challenges: (1) fusing hierarchical labels and the network is still an arduous task; (2) the data volume imbalance under different hierarchical labels is more noticeable than under flat labels. This paper proposes a Hierarchical label and Attributed Network Structure fusion model (HANS), which realizes the fusion of hierarchical labels and nodes through attributes and an attention-based fusion module. In particular, HANS designs a directed hierarchy structure encoder that models label dependencies in three directions (parent-child, child-parent, and sibling) to strengthen the co-occurrence information between labels of different frequencies and to reduce the impact of label imbalance. Experiments on real-world datasets demonstrate that the proposed method achieves significantly better performance than state-of-the-art algorithms.
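To make the attention-based fusion step concrete, here is a minimal sketch that combines a node's structure-derived embedding with its hierarchy-aware label embedding via learned attention weights; the two-view setup and the parameter names are our assumptions, not the released HANS code.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def fuse_views(views, w_att):
    """Attention-based fusion of several d-dimensional views of one node.

    views : (n_views, d), e.g. [structure embedding, hierarchical-label embedding]
    w_att : (d,) attention query vector (learned in practice, random here)
    """
    scores = views @ w_att              # one relevance score per view
    alpha = softmax(scores)             # attention weights over the views
    return alpha @ views                # weighted sum -> fused node embedding

rng = np.random.default_rng(0)
d = 32
structure_emb = rng.normal(size=d)
label_emb = rng.normal(size=d)
fused = fuse_views(np.stack([structure_emb, label_emb]), rng.normal(size=d))
print(fused.shape)  # (32,)
```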

AI Open, Volume 3 (2022), Pages 91-100. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000122/pdfft?md5=b0971b7ac0f357e13fd0e41f95f6412d&pid=1-s2.0-S2666651022000122-main.pdf
Citations: 1
Self-directed machine learning
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.06.001
Wenwu Zhu, Xin Wang, Pengtao Xie

Conventional machine learning (ML) relies heavily on manual design by machine learning experts to decide learning tasks, data, models, optimization algorithms, and evaluation metrics; this is labor-intensive and time-consuming, and such systems cannot learn autonomously like humans. In education science, self-directed learning, where human learners select learning tasks and materials on their own without requiring hands-on guidance, has been shown to be more effective than passive teacher-guided learning. Inspired by the concept of self-directed human learning, we introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML. Specifically, we design SDML as a self-directed learning process guided by self-awareness, including internal awareness and external awareness. Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection, and self evaluation metric selection through self-awareness without human guidance. Meanwhile, the learning performance of the SDML process serves as feedback to further improve self-awareness. We propose a mathematical formulation for SDML based on multi-level optimization. Furthermore, we present case studies together with potential applications of SDML, followed by a discussion of future research directions. We expect that SDML could enable machines to conduct human-like self-directed learning and provide a new perspective towards artificial general intelligence.
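The abstract mentions a multi-level optimization formulation without stating it; a generic bi-level template of this kind (our notation, not necessarily the authors' exact formulation) reads

```latex
\begin{aligned}
\min_{\phi}\;& \mathcal{L}_{\mathrm{val}}\bigl(\theta^{*}(\phi),\,\phi\bigr)\\
\text{s.t. }& \theta^{*}(\phi) \;=\; \arg\min_{\theta}\; \mathcal{L}_{\mathrm{train}}\bigl(\theta,\,\phi\bigr),
\end{aligned}
```

where the outer level adjusts the self-directed choices $\phi$ (task, data, model, optimization strategy, evaluation metric) from validation feedback and the inner level trains the model parameters $\theta$ under those choices; SDML would stack additional levels of this pattern.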

AI Open, Volume 3 (2022), Pages 58-70. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000109/pdfft?md5=5480e0d544d9f6d6307d44ca29f5d00c&pid=1-s2.0-S2666651022000109-main.pdf
Citations: 3
Deep learning for fake news detection: A comprehensive survey
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.09.001
Linmei Hu, Siqi Wei, Ziwang Zhao, Bin Wu

The information age enables people to obtain news online through various channels, yet at the same time it allows false news to spread at unprecedented speed. Fake news exerts detrimental effects because it impairs social stability and public trust, which creates increasing demand for fake news detection (FND). As deep learning (DL) has achieved tremendous success in various domains, it has also been leveraged for FND tasks, surpassing traditional machine learning based methods and yielding state-of-the-art performance. In this survey, we present a complete review and analysis of existing DL-based FND methods that focus on various features such as news content, social context, and external knowledge. We review the methods along the lines of supervised, weakly supervised, and unsupervised approaches. For each line, we systematically survey the representative methods utilizing different features. Then, we introduce several commonly used FND datasets and give a quantitative analysis of the performance of the DL-based FND methods on these datasets. Finally, we analyze the remaining limitations of current approaches and highlight some promising future directions.

AI Open, Volume 3 (2022), Pages 133-155. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000134/pdfft?md5=d2d9826705629e3762ea484a2d93d29d&pid=1-s2.0-S2666651022000134-main.pdf
Citations: 25
Human motion modeling with deep learning: A survey
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2021.12.002
Zijie Ye, Haozhe Wu, Jia Jia

The aim of human motion modeling is to understand human behaviors and to generate plausible human motion, like that of real people, under different priors. With the development of deep learning, researchers increasingly leverage data-driven methods to improve the performance of traditional motion modeling methods. In this paper, we present a comprehensive survey of recent human motion modeling research. We discuss three categories of human motion modeling research: human motion prediction, humanoid motion control, and cross-modal motion synthesis, and provide a detailed review of existing methods. Finally, we further discuss the remaining challenges in human motion modeling.

AI Open, Volume 3 (2022), Pages 35-39. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651021000309/pdfft?md5=ad9a69283a477c5f5d6b127141e48a38&pid=1-s2.0-S2666651021000309-main.pdf
Citations: 8
On the distribution alignment of propagation in graph neural networks
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.006
Qinkai Zheng, Xiao Xia, Kun Zhang, Evgeny Kharlamov, Yuxiao Dong

Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies have focused on designing different strategies to propagate information over graph structures. After systematic investigations, we observe that the propagation step in GNNs matters, but the resulting performance improvement is insensitive to the location where we apply it. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of distribution alignment: propagation over the graph actually aligns the underlying distributions of the training and test sets. These findings are instrumental in understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs.
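As a concrete instance of the propagation step studied in the paper, the sketch below applies k rounds of symmetrically normalized adjacency propagation to node features, the operation used by decoupled GNNs such as SGC or APPNP; the toy graph and helper names are ours.

```python
import numpy as np

def normalized_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN-style normalization.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def propagate(A, X, k=2):
    # k propagation steps X <- A_hat X, decoupled from any learned transformation.
    P = normalized_adjacency(A)
    for _ in range(k):
        X = P @ X
    return X

# Toy graph: 4 nodes on a path, 3-dimensional one-hot-ish features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)[:, :3]
print(propagate(A, X, k=2))
```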

AI Open, Volume 3 (2022), Pages 218-228. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000213/pdfft?md5=e78f6562530f06a112827f05883082be&pid=1-s2.0-S2666651022000213-main.pdf
Citations: 0
HSSDA: Hierarchical relation aided Semi-Supervised Domain Adaptation
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.001
Xiechao Guo, Ruiping Liu, Dandan Song

Mainstream domain adaptation (DA) methods transfer supervised source-domain knowledge to an unsupervised or semi-supervised target domain, so as to assist the classification task in the target domain. Usually the supervision contains only the class label of the object. However, when human beings recognize a new object, they not only learn the class label of the object but also relate the object to its parent class, and use this information to learn the similarities and differences between child classes. Our model utilizes hierarchical relations by making the parent-class label of labeled data (all the source-domain data and part of the target-domain data) part of the supervision that guides the prototype learning module to learn the parent-class information encoding, so that prototypes of the same parent class lie closer in the prototype space, which leads to better classification results. Inspired by this mechanism, we propose a Hierarchical relation aided Semi-Supervised Domain Adaptation (HSSDA) method, which incorporates the hierarchical relations into the Semi-Supervised Domain Adaptation (SSDA) method to improve the classification results of the model. Our model performs well on the DomainNet dataset and achieves state-of-the-art results on the semi-supervised DA problem.
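As a rough picture of how a parent-class label can act as extra supervision for prototype learning (a simplified sketch, not the HSSDA architecture), the code below adds a parent-level cross-entropy term, computed from prototypes pooled over sibling child classes, on top of the usual child-level term; the child-to-parent mapping, the distance-based logits, and the temperature are assumptions.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def proto_losses(feat, child_protos, child_to_parent, y_child, temp=1.0):
    """Child- and parent-level prototype losses for one labeled sample.

    feat            : (d,) feature of the sample
    child_protos    : (n_child, d) prototypes of the child classes
    child_to_parent : (n_child,) parent index of each child class
    y_child         : ground-truth child class of the sample
    """
    # Similarity of the sample to every child prototype (negative squared distance).
    child_logits = -np.sum((child_protos - feat) ** 2, axis=1) / temp
    child_loss = -log_softmax(child_logits)[y_child]

    # Parent prototypes: mean of the prototypes of their child classes.
    n_parent = child_to_parent.max() + 1
    parent_protos = np.stack([child_protos[child_to_parent == p].mean(axis=0)
                              for p in range(n_parent)])
    parent_logits = -np.sum((parent_protos - feat) ** 2, axis=1) / temp
    parent_loss = -log_softmax(parent_logits)[child_to_parent[y_child]]
    return child_loss + parent_loss

rng = np.random.default_rng(0)
protos = rng.normal(size=(6, 8))        # 6 child classes, 8-dimensional features
c2p = np.array([0, 0, 1, 1, 2, 2])      # two child classes per parent class
print(proto_losses(rng.normal(size=8), protos, c2p, y_child=3))
```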

AI Open, Volume 3 (2022), Pages 156-161. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S266665102200016X/pdfft?md5=acdf10fbc8ecc16b703bc63cf409d5c7&pid=1-s2.0-S266665102200016X-main.pdf
Citations: 0
CAILIE 1.0: A dataset for Challenge of AI in Law - Information Extraction V1.0
Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.12.002
Yu Cao, Yuanyuan Sun, Ce Xu, Chunnan Li, Jinming Du, Hongfei Lin
AI Open, Volume 3 (2022), Pages 208-212. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2666651022000237/pdfft?md5=0d34de7b220463b0502bcbc2ad2a5225&pid=1-s2.0-S2666651022000237-main.pdf
Citations: 1