
Latest publications: 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)

An Adaptive Cross-Layer Sampling-Based Node Embedding for Multiplex Networks
Nianwen Ning, Chenguang Song, Pengpeng Zhou, Yunlei Zhang, Bin Wu
Network embedding aims to learn a latent representation of each node that preserves the structure information. Many real-world networks have multiple dimensions of nodes and multiple types of relations, so it is more appropriate to represent them as multiplex networks. A multiplex network is formed by a set of nodes connected in different layers by links indicating interactions of different types. However, existing random-walk-based multiplex network embedding algorithms suffer from sampling bias and imbalanced relation types, leading to poor performance on downstream tasks. In this paper, we propose a node embedding method based on adaptive cross-layer forest fire sampling (FFS) for multiplex networks (FFME). We first focus on the sampling strategies of FFS to address the bias issue of random walks. We utilize a fixed-length queue to record previously visited layers, which balances the edge distribution over different layers in the sampled node sequences. In addition, to adaptively sample each node's context, we also propose a node metric called the Neighbors Partition Coefficient (NPC). The generation of node sequences is supervised by NPC for adaptive cross-layer sampling. Experiments on real-world networks from diverse fields show that our method outperforms state-of-the-art methods in application tasks such as cross-domain link prediction and shared community structure detection.
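The fixed-length-queue idea can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration (not the authors' code): the toy multiplex graph, layer names, and weighting rule are assumptions; it only shows how a deque of recently visited layers can bias the next layer choice toward under-sampled layers.

```python
# Minimal sketch of cross-layer sampling balanced by a fixed-length layer queue.
import random
from collections import deque, Counter

def cross_layer_walk(neighbors, start, layers, walk_len=20, queue_len=5):
    """neighbors[(node, layer)] -> list of neighbor nodes in that layer."""
    recent = deque(maxlen=queue_len)      # fixed-length memory of visited layers
    walk, node = [start], start
    for _ in range(walk_len):
        counts = Counter(recent)
        # favour layers that appear least often in the recent queue
        weights = [1.0 / (1 + counts[l]) for l in layers]
        layer = random.choices(layers, weights=weights, k=1)[0]
        cands = neighbors.get((node, layer), [])
        if not cands:
            continue
        node = random.choice(cands)
        walk.append(node)
        recent.append(layer)
    return walk

# toy two-layer multiplex network
nbrs = {("a", "L1"): ["b"], ("b", "L1"): ["a", "c"], ("c", "L1"): ["b"],
        ("a", "L2"): ["c"], ("c", "L2"): ["a"]}
print(cross_layer_walk(nbrs, "a", ["L1", "L2"]))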
{"title":"An Adaptive Cross-Layer Sampling-Based Node Embedding for Multiplex Networks","authors":"Nianwen Ning, Chenguang Song, Pengpeng Zhou, Yunlei Zhang, Bin Wu","doi":"10.1109/ICTAI.2019.00216","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00216","url":null,"abstract":"Network embedding aims to learn a latent representation of each node which preserves the structure information. Many real-world networks have multiple dimensions of nodes and multiple types of relations. Therefore, it is more appropriate to represent such kind of networks as multiplex networks. A multiplex network is formed by a set of nodes connected in different layers by links indicating interactions of different types. However, existing random walk based multiplex networks embedding algorithms have problems with sampling bias and imbalanced relation types, thus leading the poor performance in the downstream tasks. In this paper, we propose a node embedding method based on adaptive cross-layer forest fire sampling (FFS) for multiplex networks (FFME). We first focus on the sampling strategies of FFS to address the bias issue of random walk. We utilize a fixed-length queue to record previously visited layers, which can balance the edge distribution over different layers in sampled node sequences. In addition, to adaptively sample node's context, we also propose a metric for node called Neighbors Partition Coefficient (N P C ). The generation process of node sequence is supervised by NPC for adaptive cross-layer sampling. Experiments on real-world networks in diverse fields show that our method outperforms the state-of-the-art methods in application tasks such as cross-domain link prediction and shared community structure detection.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"83 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116408188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Sentence-Level Semantic Features Guided Adversarial Network for Zhuang Language Part-of-Speech Tagging
Zhixin Li, Yaru Sun, Suqin Tang, Canlong Zhang, Huifang Ma
The intelligent information processing of the standard Zhuang language, spoken mainly in Southern China, is presently in its infancy and lacks a well-defined language corpus and automatic part-of-speech tagging methods. Therefore, this study proposes an adversarial part-of-speech tagging method based on reinforcement learning, which addresses the lack of a language corpus, the time-consuming and laborious manual marking, and the low performance of machine marking. Firstly, we construct a markup dictionary based on the grammatical characteristics of standard Zhuang and the Penn Chinese Treebank. Secondly, dependency syntax analysis is applied to construct the semantic feature vectors of sentences; long short-term memory is adopted as the policy network architecture to enhance the available information through recurrent memory; and a conditional random field is employed as the discriminant network to perform label inference with global normalization. Finally, we use reinforcement learning as the model framework, take target parts of speech as the feedback from the environment, and obtain the optimal policy through adversarial learning. The results show that the combination of reinforcement learning and an adversarial network alleviates the model's dependence on the training corpus to some extent and can quickly and effectively expand the scale of the annotation dictionary for the Zhuang language, thereby obtaining better labeling results.
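As a rough illustration of the policy-network component only, the sketch below sets up an LSTM that emits per-token tag logits and a REINFORCE-style update. The vocabulary size, tag set, and reward signal are invented, and the CRF discriminator and adversarial loop described in the paper are omitted.

```python
# Minimal sketch of an LSTM tag policy with a REINFORCE-style update (PyTorch).
import torch
import torch.nn as nn

class TagPolicy(nn.Module):
    def __init__(self, vocab=1000, tags=30, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tags)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                     # per-token tag logits

policy = TagPolicy()
tokens = torch.randint(0, 1000, (2, 7))
dist = torch.distributions.Categorical(logits=policy(tokens))
tags = dist.sample()                           # sampled tag sequence (the "action")
reward = torch.rand(2, 7)                      # placeholder environment feedback
loss = -(dist.log_prob(tags) * reward).mean()  # REINFORCE-style policy gradient
loss.backward()
```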
{"title":"Sentence-Level Semantic Features Guided Adversarial Network for Zhuang Language Part-of-Speech Tagging","authors":"Zhixin Li, Yaru Sun, Suqin Tang, Canlong Zhang, Huifang Ma","doi":"10.1109/ICTAI.2019.00045","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00045","url":null,"abstract":"The intelligent information processing of the standard Zhuang language spoken mainly in Southern China is presently in its infancy, and lacks a well-defined language corpus and automatic part-of-speech tagging methods. Therefore, this study proposes an adversarial part-of-speech tagging method based on reinforcement learning, which solves the problems associated with a lack of a language corpus, time-consuming laborious manual marking, and the low performance of machine marking. Firstly, we construct a markup dictionary based on the grammatical characteristics of standard Zhuang and the Penn Chinese Treebank. Secondly, a dependency syntax analysis is applied for constructing the semantic information feature vectors of sentences, and long short-term memory is adopted as the policy network architecture to enhance available information using recurrent memory, and a conditional random field is employed as the discriminant network to perform label inference with global normalization. Finally, we use reinforcement learning as the model framework, target parts of speech as the feedback of the environment, and then obtain the optimal policy through adversarial learning. The results show that the combination of reinforcement learning and adversarial network alleviates the dependence of the model on the training corpus to some extent, and can quickly and effectively expand the scale of the annotation dictionary for the Zhuang language, thereby obtaining better labeling results.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126572813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Variational Policy Chaining for Lifelong Reinforcement Learning
Christopher Doyle, Maxime Guériau, Ivana Dusparic
With increasing applications of reinforcement learning to real-life problems, it is becoming essential that agents are able to update their knowledge continually. Lifelong learning approaches aim to enable agents to retain the knowledge they learn and to selectively transfer it to new tasks. Recent techniques for lifelong reinforcement learning have shown great success in getting an agent to generalise over several tasks. However, scalability becomes an issue when agents learn numerous tasks, as each task's information must be remembered. To address this issue, this paper proposes Variational Policy Chaining (VPC), which enables a reinforcement learning agent to generalise effectively and scalably under continuous task updates, without storing multiple historic experiences. VPC uses a Kullback-Leibler divergence based method to isolate the most common pieces of knowledge and condenses the important knowledge into a single policy chain. We evaluate VPC in a GridWorld environment and compare it to vanilla policy gradient methods, showing that VPC's ability to reuse knowledge from previously encountered tasks reduces learning time on new tasks by up to 50%.
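A minimal sketch of the KL-based condensation idea, not the paper's algorithm: for categorical action distributions, the arithmetic mean of the task policies is the single distribution that minimizes the summed KL(p_task || p_chain), so it can stand in as a toy "policy chain". The state, action set, and policies below are illustrative.

```python
# Minimal sketch: condense per-task categorical policies into one chain policy.
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

task_policies = np.array([[0.7, 0.2, 0.1],    # action distributions for one state
                          [0.6, 0.3, 0.1],
                          [0.2, 0.7, 0.1]])
chain = task_policies.mean(axis=0)            # condensed "policy chain"
print(chain, [round(kl(p, chain), 3) for p in task_policies])
```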
{"title":"Variational Policy Chaining for Lifelong Reinforcement Learning","authors":"Christopher Doyle, Maxime Guériau, Ivana Dusparic","doi":"10.1109/ICTAI.2019.00222","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00222","url":null,"abstract":"With increasing applications of reinforcement learning in real life problems, it is becoming essential that agents are able to update their knowledge continually. Lifelong learning approaches aim to enable agents to retain the knowledge they learn and to selectively transfer knowledge to new tasks. Recent techniques for lifelong reinforcement learning have shown great success in getting an agent to generalise over several tasks. However, scalability becomes an issue when agents learn numerous tasks, as each task's information must be remembered. To address this issue, this paper proposes the approach of Variational Policy Chaining (VPC) which enables a reinforcement learning agent to generalise effectively in a scalable manner when presented with continuous task updates, without storing multiple historic experiences. VPC uses Kullback-Leibler divergence based method to isolate the most common pieces of knowledge, and condenses the important knowledge into a single policy chain. We evaluate VPC in a GridWorld environment and compare it to vanilla policy gradient methods, showing that VPC's ability to reuse knowledge from previously encountered tasks reduces learning time in new tasks by up to 50%.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126658562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
New Approaches to the Identification of Dependencies between Requirements
Ralph Samer, Martin Stettinger, Müslüm Atas, A. Felfernig, G. Ruhe, Gouri Deshpande
There is a high demand for intelligent decision support systems which assist stakeholders in requirements engineering tasks. Examples of such tasks are the elicitation of requirements, release planning, and the identification of requirement-dependencies. In particular, the detection of dependencies between requirements is a major challenge for stakeholders. In this paper, we present two content-based recommendation approaches which automatically detect and recommend such dependencies. The first approach identifies potential dependencies between requirements which are defined on a textual level by exploiting document classification techniques (based on Linear SVM, Naive Bayes, Random Forest, and k-Nearest Neighbors). This approach uses two different feature types (TF-IDF features vs. probabilistic features). The second recommendation approach is based on Latent Semantic Analysis and defines the baseline for the evaluation with a real-world data set. The evaluation shows that the recommendation approach based on Random Forest using probabilistic features achieves the best prediction quality of all approaches (F1: 0.89).
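The first, classifier-based approach can be sketched with standard scikit-learn components. The snippet below is a toy illustration in which a requirement pair is represented by concatenating the two texts; the example requirements and labels are invented, and the probabilistic-feature and LSA variants are not shown.

```python
# Minimal sketch: TF-IDF features + Random Forest on concatenated requirement pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

pairs = ["The system shall encrypt data. || Encryption keys shall rotate monthly.",
         "Users can export reports. || The UI shall support dark mode."]
labels = [1, 0]   # 1 = dependent, 0 = independent (illustrative)

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
clf.fit(pairs, labels)
print(clf.predict(["Data at rest shall be encrypted. || Key rotation is required."]))
```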
{"title":"New Approaches to the Identification of Dependencies between Requirements","authors":"Ralph Samer, Martin Stettinger, Müslüm Atas, A. Felfernig, G. Ruhe, Gouri Deshpande","doi":"10.1109/ICTAI.2019.00-91","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00-91","url":null,"abstract":"There is a high demand for intelligent decision support systems which assist stakeholders in requirements engineering tasks. Examples of such tasks are the elicitation of requirements, release planning, and the identification of requirement-dependencies. In particular, the detection of dependencies between requirements is a major challenge for stakeholders. In this paper, we present two content-based recommendation approaches which automatically detect and recommend such dependencies. The first approach identifies potential dependencies between requirements which are defined on a textual level by exploiting document classification techniques (based on Linear SVM, Naive Bayes, Random Forest, and k-Nearest Neighbors). This approach uses two different feature types (TF-IDF features vs. probabilistic features). The second recommendation approach is based on Latent Semantic Analysis and defines the baseline for the evaluation with a real-world data set. The evaluation shows that the recommendation approach based on Random Forest using probabilistic features achieves the best prediction quality of all approaches (F1: 0.89).","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125841961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Multi-granularity Position-Aware Convolutional Memory Network for Aspect-Based Sentiment Analysis
Yuanyuan Pan, Jun Gan, Xiangying Ran, Chong-Jun Wang
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis problem that has received increasing attention in recent years. Convolutional neural networks and their variants have recently shown potential for tackling the problem. Building upon this line of research, we propose a novel architecture named Multi-Granularity Position-Aware Convolutional Memory Network (MP-CMN) for ABSA. MP-CMN utilizes multiple convolutional layers to extract features of different granularities to build convolutional memories, and then incorporates aspect and position information into the convolutional memory network via an attention mechanism. To make the mechanism of our model clear, we also provide visualizations and case studies. Experimental results on the standard SemEval 2014 datasets demonstrate the effectiveness of the proposed model.
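A minimal sketch of the multi-granularity idea, assuming toy tensor sizes and omitting the position features described in the paper: parallel convolutions with different kernel widths build the memories, and an aspect vector attends over the concatenated memory.

```python
# Minimal sketch: multi-granularity convolutional memories with aspect attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGranularityMemory(nn.Module):
    def __init__(self, dim=100, kernels=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2) for k in kernels)

    def forward(self, x, aspect):              # x: (B, T, D), aspect: (B, D)
        mems = [conv(x.transpose(1, 2)).transpose(1, 2) for conv in self.convs]
        mem = torch.cat(mems, dim=1)           # concatenated memories: (B, sum_T, D)
        scores = torch.bmm(mem, aspect.unsqueeze(2)).squeeze(2)
        attn = F.softmax(scores, dim=1)
        return torch.bmm(attn.unsqueeze(1), mem).squeeze(1)   # aspect-aware summary

x, aspect = torch.randn(2, 12, 100), torch.randn(2, 100)
print(MultiGranularityMemory()(x, aspect).shape)   # torch.Size([2, 100])
```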
{"title":"Multi-granularity Position-Aware Convolutional Memory Network for Aspect-Based Sentiment Analysis","authors":"Yuanyuan Pan, Jun Gan, Xiangying Ran, Chong-Jun Wang","doi":"10.1109/ICTAI.2019.00106","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00106","url":null,"abstract":"Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis problem, which has received more and more attention in recent years. Convolutional Neural Networks and their variants have shown potentialities for tackling the problem recently. Building upon this line of research, we propose a novel architecture named Multi-Granularity Position-Aware Convolutional Memory Network (MP-CMN) for ABSA in this paper. MP-CMN utilizes multiple convolutional layers to extract features of different granularities to build the convolutional memories, and then incorporates aspect information and position information into convolutional memory network via attention mechanism. To make the mechanism of our model clear, we also make some visualization and case studies. Experiment results on standard SemEval 2014 datasets demonstrate the effectiveness of the proposed model.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126487878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-agent Path Planning with Heterogeneous Cooperation
Keisuke Otaki, Satoshi Koide, K. Hayakawa, Ayano Okoso, Tomoki Nishi
Cooperation among different vehicles is a promising concept for Mobility as a Service (MaaS). A principal problem in MaaS is optimizing vehicle routes to reduce the total travel cost through cooperation. For example, platooning among large trucks can reduce fuel cost because it decreases air resistance. Traditional platooning models, however, cannot capture cooperation among different types of vehicles because they assume homogeneous vehicle types. We therefore propose a model that permits heterogeneous cooperation. Target applications include a logistics scenario in which a truck for long-distance delivery also carries small self-driving vehicles for last-mile delivery. For this purpose, we formalize a new route optimization problem with heterogeneous cooperation and provide an integer programming (IP) formulation as an exact solver. We evaluate our formulation through numerical experiments on synthetic and real graphs, and validate the concept of heterogeneous cooperation for MaaS with examples.
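To make the IP flavour concrete, here is a toy sketch (not the paper's formulation) using PuLP: each of two heterogeneous vehicles picks one candidate route, and a linearized AND variable grants a cooperation discount when both chosen routes share a segment. The route costs and the discount value are invented.

```python
# Minimal sketch: route choice IP with a linearized cooperation discount (PuLP).
import pulp

prob = pulp.LpProblem("coop_routing", pulp.LpMinimize)
# x[(v, r)] = 1 if vehicle v uses route r
x = {(v, r): pulp.LpVariable(f"x_{v}_{r}", cat="Binary")
     for v in ("truck", "pod") for r in ("A", "B")}
cost = {("truck", "A"): 10, ("truck", "B"): 12, ("pod", "A"): 4, ("pod", "B"): 3}
share = pulp.LpVariable("share_A", cat="Binary")            # both vehicles on route A

prob += pulp.lpSum(cost[k] * x[k] for k in x) - 3 * share   # 3 = cooperation saving
for v in ("truck", "pod"):
    prob += x[(v, "A")] + x[(v, "B")] == 1                  # pick exactly one route
# linearized AND: share = x_truck_A AND x_pod_A
prob += share <= x[("truck", "A")]
prob += share <= x[("pod", "A")]
prob += share >= x[("truck", "A")] + x[("pod", "A")] - 1

prob.solve()
print({k: v.value() for k, v in x.items()}, share.value())
```

In this toy instance the discount makes it cheaper for both vehicles to share route A than to take their individually cheapest routes.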
{"title":"Multi-agent Path Planning with Heterogeneous Cooperation","authors":"Keisuke Otaki, Satoshi Koide, K. Hayakawa, Ayano Okoso, Tomoki Nishi","doi":"10.1109/ICTAI.2019.00022","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00022","url":null,"abstract":"Cooperation among different vehicles is a promising concept for Mobility as a Service (MaaS). A principal problem in MaaS is optimizing the vehicle routes to reduce the total travel cost with cooperation. For example, we know that platooning among large trucks could reduce the fuel cost because it decreases the air resistance. Traditional platoons, however, cannot model cooperation among different types of vehicles because the model assumes the homogeneity of vehicle types. We then propose a model that permits heterogeneous cooperation. Targets of our model include a logistic scenario, where a truck for the long-distance delivery also carries small self-driving vehicles for the last mile delivery. For those purposes, we formalize a new route optimization problem with heterogeneous cooperation, and provide its integer programming (IP) formulation as an exact solver. We evaluate our formulation through numerical experiments using synthetic and real graphs. We also validate our concept of heterogeneous cooperation for MaaS with examples.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127735118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Recovering Extremely Degraded Faces by Joint Super-Resolution and Facial Composite
Xiu Li, Guichun Duan, Zhouxia Wang, Jimmy S. J. Ren, Yongbing Zhang, Jiawei Zhang, Kaixiang Song
In the past few years, we have witnessed rapid advances in face super-resolution from very low resolution (VLR) images. However, most previous studies address the problem without explicitly considering the impact of severe real-life image degradation (e.g., blur and noise). We show that robustly recovering details from such VLR images is a task beyond the ability of current state-of-the-art methods. In this paper, we borrow ideas from "facial composite" and propose an alternative approach to tackle this problem. We endow the degraded VLR images with additional cues by integrating existing face components from multiple reference images into a novel learning pipeline with both low-level and high-level semantic loss functions, as well as a specialized adversarial training scheme. We show that our method effectively and robustly restores relevant facial details from extremely degraded 16x16 images. We also test our approach on real-life images, where it performs favorably against previous methods.
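As a rough sketch of combining a low-level fidelity term with an adversarial term, assuming hypothetical generator outputs and discriminator scores; the facial-composite and reference-component parts of the pipeline are not modelled here, and the loss weights are arbitrary.

```python
# Minimal sketch: generator objective = low-level L1 term + adversarial term.
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, disc_score, w_pix=1.0, w_adv=1e-3):
    pixel = F.l1_loss(sr, hr)                              # low-level fidelity
    adversarial = F.binary_cross_entropy_with_logits(
        disc_score, torch.ones_like(disc_score))           # fool the discriminator
    return w_pix * pixel + w_adv * adversarial

sr, hr = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(generator_loss(sr, hr, torch.randn(2, 1)).item())
```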
{"title":"Recovering Extremely Degraded Faces by Joint Super-Resolution and Facial Composite","authors":"Xiu Li, Guichun Duan, Zhouxia Wang, Jimmy S. J. Ren, Yongbing Zhang, Jiawei Zhang, Kaixiang Song","doi":"10.1109/ICTAI.2019.00079","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00079","url":null,"abstract":"In the past a few years, we witnessed rapid advancement in face super-resolution from very low resolution(VLR) images. However, most of the previous studies focus on solving such problem without explicitly considering the impact of severe real-life image degradation (e.g. blur and noise). We can show that robustly recover details from VLR images is a task beyond the ability of current state-of-the-art method. In this paper, we borrow ideas from \"facial composite\" and propose an alternative approach to tackle this problem. We endow the degraded VLR images with additional cues by integrating existing face components from multiple reference images into a novel learning pipeline with both low level and high level semantic loss function as well as a specialized adversarial based training scheme. We show that our method is able to effectively and robustly restore relevant facial details from 16x16 images with extreme degradation. We also tested our approach against real-life images and our method performs favorably against previous methods.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128787724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
An Advanced Harmony Search Algorithm Based on Harmony Anchoring and Reverse Learning
Lin Liu, D. Shi, Dansong Cheng, Maysam Orouskhani
In this paper, we propose a new and effective multi-objective optimization algorithm based on a modified harmony search. The proposed method employs reverse learning in the harmony vector updating equation in order to enhance the global search ability. Moreover, it adopts a harmony anchoring scheme so that unnecessary exploration is avoided. Experimental studies carried out on eight benchmark problems show quite satisfactory results and indicate higher performance of the proposed algorithm compared with traditional multi-objective optimization algorithms. Finally, the algorithm is applied to the image segmentation problem.
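A minimal single-objective sketch of harmony search in which the improvisation step occasionally takes the "reverse" (opposition) of a memorized value; the parameters are illustrative, and the paper's anchoring scheme and multi-objective handling are omitted.

```python
# Minimal sketch: harmony search with an opposition-based (reverse-learning) move.
import random

def harmony_search(f, lo, hi, dim=2, hms=10, iters=200, hmcr=0.9, par=0.3):
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                v = random.choice(memory)[d]      # pick from harmony memory
                if random.random() < par:
                    v = lo + hi - v               # reverse (opposition) of the value
            else:
                v = random.uniform(lo, hi)        # random improvisation
            new.append(v)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                   # replace the worst harmony
    return min(memory, key=f)

print(harmony_search(lambda x: sum(v * v for v in x), -5.0, 5.0))
```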
{"title":"An Advanced Harmony Search Algorithm Based on Harmony Anchoring and Reverse Learning","authors":"Lin Liu, D. Shi, Dansong Cheng, Maysam Orouskhani","doi":"10.1109/ICTAI.2019.00255","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00255","url":null,"abstract":"In this paper, we propose a new and effective multi-objective optimization algorithm based on a modified harmony search. The proposed method employs reverse learning in the harmony vector updating equation in order to enhance the global searching ability. Moreover, it adopts a harmony anchoring scheme so that unnecessary exploration is avoided. Experimental studies carried on eight benchmark problems show quite satisfactory results and indicate the higher performance of the proposed algorithm in comparison with traditional multi-objective optimization algorithms. Finally, it has been applied to solve the image segmentation problem.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132375785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
TFPN: Twin Feature Pyramid Networks for Object Detection
Yi Liang, Changjian Wang, Fangzhao Li, Yuxing Peng, Q. Lv, Yuan Yuan, Zhen Huang
FPN (Feature Pyramid Networks) is one of the most popular object detection networks; it improves small object detection by enhancing shallow features. However, limited attention has been paid to improving large object detection via deeper feature enhancement. One existing approach merges the feature maps of different layers into a new feature map for object detection, but this can increase noise and lose information. Another approach adds a bottom-up structure after the feature pyramid of FPN, which superimposes information from shallow layers onto the deep feature maps but weakens FPN's strength in detecting small objects. To address these challenges, this paper proposes TFPN (Twin Feature Pyramid Networks), which consists of (1) FPN+, a bottom-up structure that improves large object detection; (2) TPS, a Twin Pyramid Structure that improves medium object detection; and (3) an integration of these two with FPN that significantly improves the detection accuracy of large and medium objects while maintaining FPN's advantage in small object detection. Extensive experiments on the MS COCO object detection datasets and the BDD100K automatic driving dataset demonstrate that TFPN significantly improves over existing models, achieving up to 2.2 points higher detection accuracy (e.g., 36.3 for FPN vs. 38.5 for TFPN on COCO Val-17). Based on ResNet-50, our method obtains the same accuracy as FPN with ResNet-101 while needing fewer parameters.
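A minimal sketch (hypothetical channel count and feature-map sizes) of the general bottom-up augmentation idea that a structure like FPN+ builds on: after the top-down pyramid, finer maps are progressively downsampled and added back into coarser levels so that deep features also receive re-aggregated shallow information. This is an illustration, not the paper's architecture.

```python
# Minimal sketch: bottom-up pass over pyramid levels after the top-down FPN stage.
import torch
import torch.nn as nn

class BottomUpAugment(nn.Module):
    def __init__(self, ch=256, levels=3):
        super().__init__()
        self.down = nn.ModuleList(nn.Conv2d(ch, ch, 3, stride=2, padding=1)
                                  for _ in range(levels - 1))

    def forward(self, pyramid):                 # pyramid[0] is the finest level
        outs = [pyramid[0]]
        for i, conv in enumerate(self.down):
            outs.append(pyramid[i + 1] + conv(outs[-1]))   # add downsampled finer map
        return outs

p = [torch.randn(1, 256, 64, 64), torch.randn(1, 256, 32, 32), torch.randn(1, 256, 16, 16)]
print([o.shape for o in BottomUpAugment()(p)])
```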
{"title":"TFPN: Twin Feature Pyramid Networks for Object Detection","authors":"Yi Liang, Changjian Wang, Fangzhao Li, Yuxing Peng, Q. Lv, Yuan Yuan, Zhen Huang","doi":"10.1109/ICTAI.2019.00251","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00251","url":null,"abstract":"FPN (Feature Pyramid Networks) is one of the most popular object detection networks, which can improve small object detection by enhancing shallow features. However, limited attention has been paid to the improvement of large object detection via deeper feature enhancement. One existing approach merges the feature maps of different layers into a new feature map for object detection, but can lead to increased noise and loss of information. The other approach adds a bottom-up structure after the feature pyramid of FPN, which superimposes the information from shallow layers into the deep feature map but weakens the strength of FPN in detecting small objects. To address these challenges, this paper proposes TFPN (Twin Feature Pyramid Networks), which consists of (1) FPN+, a bottom-up structure that improves large object detection; (2) TPS, a Twin Pyramid Structure that improves medium object detection; and (3) innovative integration of these two with FPN, which can significantly improve the detection accuracy of large and medium objects while maintaining the advantage of FPN in small object detection. Extensive experiments using the MSCOCO object detection datasets and the BDD100K automatic driving dataset demonstrate that TFPN significantly improves over existing models, achieving up to 2.2 improvement in detection accuracy (e.g., 36.3 for FPN vs. 38.5 for TFPN on COCO Val-17). Our method can obtain the same accuracy as FPN with ResNet-101 based on ResNet-50 and needs fewer parameters.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132402450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Shallow Deep Learning: Embedding Verbatim K-Means in Deep Neural Networks
Len Du
In this paper we show how to implement a deep neural network that is strictly equivalent (up to floating-point errors) to the verbatim (batch) k-means algorithm, i.e. Lloyd's algorithm, when trained with gradient descent. Most interestingly, doing so shows that the k-means algorithm, a staple of "conventional" or "shallow" machine learning, can actually be seen as a special case of deep learning, contrary to the general perception that deep learning is a subset of machine learning. Doing so also introduces yet another unsupervised learning technique into the arsenal of deep learning, one that happens to be an example of an interpretable deep neural network as well. Finally, we show how to utilize powerful deep learning infrastructures with very little adaptation effort.
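A minimal sketch of the general idea, not the paper's construction: the layer's weights are the centroids, the forward pass returns squared distances to each centroid, and the loss is the distance to the nearest one. The strict equivalence to Lloyd's algorithm claimed in the paper requires a specific batch and learning-rate setup that is not reproduced here; plain SGD below only approximates it.

```python
# Minimal sketch: k-means clustering expressed as a one-layer network (PyTorch).
import torch
import torch.nn as nn

class KMeansLayer(nn.Module):
    def __init__(self, k=3, dim=2):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(k, dim))  # weights = centroids

    def forward(self, x):                        # x: (N, dim)
        return torch.cdist(x, self.centroids) ** 2   # squared distances (N, k)

x = torch.randn(300, 2)
model = KMeansLayer()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = model(x).min(dim=1).values.mean()     # distance to the assigned centroid
    loss.backward()
    opt.step()
print(model.centroids.data)
```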
{"title":"Shallow Deep Learning: Embedding Verbatim K-Means in Deep Neural Networks","authors":"Len Du","doi":"10.1109/ICTAI.2019.00035","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00035","url":null,"abstract":"In this paper we show how to implement a deep neural network that is strictly equivalent (sans floating-point errors) to the verbatim (batch) k-means algorithm or Lloyd's algorithm, when trained with gradient descent. Most interestingly, doing so shows that the k-means algorithm, a staple of \"conventional'\" or \"shallow'\" machine learning, can actually be seen as a special case of deep learning, contrary to the general perception that deep learning is a subset of machine learning. Doing so also automatically introduces yet another unsupervised learning technique into the arsenal of deep learning, which happens to be an example of interpretable deep neural networks as well. Finally, we also show how to utilize the powerful deep learning infrastructures with very little extra effort for adaptation.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114931452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1