
2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI): Latest Publications

Random Walk-Based Top-k Tag Generation in Bipartite Networks of Entity-Term Type
Mingxi Zhang, Guanying Su, Wei Wang
Tag generation aims to find relevant tags for a given entity and has numerous applications, such as classification, information retrieval, and recommender systems. In practice, the data of real applications is sparse and lacks sufficient descriptions of entities, which can lead to incomplete results. Random walk with restart (RWR) can uncover hidden relationships between nodes by exploiting indirect connections. However, traditional RWR computation operates on the whole structure of the given network and maintains a matrix storing all relevances between nodes, so efficiency becomes a problem as the network grows large. In this paper, we propose a top-k tag generation algorithm, DRWR, for efficiently generating tags from an entity-term network. Terms are treated as candidate tags, and the most relevant terms are selected as the tags for a given entity. The relevance computation between an entity and terms is divided into two stages: an off-line stage and an on-line stage. In the off-line stage, relevances between terms are computed over a term-term network built from the whole structure of the entity-term network. In the on-line stage, relevances between the entity and each term are computed from the relevances between terms. To support fast on-line query processing, we develop a pruning algorithm that skips operations on term-term relevances smaller than a threshold. Extensive experiments on real datasets demonstrate the efficiency and effectiveness of the proposed approach.
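The abstract does not give DRWR's equations, so the following is only a minimal sketch of its two ingredients, random-walk-with-restart relevance and threshold pruning, on a toy entity-term bipartite network. The matrix layout, restart probability, iteration count, and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def rwr_relevance(adj, seed, restart=0.15, iters=50):
    """Random walk with restart: relevance of every node to `seed`.
    adj is a column-normalized (n, n) transition matrix."""
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1 - restart) * adj @ r + restart * e
    return r

# Toy entity-term bipartite network: 2 entities x 3 terms.
B = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
# Full adjacency over 5 nodes (entities first, then terms).
A = np.block([[np.zeros((2, 2)), B],
              [B.T, np.zeros((3, 3))]])
A /= A.sum(axis=0, keepdims=True)              # column-normalize

rel = rwr_relevance(A, seed=0)                 # relevance to entity 0
term_rel = rel[2:]                             # scores of the 3 terms
threshold = 0.05                               # pruning: skip weak relevances
candidates = [t for t in range(3) if term_rel[t] >= threshold]
top_k = sorted(candidates, key=lambda t: -term_rel[t])[:2]
print(top_k)                                   # indices of the top-2 tags
```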
Citations: 1
Improving Prediction Fairness via Model Ensemble
Dheeraj Bhaskaruni, Hui Hu, Chao Lan
Fair machine learning is a topical problem. It studies how to mitigate unethical bias against minority groups in model prediction. A promising solution is ensemble learning: Nina et al. [1] first argued that one can obtain a fair model by bagging a set of standard models. However, they presented no empirical evidence and did not discuss effective ensemble strategies for fair learning. In this paper, we propose a new ensemble strategy for fair learning. It adopts the AdaBoost framework, but unlike AdaBoost, which upweights mispredicted instances, it upweights unfairly predicted instances, which we identify using a variant of Luong's k-NN based situation testing method [2]. Through experiments on two real-world datasets, we show that our proposed strategy achieves higher fairness than the bagging strategy discussed by Nina et al. and several baseline methods. Our results also suggest that standard ensemble strategies may not be sufficient for improving fairness.
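A minimal sketch of the strategy described here, assuming binary labels: an AdaBoost-style loop that doubles the weight of instances flagged as unfairly predicted. The situation test below is a heavily simplified stand-in for Luong's k-NN method [2] (it flags a protected-group instance whose prediction disagrees with the majority prediction among its nearest non-protected neighbours); the weight multiplier, tree depth, and synthetic data are illustrative choices only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def unfair_flags(X, y_pred, protected, k=5):
    """Simplified situation test: flag a protected-group instance whose
    prediction differs from the majority prediction of its k nearest
    neighbours in the non-protected group."""
    other = np.where(~protected)[0]
    nn = NearestNeighbors(n_neighbors=k).fit(X[other])
    idx = np.where(protected)[0]
    _, nbrs = nn.kneighbors(X[idx])
    majority = (y_pred[other[nbrs]].mean(axis=1) >= 0.5).astype(int)
    flags = np.zeros(len(X), dtype=bool)
    flags[idx] = y_pred[idx] != majority
    return flags

def fair_boost(X, y, protected, rounds=10):
    """AdaBoost-style loop that upweights unfairly predicted instances
    instead of mispredicted ones."""
    w = np.full(len(X), 1.0 / len(X))
    models = []
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        w[unfair_flags(X, pred, protected)] *= 2.0   # upweight unfair cases
        w /= w.sum()
        models.append(clf)
    return models

# Synthetic usage: 100 instances, 4 features, binary label, ~30% protected.
rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = rng.integers(0, 2, 100)
protected = rng.random(100) < 0.3
ensemble = fair_boost(X, y, protected)
print(len(ensemble), "weak learners trained")
```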
Citations: 16
Approximating Learning Curves for Imbalanced Big Data with Limited Labels
Aaron N. Richter, T. Khoshgoftaar
Labeling data for supervised learning can be an expensive task, especially when large amounts of data are required to build an adequate classifier. For most problems, there exists a point of diminishing returns on the learning curve, beyond which adding more data only marginally increases model performance. It would be beneficial to approximate this point in scenarios where a large amount of data is available but only a small amount is labeled: time and resources can then be spent wisely, labeling only the sample required for acceptable model performance. In this study, we explore learning curve approximation methods on a big imbalanced dataset from the bioinformatics domain. We evaluate a curve fitting method developed for small data using an inverse power law model, and propose a new semi-supervised method that takes advantage of the large amount of unlabeled data. We find that the traditional curve fitting method is not effective for large sample sizes, while the semi-supervised method identifies the point of diminishing returns more accurately.
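The inverse power law model named in the abstract can be fitted with ordinary least squares. The sketch below fits acc(n) = a − b·n^(−c) to accuracies measured at small sample sizes and locates the point of diminishing returns where the marginal gain per extra labeled sample drops below a small epsilon; the data points and epsilon are made-up illustrations, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(n, a, b, c):
    # Accuracy approaches the plateau `a` as the sample size n grows.
    return a - b * np.power(n, -c)

# Accuracies measured at small labeled-sample sizes (made-up numbers).
sizes = np.array([100.0, 250.0, 500.0, 1000.0, 2500.0, 5000.0])
accs = np.array([0.62, 0.68, 0.72, 0.75, 0.78, 0.79])

params, _ = curve_fit(inverse_power_law, sizes, accs,
                      p0=[0.85, 1.0, 0.5], maxfev=10000)
a, b, c = params

# Diminishing returns: the slope d(acc)/dn = b*c*n^(-c-1) drops below eps.
eps = 1e-6
n_star = (b * c / eps) ** (1.0 / (c + 1.0))
print(f"plateau ~ {a:.3f}, diminishing returns near n ~ {n_star:.0f}")
```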
Citations: 2
5M-Building: A Large-Scale High-Resolution Building Dataset with CNN Based Detection Analysis
Zeshan Lu, Tao Xu, Kun Liu, Z. Liu, Feipeng Zhou, Qingjie Liu
Building detection in remote sensing images plays an important role in applications such as urban management and urban planning. Recently, convolutional neural network (CNN) based methods, which benefit from the popularity of large-scale datasets, have achieved good performance in object detection. To the best of our knowledge, there is no large-scale remote sensing image dataset specially built for building detection. Existing building datasets are small and lack diversity, which hinders the development of building detection. In this paper, we present a large-scale high-resolution building dataset, named 5M-Building after the number of samples it contains. The dataset consists of more than ten thousand images, all collected from GaoFen-2 with a spatial resolution of 0.8 meters. We also present a baseline for the dataset by evaluating three state-of-the-art CNN-based detectors. The experiments demonstrate that accurately detecting varied buildings in remote sensing images remains a great challenge. We hope the 5M-Building dataset will facilitate research on building detection.
Citations: 5
Cases and Clusters in Reuse Policies for Decision-Making in Card Games
G. B. Paulus, J. Assunção, L. A. L. Silva
This work investigates the combination of cases and clusters in the reuse of game actions (e.g., cards played, bets made) recorded in the cases retrieved for a given query in case-based reasoning (CBR) card-playing agents. With the support of the K-MEANS clustering algorithm, clustering results detailing the relationships between problem states/situations and game outcomes recorded in the case base guide the execution of augmented reuse policies. These policies consider the game actions recorded in the retrieved cases when selecting the clusters to be used. The cases that belong to the selected clusters are then used to determine which game action is reused as the solution to the current game situation. Within this two-step reuse process, the proposed policies rely on the majority with clusters, the probability with clusters, the number of points won with clusters, and the chance of victory with clusters. To evaluate these proposals, card-playing agents implemented with different reuse policies competed against each other in duplicated game matches in which all of them played using the same set of cards.
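A minimal sketch of the two-step reuse process, under stated assumptions: retrieved cases vote for a cluster, and that cluster's cases then pick the action, here by empirical chance of victory (one of the four policies listed). The case base, feature encoding, and action ids are hypothetical stand-ins, not the paper's representation.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Illustrative case base: each case is a game-state feature vector plus
# the action that was played and whether the game was eventually won.
rng = np.random.default_rng(0)
states = rng.random((200, 6))
actions = rng.integers(0, 4, size=200)      # hypothetical action ids
won = rng.integers(0, 2, size=200)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(states)

def reuse_action(query, k=15):
    # Step 1: retrieve the k most similar cases and pick the cluster
    # that most of them fall into.
    dists = np.linalg.norm(states - query, axis=1)
    retrieved = np.argsort(dists)[:k]
    cluster = Counter(kmeans.labels_[retrieved]).most_common(1)[0][0]
    # Step 2: among the cases of that cluster, reuse the action with
    # the best empirical chance of victory.
    members = np.where(kmeans.labels_ == cluster)[0]
    by_action = {a: won[members[actions[members] == a]].mean()
                 for a in np.unique(actions[members])}
    return max(by_action, key=by_action.get)

print(reuse_action(rng.random(6)))          # action id chosen for the query
```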
Citations: 1
Incorporating Domain Knowledge in Learning Word Embedding
Arpita Roy, Youngja Park, Shimei Pan
Word embedding is a Natural Language Processing (NLP) technique that automatically maps words from a vocabulary to vectors of real numbers in an embedding space. It has been widely used in recent years to boost the performance of a variety of NLP tasks such as named entity recognition, syntactic parsing, and sentiment analysis. Classic word embedding methods such as Word2Vec and GloVe work well when given a large text corpus. When the input texts are sparse, as in many specialized domains (e.g., cybersecurity), these methods often fail to produce high-quality vectors. In this paper, we describe a novel method, called Annotation Word Embedding (AWE), to train domain-specific word embeddings from sparse texts. Our method is generic and can leverage diverse types of domain knowledge such as domain vocabulary, semantic relations, and attribute specifications. Specifically, our method encodes these diverse types of domain knowledge as text annotations and incorporates the annotations in word embedding. We have evaluated AWE in two cybersecurity applications: identifying malware aliases and identifying relevant Common Vulnerabilities and Exposures (CVEs). Our evaluation results demonstrate the effectiveness of our method over state-of-the-art baselines.
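The abstract says domain knowledge is encoded as text annotations and incorporated in word embedding. One simple way to realize that idea, sketched here with gensim's Word2Vec, injects annotation tokens (alias relations, attribute labels) into sentences so they share context windows with the words they describe. This illustrates the general mechanism only; it is not the authors' AWE implementation, and the corpus and annotation entries are invented.

```python
from gensim.models import Word2Vec

# Raw domain sentences (toy cybersecurity-flavored corpus).
corpus = [
    ["zeus", "steals", "banking", "credentials"],
    ["zbot", "infects", "windows", "hosts"],
]

# Domain knowledge expressed as annotations, e.g. an alias relation and
# an attribute specification (hypothetical entries for illustration).
annotations = {
    "zeus": ["ALIAS_zbot", "TYPE_malware"],
    "zbot": ["ALIAS_zeus", "TYPE_malware"],
}

# Inject annotation tokens next to the words they describe so that they
# share context windows, nudging related words toward similar vectors.
augmented = [
    sum(([w] + annotations.get(w, []) for w in sent), [])
    for sent in corpus
]

model = Word2Vec(augmented, vector_size=50, window=3, min_count=1, sg=1)
print(model.wv.similarity("zeus", "zbot"))   # aliases end up closer
```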
Citations: 3
Some Improvements of Deep Knowledge Tracing
A. Tato, R. Nkambou
Deep Knowledge Tracing (DKT), like other machine learning approaches, is biased toward the data used during the training step. Thus, for problems with little training data, generalization power is low: models tend to give good results on classes containing many examples and poor results on those with few. These problems are frequent in educational data, where, for example, some skills are very difficult to master (floor) and others very easy (ceiling). There is less data on students who correctly answer questions related to difficult knowledge, or who incorrectly answer questions related to knowledge that is easy to master. In that case, DKT is unable to correctly predict students' answers to questions associated with those skills. To improve DKT, we penalize the model using a 'cost-sensitive' technique. To overcome the problem of scarce data, we propose a hybrid model combining DKT and expert knowledge: DKT is combined with a Bayesian network (built from domain experts' knowledge) using an attention mechanism. The resulting model tracks students' knowledge in the Logic-Muse Intelligent Tutoring System (ITS) more accurately than BKT and the original DKT.
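A minimal sketch of the 'cost-sensitive' penalty, assuming a per-skill weight table: a weighted binary cross-entropy that up-weights the rare outcome of each skill so that, for example, correct answers on floor skills are not drowned out. The weights, shapes, and numbers are illustrative; the paper does not spell out its exact loss.

```python
import torch

def cost_sensitive_bce(pred, target, class_weight):
    """Weighted binary cross-entropy over per-skill predictions.

    pred, target: (batch, num_skills) tensors in [0, 1].
    class_weight: (num_skills, 2) tensor; column 0 is the weight applied
    when target == 0, column 1 when target == 1.
    """
    w = torch.where(target > 0.5, class_weight[:, 1], class_weight[:, 0])
    bce = -(target * torch.log(pred + 1e-8)
            + (1 - target) * torch.log(1 - pred + 1e-8))
    return (w * bce).mean()

# Toy usage: 3 skills; skill 0 is very hard, so correct answers
# (target == 1) are rare there and get weight 5.
weights = torch.tensor([[1.0, 5.0], [1.0, 1.0], [1.0, 1.0]])
pred = torch.tensor([[0.2, 0.7, 0.9]])
target = torch.tensor([[1.0, 1.0, 0.0]])
print(cost_sensitive_bce(pred, target, weights))
```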
Citations: 3
An Efficient Spatial-Temporal Polyp Detection Framework for Colonoscopy Video
Pengfei Zhang, Xinzi Sun, Dechun Wang, Xizhe Wang, Yu Cao, Benyuan Liu
Recent computer-aided polyp detection systems have shown their effectiveness in decreasing the polyp miss rate during colonoscopy operations, which helps reduce colorectal cancer mortality. However, traditional polyp detection approaches suffer from the following drawbacks: low precision and sensitivity caused by the variance of polyps' appearance, and an inability to detect polyps in real time due to the high computational complexity of the detection algorithms. To alleviate these problems, we introduce a real-time detection framework that incorporates spatial and temporal information extracted from colonoscopy videos. Our framework consists of the following three components: 1) we adopt the Single Shot MultiBox Detector (SSD) to generate proposal bounding boxes in each video frame; 2) simultaneously, we compute optical flow from neighboring frames to extract temporal information and generate another group of polyp proposals with a temporal detection network; 3) finally, a fusion module connects the ends of both streams to produce the final result. Experimental results on the ETIS-LARIB dataset demonstrate that our proposed approach reaches state-of-the-art polyp localization performance while running in real time.
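Component 2 relies on optical flow computed from neighboring frames. A minimal sketch of that step, using OpenCV's Farneback dense flow, is given below; the temporal detection network that consumes these flow fields is not shown, and the file name is a placeholder.

```python
import cv2
import numpy as np

def flow_frames(video_path):
    """Yield (frame, dense optical flow) for consecutive frame pairs; in a
    two-stream setup such flow fields feed the temporal detection branch."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback dense flow: (h, w, 2) array of per-pixel motion vectors.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        yield frame, flow
        prev_gray = gray
    cap.release()

# Example usage (placeholder file name):
# for frame, flow in flow_frames("colonoscopy.mp4"):
#     magnitude = np.linalg.norm(flow, axis=2)   # motion strength per pixel
```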
Citations: 11
An Improved Hybrid Heuristic Algorithm for Pickup and Delivery Problem with Three-Dimensional Loading Constraints
Jiangqing Wu, Ling Zheng, Can Huang, Sifan Cai, Shaorong Feng, Defu Zhang
In the logistics industry, the integration of the vehicle routing problem and the container loading problem, generalised as the pickup and delivery problem with three-dimensional loading constraints (3L-PDP), is very challenging. It involves not only ensuring the shortest travel route but also minimising the effort of reloading goods. Traditional optimisation methods such as exhaustive search and greedy search struggle to reach an optimal solution for such complex problems. In this paper, a hybrid heuristic algorithm for the 3L-PDP is extended with two key improvements: a tabu strategy that enlarges the local search space of the large neighbourhood search (LNS) algorithm, proposed within the framework of a simulated annealing process; and the use of complex block generation with a depth-first heuristic to incrementally find one proper box at a time in the packing phase. Experimental results show that the improved hybrid heuristic algorithm outperforms its origin in total travel distance on the benchmark instances proposed by Li and Lim and by Dirk and Andreas.
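A minimal sketch of the routing side of the first improvement: simulated annealing acceptance combined with a tabu list that keeps recently visited solutions out of the local search, enlarging the explored neighbourhood. A plain swap move stands in for a full LNS destroy-and-repair step, and the distance matrix, temperature schedule, and tabu length are illustrative assumptions.

```python
import math
import random

def sa_with_tabu(dist, iters=2000, temp=100.0, cooling=0.995, tabu_len=50):
    """Toy route optimizer: moves accepted by simulated annealing, with a
    tabu list of recently accepted solutions to avoid cycling."""
    n = len(dist)
    cost = lambda r: sum(dist[r[i]][r[(i + 1) % n]] for i in range(n))
    cur = list(range(n))
    cur_cost = cost(cur)
    best, best_cost = cur[:], cur_cost
    tabu = []
    for _ in range(iters):
        temp *= cooling
        # A simple swap stands in for an LNS destroy-and-repair move.
        cand = cur[:]
        i, j = random.sample(range(n), 2)
        cand[i], cand[j] = cand[j], cand[i]
        key = tuple(cand)
        if key in tabu:
            continue                     # tabu: skip recently seen solutions
        c = cost(cand)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            tabu.append(key)             # remember accepted solutions
            if len(tabu) > tabu_len:
                tabu.pop(0)
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

random.seed(0)
D = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(sa_with_tabu(D))                   # (route, total travel distance)
```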
Citations: 5
A Novel Proposed Pooling for Convolutional Neural Network
D. Mansouri, Seif-Eddine Benkabou, Bachir Kaddar, K. Benabdeslem
In this paper, we aim to improve the performance, time complexity, and energy efficiency of deep convolutional neural networks (CNNs) by combining hardware and specialization techniques. Since the pooling step contributes significantly to CNN performance, we propose the Mode-Fisher (MF) pooling method. This form of pooling can potentially offer very promising results in terms of improving feature extraction performance. The proposed method significantly reduces data movement in the CNN and saves up to 10% of total energy, without any performance penalty.
Citations: 1