
Latest publications from the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)

Targeted Sentiment Classification with Knowledge Powered Attention Network
Ximo Bian, Chong Feng, Arshad Ahmad, Jinming Dai, Guifen Zhao
Targeted sentiment classification aims to identify the sentiment expressed towards given targets in context sentences, and it has great application value in social media, e-commerce platforms, and other fields. Most previous methods model context and target words with RNNs and attention mechanisms, and generally do not use any external knowledge. In this paper, we utilize external knowledge from knowledge bases to reinforce the semantic representation of the context and the target. We propose a new model called Knowledge Powered Attention Network (KPAN), which uses the multi-head attention mechanism to represent the target and context and to fuse them with conceptual knowledge extracted from external knowledge bases. Experiments on three public datasets show that our proposed model outperforms state-of-the-art methods, which demonstrates the validity of our model.
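A minimal sketch, assuming PyTorch, of how multi-head attention could fuse target/context representations with concept embeddings retrieved from a knowledge base; the layer sizes, mean pooling, and fusion by concatenation are illustrative assumptions, not the paper's exact KPAN architecture.

```python
# Sketch (not the authors' code): multi-head attention over context and over
# knowledge-base concepts, fused by concatenation before sentiment classification.
import torch
import torch.nn as nn

class KnowledgeAttentionSketch(nn.Module):
    def __init__(self, dim=300, heads=6):
        super().__init__()
        self.target2context = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.concept2target = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 3)   # positive / neutral / negative

    def forward(self, context, target, concepts):
        # context: (B, Lc, dim), target: (B, Lt, dim), concepts: (B, Lk, dim)
        t_repr, _ = self.target2context(target, context, context)   # target attends to context
        k_repr, _ = self.concept2target(concepts, target, target)   # concepts attend to target
        fused = torch.cat([t_repr.mean(dim=1), k_repr.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Example: batch of 2 sentences, 20 context tokens, 3 target tokens, 5 concepts.
model = KnowledgeAttentionSketch()
logits = model(torch.randn(2, 20, 300), torch.randn(2, 3, 300), torch.randn(2, 5, 300))
```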
{"title":"Targeted Sentiment Classification with Knowledge Powered Attention Network","authors":"Ximo Bian, Chong Feng, Arshad Ahmad, Jinming Dai, Guifen Zhao","doi":"10.1109/ICTAI.2019.00150","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00150","url":null,"abstract":"Targeted sentiment classification aims to identify the sentiment expressed towards some targets given context sentences, having great application value in social media, ecommerce platform and other fields. Most of the previous methods model context and target words with RNN and attention mechanism, which primarily do not use any external knowledge. In this paper, we utilize external knowledge from knowledge bases to reinforce the semantic representation of context and target. We propose a new model called Knowledge Powered Attention Network (KPAN), which uses the multi-head attention mechanism to represent target and context and to fuse with conceptual knowledge extracted from external knowledge bases. The experiments on three public datasets revealed that our proposed model outperforms the state-of-the-art methods, which signify the validity of our model.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130017578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bayesian Network Learning for Classification via Transfer Method
April H. Liu, Zihao Cheng, Justin Jiang
In classification problems, Bayesian networks play an important role because of their efficiency and interpretability. Bayesian network learning methods require enough data to produce reliable results. Unfortunately, in practice, the training data are often too few, expensive to label, or easily outdated. However, sufficient labeled data may be available in a different but related domain. Learning reliable Bayesian networks from limited data is difficult, and transfer learning can be used to improve the robustness of the learned networks by combining data from an auxiliary, related labeled dataset. In this paper, we propose a novel transfer learning method for Bayesian network classification that considers both structure and parameter learning. Our solution is to first construct an initial Bayesian network model from the auxiliary labeled data, and then revise the model according to an Expectation-Maximization (EM) algorithm, revising the structure and parameters in turn, so that it becomes applicable to the target unlabeled dataset. We mainly apply our method to a special type of Bayesian network, namely the tree-based Bayesian network. To validate our approach, we evaluated the method on a real and typical classification scenario: the text classification problem. We compared our method with another transfer learning method as well as with traditional supervised and semi-supervised learning algorithms. The experimental results show that our algorithm is very effective and obtains a significant improvement when knowledge is transferred from the related dataset.
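The abstract describes an EM loop that alternately revises structure and parameters. The self-contained NumPy sketch below illustrates only the parameter-revision half for a naive-Bayes network (a simple tree-structured BN with the class as root): the model is first fit on auxiliary labeled data and then refined with EM on unlabeled target data. The structure-revision step is omitted, and the smoothing constant, iteration count, and discrete feature encoding are assumptions.

```python
# Sketch: EM-based parameter transfer for a naive-Bayes network.
# Features are discrete and encoded as integers in [0, n_values).
import numpy as np

def fit_counts(X, weights, n_classes, n_values, alpha=1.0):
    """Class prior and per-feature CPTs from (soft) class weights, with Laplace smoothing."""
    n_feat = X.shape[1]
    prior = weights.sum(axis=0) + alpha
    prior /= prior.sum()
    cpt = np.full((n_feat, n_classes, n_values), alpha)
    for j in range(n_feat):
        for v in range(n_values):
            cpt[j, :, v] += weights[X[:, j] == v].sum(axis=0)
    cpt /= cpt.sum(axis=2, keepdims=True)
    return prior, cpt

def posterior(X, prior, cpt):
    """P(class | x) for each row of X."""
    logp = np.log(prior) + sum(np.log(cpt[j, :, X[:, j]]) for j in range(X.shape[1]))
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def transfer_em(X_aux, y_aux, X_tgt, n_classes, n_values, iters=10):
    W_aux = np.eye(n_classes)[y_aux]                  # hard labels on the auxiliary data
    prior, cpt = fit_counts(X_aux, W_aux, n_classes, n_values)
    for _ in range(iters):
        W_tgt = posterior(X_tgt, prior, cpt)          # E-step on the unlabeled target data
        prior, cpt = fit_counts(np.vstack([X_aux, X_tgt]),
                                np.vstack([W_aux, W_tgt]),
                                n_classes, n_values)  # M-step on the pooled data
    return prior, cpt

# Toy usage with binary features and three classes.
rng = np.random.default_rng(0)
X_aux, y_aux = rng.integers(0, 2, size=(100, 5)), rng.integers(0, 3, size=100)
X_tgt = rng.integers(0, 2, size=(200, 5))
prior, cpt = transfer_em(X_aux, y_aux, X_tgt, n_classes=3, n_values=2)
```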
{"title":"Bayesian Network Learning for Classification via Transfer Method","authors":"April H. Liu, Zihao Cheng, Justin Jiang","doi":"10.1109/ICTAI.2019.00154","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00154","url":null,"abstract":"In classification problem, Bayesian networks play an important role because of its efficiency and interpretability. Bayesian networks learning methods require enough data to produce reliable results. Unfortunately, in practice, the training data are often either too few, expensive to label, or easy to be outdated. However, there may be sufficient labeled data that are available in a different but related domain. Learning reliable Bayesian networks from limited data is difficult; and transfer learning might be used to improve the robustness of learned networks by combining data from auxiliary and related labeled dataset. In this paper, we propose a novel transfer learning method for Bayesian networks for classification that considers both structure and parameter learning. Our solution is to first construct the initial Bayesian networks model for auxiliary labeled data, and then revise the model according to an Expectation-Maximization (EM) algorithm, structure and parameters are revised by turns, in order to make it applicable to the target unlabeled dataset. We mainly apply our method on a special type of Bayesian networks, namely tree-based Bayesian network. To validate our approach, we evaluated the method on a real and typical classification scenario - text classification problem. We compared our method with other transfer learning method as well as the traditional supervised and semi-supervised learning algorithms. The experimental results show that our algorithm is very effective and obtains a significant improvement when we transfer knowledge from related dataset.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131758919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Semi-Supervised Ovulation Detection Based on Multiple Properties
A. Azaria, Seagal Azaria
Despite being a well-researched problem, ovulation detection in human females remains a difficult task. Most current methods for ovulation detection rely on measurements of a single property (e.g., morning body temperature) or at most two properties (e.g., both salivary and vaginal electrical resistance). In this paper we present a machine-learning-based method for detecting the day on which ovulation occurs. Our method considers measurements of five different properties. We crawled a dataset from the web and showed that our method outperforms current state-of-the-art methods for ovulation detection. Our method also performs well when considering measurements of fewer properties. We show that our method's performance can be further improved by using unlabeled data, that is, menstruation cycles without a known ovulation date. The resulting machine learning model can be very useful for women trying to conceive who have trouble recognizing their ovulation period, especially when some measurements are missing.
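A minimal sketch of the semi-supervised setup, assuming scikit-learn's self-training wrapper: days with a known ovulation label are used directly, while days from cycles with an unknown ovulation date are marked as unlabeled. The five example property columns, the mean imputation of missing measurements, and the random-forest base classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: semi-supervised ovulation-day classification with unlabeled cycles.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Rows: one day of a cycle; columns: five measured properties (some may be NaN).
# y = 1 marks a known ovulation day, 0 a non-ovulation day, and -1 a day from
# a cycle whose ovulation date is unknown (unlabeled data).
X = np.array([[36.4, 310.0, 250.0, 1.0, 0.0],
              [36.7, 290.0, 180.0, 3.0, 1.0],
              [36.5, np.nan, 210.0, 2.0, 0.0]])
y = np.array([0, 1, -1])

model = make_pipeline(
    SimpleImputer(strategy="mean"),   # handle missing measurements
    SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0)),
)
model.fit(X, y)
print(model.predict(X[:1]))
```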
{"title":"Semi-Supervised Ovulation Detection Based on Multiple Properties","authors":"A. Azaria, Seagal Azaria","doi":"10.1109/ICTAI.2019.00039","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00039","url":null,"abstract":"Despite being a well-researched problem, ovulation detection in human female remains a difficult task. Most current methods for ovulation detection rely on measurements of a single property (e.g. morning body temperature) or at most on two properties (e.g. both salivary and vaginal electrical resistance). In this paper we present a machine learning based method for detecting the day in which ovulation occurs. Our method considered measurements of five different properties. We crawled a data-set from the web and showed that our method outperforms current state-of-the-art methods for ovulation detection. Our method performs well also when considering measurements of fewer properties. We show that our method's performance can be further improved by using unlabeled data, that is, mensuration cycles without a know ovulation date. Our resulted machine learning model can be very useful for women trying to conceive that have trouble in recognizing their ovulation period, especially when some measurements are missing.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133085334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Experience Sharing Between Cooperative Reinforcement Learning Agents
Lucas O. Souza, G. Ramos, C. Ralha
The idea of experience sharing between cooperative agents naturally emerges from our understanding of how humans learn. Our evolution as a species is tightly linked to the ability to exchange learned knowledge with one another. It follows that experience sharing (ES) between autonomous and independent agents could become the key to accelerating learning in cooperative multiagent settings. We investigate whether randomly selecting experiences to share can increase the performance of deep reinforcement learning agents, and we propose three new methods for selecting experiences to accelerate the learning process. Firstly, we introduce Focused ES, which prioritizes unexplored regions of the state space. Secondly, we present Prioritized ES, in which the temporal-difference error is used as a measure of priority. Finally, we devise Focused Prioritized ES, which combines both previous approaches. The methods are empirically validated on a control problem. While sharing randomly selected experiences between two Deep Q-Network agents shows no improvement over a single-agent baseline, we show that the proposed ES methods can successfully outperform the baseline. In particular, Focused ES accelerates learning by a factor of 2, reducing the number of episodes required to complete the task by 51%.
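A small sketch of the Prioritized ES idea from the abstract: after an episode, an agent offers the transitions with the largest absolute temporal-difference error to its partner. The buffer layout, batch size, and TD-error bookkeeping are assumptions for illustration, not the paper's implementation.

```python
# Sketch: replay buffer that shares its highest-TD-error experiences with another agent.
import random
from collections import deque

class SharingBuffer:
    def __init__(self, capacity=50_000):
        self.buffer = deque(maxlen=capacity)   # items: (transition, abs_td_error)

    def add(self, transition, td_error):
        self.buffer.append((transition, abs(td_error)))

    def sample(self, batch_size=32):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

    def share(self, other, k=32):
        """Send the k transitions with the highest TD error to another agent's buffer."""
        best = sorted(self.buffer, key=lambda item: item[1], reverse=True)[:k]
        for transition, td_error in best:
            other.add(transition, td_error)

# Usage: after each episode, agents exchange their most "surprising" experiences.
agent_a, agent_b = SharingBuffer(), SharingBuffer()
agent_a.add(("s0", 1, 0.0, "s1", False), td_error=0.8)
agent_a.add(("s1", 0, 1.0, "s2", True), td_error=0.1)
agent_a.share(agent_b, k=1)
```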
{"title":"Experience Sharing Between Cooperative Reinforcement Learning Agents","authors":"Lucas O. Souza, G. Ramos, C. Ralha","doi":"10.1109/ICTAI.2019.00136","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00136","url":null,"abstract":"The idea of experience sharing between cooperative agents naturally emerges from our understanding of how humans learn. Our evolution as a species is tightly linked to the ability to exchange learned knowledge with one another. It follows that experience sharing (ES) between autonomous and independent agents could become the key to accelerate learning in cooperative multiagent settings. We investigate if randomly selecting experiences to share can increase the performance of deep reinforcement learning agents, and propose three new methods for selecting experiences to accelerate the learning process. Firstly, we introduce Focused ES, which prioritizes unexplored regions of the state space. Secondly, we present Prioritized ES, in which temporal-difference error is used as a measure of priority. Finally, we devise Focused Prioritized ES, which combines both previous approaches. The methods are empirically validated in a control problem. While sharing randomly selected experiences between two Deep Q-Network agents shows no improvement over a single agent baseline, we show that the proposed ES methods can successfully outperform the baseline. In particular, the Focused ES accelerates learning by a factor of 2, reducing by 51% the number of episodes required to complete the task.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133043566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Knowledge Graph Embedding by Bias Vectors
Minjie Ding, W. Tong, Xuehai Ding, Xiaoli Zhi, Xiao Wang, Guoqing Zhang
Knowledge graph completion can predict possible relations between entities. Previous work such as TransE, TransR, TransPES and GTrans embeds knowledge graphs into a vector space and treats relations between entities as translations. In most cases, the more complex the algorithm, the better the results, but complex algorithms are difficult to apply to large-scale knowledge graphs. Therefore, in this paper we propose TransB, an efficient model. We avoid complex matrix or vector multiplication operations. Meanwhile, we keep the representation of entities from being too simple, so that it can handle non-one-to-one relations. We use link prediction to evaluate the performance of our model in the experiments. The experimental results show that our model is valid and has low time complexity.
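The abstract does not give TransB's exact score function; the sketch below is an assumed TransE-style variant in which each relation contributes bias vectors added to the head and tail embeddings, so that scoring needs only vector additions and a norm (no matrix or vector multiplication). Dimensions and the L1 norm are likewise assumptions.

```python
# Sketch: translation-based scoring with relation-specific bias vectors.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20
entity = rng.normal(size=(n_entities, dim))
relation = rng.normal(size=(n_relations, dim))
head_bias = rng.normal(size=(n_relations, dim))   # relation-specific bias vectors
tail_bias = rng.normal(size=(n_relations, dim))

def score(h, r, t):
    """Lower is better: || (e_h + b_r^head) + r - (e_t + b_r^tail) ||_1 (assumed form)."""
    return np.abs(entity[h] + head_bias[r] + relation[r] - entity[t] - tail_bias[r]).sum()

def predict_tail(h, r, k=5):
    """Link prediction: rank all entities as candidate tails for the query (h, r, ?)."""
    scores = np.abs(entity[h] + head_bias[r] + relation[r]
                    - entity - tail_bias[r]).sum(axis=1)
    return np.argsort(scores)[:k]

print(predict_tail(h=3, r=7))
```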
{"title":"Knowledge Graph Embedding by Bias Vectors","authors":"Minjie Ding, W. Tong, Xuehai Ding, Xiaoli Zhi, Xiao Wang, Guoqing Zhang","doi":"10.1109/ICTAI.2019.00180","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00180","url":null,"abstract":"Knowledge graph completion can predict the possible relation between entities. Previous work such as TransE, TransR, TransPES and GTrans embed knowledge graph into vector space and treat relations between entities as translations. In most cases, the more complex the algorithm is, the better the result will be, but it is difficult to apply to large-scale knowledge graphs. Therefore, we propose TransB, an efficient model, in this paper. We avoid the complex matrix or vector multiplication operation. Meanwhile, we make the representation of entities not too simple, which can satisfy the operation in the case of non-one-to-one relation. We use link prediction to evaluate the performance of our model in the experiment. The experimental results show that our model is valid and has low time complexity.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133600239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Transfer Learning with Ensemble Feature Extraction and Low-Rank Matrix Factorization for Severity Stage Classification of Diabetic Retinopathy
Isuru Wijesinghe, C. Gamage, Charith D. Chitraranjan
The automatic classification of diabetic retinopathy (DR) is of vital importance, as DR is the leading cause of irreversible vision loss in the working-age population worldwide today. Current clinical approaches require a well-trained clinician to manually evaluate digital colour fundus photographs of the retina and locate lesions associated with vascular abnormalities due to diabetes, which is time-consuming. Recently, deep feature extraction using pretrained convolutional neural networks has been used to predict DR from fundus images with reasonable accuracy. However, techniques such as global average pooling (GAP), singular value decomposition (SVD) and ensemble learning have not been used in the automatic prediction of DR. We propose to use a combination of deep features produced by an ensemble of pretrained CNNs (DenseNet-201, ResNet-18 and VGG-16) as a single feature vector to predict the five severity levels of diabetic retinopathy. Our results show a promising F1-measure of over 98% on the Kaggle dataset and on another dataset provided to us by an ophthalmic clinic, an improvement over the current state-of-the-art approaches to DR classification. We evaluated prominent CNN architectures (DenseNet, ResNet, Xception, InceptionV3, InceptionResNetV2 and VGG) that can be used for transfer learning for DR. Moreover, we describe a technique for reducing memory consumption and processing time while preserving classification accuracy, using dimensionality reduction based on GAP and SVD.
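A sketch of the ensemble feature-extraction step, assuming torchvision's pretrained backbones and scikit-learn's truncated SVD as the low-rank factorization; the input size, the SVD rank, and the downstream classifier are assumptions rather than the paper's exact settings.

```python
# Sketch: concatenate GAP'd deep features from three pretrained CNNs, then reduce with SVD.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import TruncatedSVD

gap = nn.AdaptiveAvgPool2d(1)   # global average pooling

backbones = {
    "densenet201": models.densenet201(weights="DEFAULT").features,
    "resnet18": nn.Sequential(*list(models.resnet18(weights="DEFAULT").children())[:-2]),
    "vgg16": models.vgg16(weights="DEFAULT").features,
}

@torch.no_grad()
def ensemble_features(batch):
    """Concatenate globally pooled deep features from the three pretrained CNNs."""
    feats = []
    for net in backbones.values():
        net.eval()
        feats.append(gap(net(batch)).flatten(1))   # (B, C) per backbone
    return torch.cat(feats, dim=1)                 # (B, 1920 + 512 + 512)

# Dummy batch of 4 fundus images; real images would be resized and normalized first.
X = ensemble_features(torch.randn(4, 3, 224, 224)).numpy()
X_reduced = TruncatedSVD(n_components=3).fit_transform(X)   # low-rank projection
print(X_reduced.shape)                                       # (4, 3)
```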
{"title":"Transfer Learning with Ensemble Feature Extraction and Low-Rank Matrix Factorization for Severity Stage Classification of Diabetic Retinopathy","authors":"Isuru Wijesinghe, C. Gamage, Charith D. Chitraranjan","doi":"10.1109/ICTAI.2019.00132","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00132","url":null,"abstract":"The automatic classification of diabetic retinopathy (DR) is of vital importance, as it is the leading cause of irreversible vision loss in the working-age population all over the world today. Current clinical approaches require a well-trained clinician to manually evaluate digital colour fundus photographs of retina and locate lesions associated with vascular abnormalities due to diabetes, which is time-consuming. Recently, deep feature extraction using pretrained convolutional neural networks has been used to predict DR from fundus images with reasonable accuracy. However, techniques such as global average pooling (GAP), singular value decomposition (SVD) and ensemble learning have not been used in automatic prediction of DR. We propose to use a combination of deep features produced by an ensemble of pretrained-CNNs (DenseNet-201, ResNet-18 and VGG-16) as a single feature vector to predict five-class severity levels of diabetic retinopathy. Our results show a promising F1-measure of over 98% on the kaggle dataset and another dataset provided to us by an ophthalmic clinic. This is an improvement on the current state-of-the-art approaches in DR classification. We evaluated prominent CNN architectures (DenseNet, ResNet, Xception, InceptionV3, InceptionResNetV2 and VGG) that can be used for the task of transfer learning for DR. Moreover, we describe a technique of reducing memory consumption and processing time whereas preserving classification accuracy by using dimensional reduction based on GAP and SVD.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128863579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
On Solving Exactly-One-SAT
Yazid Boumarafi, Y. Salhi
In this paper, we study the Exactly-One-SAT problem (EO-SAT for short). This problem consists in deciding whether a given CNF formula admits a model in which each clause has exactly one satisfied literal. The contribution of this work is twofold. Firstly, we introduce a tractable class of EO-SAT, defined by a property that must be satisfied by combinations of clauses. This class can be seen as a counterpart of tractable classes of the maximum independent set problem. Secondly, we propose graph-based approaches for reducing the number of variables and clauses of EO-SAT instances, which consequently allows the search space to be reduced. We provide an experimental study evaluating these approaches and showing their interest in the context of the graph coloring problem.
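A tiny brute-force reference for the EO-SAT decision problem, for illustration only: a CNF formula (list of clauses, each a list of signed integers as in DIMACS) is accepted iff some assignment satisfies exactly one literal per clause. The paper's contribution is precisely about avoiding this exhaustive search via tractable classes and instance reduction.

```python
# Sketch: exhaustive EO-SAT check; returns a satisfying assignment or None.
from itertools import product

def exactly_one_sat(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        ok = True
        for clause in clauses:
            satisfied = sum(1 for lit in clause
                            if bits[abs(lit) - 1] == (lit > 0))
            if satisfied != 1:        # each clause must have exactly one true literal
                ok = False
                break
        if ok:
            return bits
    return None

# (x1 or x2) and (not x1 or x3): prints (False, True, False), the first assignment
# in which each clause has exactly one satisfied literal.
print(exactly_one_sat([[1, 2], [-1, 3]], n_vars=3))
```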
{"title":"On Solving Exactly-One-SAT","authors":"Yazid Boumarafi, Y. Salhi","doi":"10.1109/ICTAI.2019.00011","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00011","url":null,"abstract":"In this paper, we aim at studying the Exactly-One-SAT problem (in short EO-SAT). This problem consists in deciding whether a given CNF formula admits a model so that each clause has exactly one satisfied literal. The contribution of this work is twofold. Firstly, we introduce a tractable class in EO-SAT, which is defined by a property that has to be satisfied by combinations of clauses. This class can be seen as a counterpart of tractable classes in the maximum independent set problem. Secondly, we propose graph-based approaches for reducing the number of variables and clauses of EO-SAT instances, which consequently allow for reducing the search space. We provide an experimental study for evaluating these approach by showing its interest in the context of the graph coloring problem.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115439483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Multi-label Hashing for Image Retrieval
X. Zhong, Jiachen Li, Wenxin Huang, Liang Xie
Due to its low storage cost and fast query speed, hashing has been widely applied to approximate nearest-neighbor search for large-scale image retrieval, and deep hashing further improves retrieval quality by learning a good image representation. However, existing deep hashing methods simplify multi-label images into single-label processing, so the rich semantic information in the multiple labels is ignored. Meanwhile, the imbalance of similarity information leads to wrong sample weights in the loss function, which results in unsatisfactory training performance and a lower recall rate. In this paper, we propose the Deep Multi-Label Hashing (DMLH) model, which generates binary hash codes that retain the semantic relationships among an image's multiple labels. The contributions of this new model mainly include the following two aspects: (1) a novel sample-weight calculation model adaptively adjusts the weight of a sample pair by calculating the semantic similarity of the multi-label image pair; (2) a sample-weighted cross-entropy loss function, designed according to image similarity, adjusts the balance between similar and dissimilar image pairs. Extensive experiments demonstrate that the proposed method generates hash codes that achieve better retrieval performance on two benchmark datasets, NUS-WIDE and MS-COCO.
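A sketch of the sample-weighting idea: here the semantic similarity of a pair of multi-label images is taken as the Jaccard overlap of their label sets, and it scales a pairwise cross-entropy-style loss on the inner product of relaxed hash codes. The abstract does not give the exact similarity and weighting formulas, so both choices are assumptions.

```python
# Sketch: multi-label similarity weights applied to a pairwise hashing loss.
import torch
import torch.nn.functional as F

def label_similarity(labels_a, labels_b):
    """Jaccard overlap between two multi-hot label vectors, per pair."""
    inter = (labels_a * labels_b).sum(dim=1)
    union = ((labels_a + labels_b) > 0).float().sum(dim=1).clamp(min=1)
    return inter / union

def weighted_pair_loss(codes_a, codes_b, labels_a, labels_b):
    sim = label_similarity(labels_a, labels_b)              # in [0, 1]
    # Inner product of (approximately binary, tanh-activated) codes as the logit.
    logits = (codes_a * codes_b).sum(dim=1) / codes_a.shape[1]
    target = (sim > 0).float()                               # similar if any shared label
    weight = 1.0 + sim                                       # up-weight strongly similar pairs
    return (weight * F.binary_cross_entropy_with_logits(
        logits, target, reduction="none")).mean()

codes = torch.tanh(torch.randn(4, 48))                       # 48-bit relaxed hash codes
labels = torch.randint(0, 2, (4, 10)).float()                # 10 possible labels
loss = weighted_pair_loss(codes[:2], codes[2:], labels[:2], labels[2:])
```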
{"title":"Deep Multi-label Hashing for Image Retrieval","authors":"X. Zhong, Jiachen Li, Wenxin Huang, Liang Xie","doi":"10.1109/ICTAI.2019.00-94","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00-94","url":null,"abstract":"Due to its low storage cost and fast query speed, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, while deep hashing further improves the retrieval quality by learning a good image representation. However, existing deep hash methods simplify multi-label images into single-label processing, so the rich semantic information from multi-label is ignored. Meanwhile, the imbalance of similarity information leads to the wrong sample weight in the loss function, which makes unsatisfactory training performance and lower recall rate. In this paper, we propose Deep Multi-Label Hashing (DMLH) model that generates binary hash codes which retain the semantic relationship of multi-label of the image. The contributions of this new model mainly include the following two aspects: (1) A novel sample weight calculation model adaptively adjusts the weight of the sample pair by calculating the semantic similarity of the multi-label image pairs. (2) The sample weight cross-entropy loss function, which is designed according to the similarity of the image, adjusts the balance of similar image pairs and dissimilar image pairs. Extensive experiments demonstrate that the proposed method can generate hash codes which achieve better retrieval performance on two benchmark datasets, NUS-WIDE and MS-COCO.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114923490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
EPMS: A Framework for Large-Scale Patient Matching
Himanshu Singhal, Harish Ravi, S. N. Chakravarthy, Prabavathy Balasundaram, Chitra Babu
The healthcare industry, through digitization, is trying to achieve interoperability, but has not been able to achieve complete Health Information Exchange (HIE). One of the major challenges is the inability to accurately match patient data. Mismatching of patient records can lead to improper treatment, which can prove fatal. Moreover, the presence of duplicates adds overhead and makes crucial information inaccessible in times of need. Existing solutions to patient matching are both time-consuming and non-scalable. This paper proposes a framework, namely the Electronic Patient Matching System (EPMS), which attempts to overcome these barriers while achieving good accuracy in matching patient records. The framework encodes patient records using a variational autoencoder and amalgamates them by performing locality-sensitive hashing on an Apache Spark cluster. This makes the process fast and highly scalable. Furthermore, fuzzy matching of the records in each block is performed using Levenshtein distances to identify duplicate patient records. Experimental investigations were performed on a synthetically generated dataset consisting of 44,555 patient records. The proposed framework achieved a matching accuracy of 81.15% on this dataset.
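A compact, single-machine sketch of the blocking-and-matching stages: record embeddings (standing in for the variational-autoencoder codes) are bucketed with random-hyperplane LSH, and only records sharing a bucket are compared with Levenshtein distance. The Spark distribution, the VAE itself, the number of hyperplanes, and the distance threshold are omitted or assumed here.

```python
# Sketch: LSH blocking followed by Levenshtein fuzzy matching within each block.
from collections import defaultdict
import numpy as np

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lsh_buckets(vectors, n_planes=8, seed=0):
    """Random-hyperplane LSH: one sign bit per hyperplane forms the bucket key."""
    planes = np.random.default_rng(seed).normal(size=(n_planes, vectors.shape[1]))
    signs = (vectors @ planes.T) > 0
    buckets = defaultdict(list)
    for idx, bits in enumerate(signs):
        buckets[tuple(bits)].append(idx)
    return buckets

names = ["John A Smith", "Jon A Smith", "Mary Jones"]
codes = np.random.default_rng(1).normal(size=(3, 16))   # stand-in for VAE encodings
codes[1] = codes[0]                                      # duplicate record reuses the encoding
for bucket in lsh_buckets(codes).values():
    for i in bucket:
        for j in bucket:
            if i < j and levenshtein(names[i], names[j]) <= 2:
                print("possible duplicate:", names[i], "|", names[j])
```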
{"title":"EPMS: A Framework for Large-Scale Patient Matching","authors":"Himanshu Singhal, Harish Ravi, S. N. Chakravarthy, Prabavathy Balasundaram, Chitra Babu","doi":"10.1109/ICTAI.2019.00153","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00153","url":null,"abstract":"The healthcare industry, through digitization, is trying to achieve interoperability, but has not been able to achieve complete Health Information Exchange (HIE). One of the major challenges in achieving this is the inability to accurately match patient data. Mismatching of patient records can lead to improper treatment which can prove to be fatal. Also, the presence of duplicate overheads has caused inaccessibility to crucial information in the time of need. Existing solutions to patient matching are both time-consuming and non-scalable. This paper proposes a framework, namely, Electronic Patient Matching System (EPMS), which attempts to overcome these barriers while achieving a good accuracy in matching patient records. The framework encodes the patient records using variational autoencoder and amalgamates them by performing locality sensitive hashing on an Apache spark cluster. This makes the process faster and highly scalable. Furthermore, a fuzzy matching of the records in each block is performed using Levenshtein distances to identify the duplicate patient records. Experimental investigations were performed on a synthetically generated dataset consisting of 44555 patient records. The proposed framework achieved a matching accuracy of 81.15% on this dataset.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122119550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Novel Learning Classification Scheme for Brain EEG Patterns
Spyridon Manganas, N. Bourbakis
EEG has been extensively used to aid the diagnosis of various brain disorders and to identify brain activity during cognitive tasks. However, the visual evaluation of EEG recordings is a demanding process, susceptible to error and bias due to the human factor involved. The development of EEG analysis methods coupled with data processing and mining techniques has assisted the feature extraction process from EEG recordings. In this paper, a novel method for the classification of EEG signals based on features derived from the EEG morphology is proposed. The classification accuracy, as illustrated through experimental evaluation, shows that the proposed method achieves adequate results; moreover, the extracted features can be used collaboratively with commonly used time-domain and time-frequency-domain features to increase the EEG signal classification performance.
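The abstract does not name the specific morphological features; purely as an illustrative stand-in, the sketch below computes a few simple waveform-shape statistics per EEG epoch and feeds them to an off-the-shelf classifier. The feature set, epoch length, and sampling rate are assumptions, not the paper's scheme.

```python
# Sketch: simple shape descriptors per single-channel EEG epoch + a classifier.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def morphology_features(epoch, fs=256):
    """Very simple shape descriptors of a single-channel EEG epoch."""
    peaks, _ = find_peaks(epoch)
    return np.array([
        epoch.std(),                     # amplitude spread
        skew(epoch),                     # asymmetry of the amplitude distribution
        kurtosis(epoch),                 # peakedness
        len(peaks) / (len(epoch) / fs),  # peak rate (peaks per second)
        np.abs(np.diff(epoch)).mean(),   # mean slope (line length per sample)
    ])

rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 2 * 256))   # 40 two-second epochs at 256 Hz (synthetic)
labels = rng.integers(0, 2, size=40)      # e.g. task vs. rest
X = np.vstack([morphology_features(e) for e in epochs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```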
{"title":"A Novel Learning Classification Scheme for Brain EEG Patterns","authors":"Spyridon Manganas, N. Bourbakis","doi":"10.1109/ICTAI.2019.00144","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00144","url":null,"abstract":"EEG has been extensively used to aid the diagnosis of various brain disorders and also, for the identification of brain activities during cognitive tasks. However, the visual evaluation of EEG recordings is a demanding process, susceptible to error and bias due to the human factor involved. The development of EEG analysis methods coupled with data processing and mining techniques have assisted the feature extraction process from EEG recordings. In this paper, a novel method for classification of EEG signals based on features derived from the EEG morphology is proposed. The classification accuracy, as illustrated through experiment evaluation, shows that the proposed method can achieve adequate results and moreover the extracted features can be used collaboratively with commonly used features from time and time-frequency domain to increase the EEG signal's classification performance.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125830035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0