Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00030
E. Marin, Mohammed Almukaynizi, P. Shakarian
With cyber-attack incidents becoming widespread, cybersecurity has become a major concern for organizations. Time, money and resources wasted on countering irrelevant cyber threats can turn an organization into the next victim of malicious hackers. In addition, the online hacking community has grown rapidly, making the cyber threat landscape hard to keep track of. In this work, we describe an AI tool that uses a temporal logical framework to learn rules correlating malicious hacking activity with real-world cyber incidents, and leverages these rules to predict future cyber-attacks. The framework considers socio-personal and technical indicators of enterprise attacks, analyzing hackers and their strategies as they plan cyber offensives online. Our results demonstrate the viability of the proposed approach, which outperforms baseline systems by an average F1-score increase of 138%, 71% and 17% for prediction intervals of 1, 2 and 3 days respectively, giving security teams mechanisms to predict and avoid cyber-attacks.
{"title":"Reasoning About Future Cyber-Attacks Through Socio-Technical Hacking Information","authors":"E. Marin, Mohammed Almukaynizi, P. Shakarian","doi":"10.1109/ICTAI.2019.00030","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00030","url":null,"abstract":"With the widespread of cyber-attack incidents, cybersecurity has become a major concern for organizations. The waste of time, money and resources while organizations counter irrelevant cyber threats can turn them into the next victim of malicious hackers. In addition, the online hacking community has grown rapidly, making the cyber threat landscape hard to keep track of. In this work, we describe an AI tool that uses a temporal logical framework to learn rules that correlate malicious hacking activity with real-world cyber incidents, aiming to leverage these rules for predicting future cyber-attacks. The framework considers socio-personal and technical indicators of enterprise attacks, analyzing the hackers and their strategies when they are planning cyber offensives online. Our results demonstrate the viability of the proposed approach, which outperforms baseline systems by an average F1 score increase of 138%, 71% and 17% for intervals of 1, 2 and 3 days respectively, providing security teams mechanisms to predict and avoid cyber-attacks.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114986404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00236
Shixuan Wang, Jiabin Yuan, Jing Wen
With the popularity of mobile terminals and the convenience of mobile operation, more and more private data is stored on mobile phones, which makes users pay closer attention to mobile security. Data from mobile motion sensors can be used to construct a user's behavioral and biometric characteristics. The principle is to capture the subtle changes induced in a smartphone by the way the user holds it and touches the screen; these changes are unique to each user. This paper studies the effect of phone orientation on continuous authentication based on mobile motion sensors, and constructs an adaptive phone-orientation method for continuous authentication. An orientation detection model is built using K-means and Random Forest, and the authentication model is built using a one-class SVM. Our experiments show that mobile motion sensor data differs greatly across phone orientations, so the effect of multi-orientation data on authentication accuracy must be taken into account. The adaptive phone-orientation method is better suited to scenarios where users hold their devices in different orientations, and considering the orientation of the data improves both the robustness of the system and the accuracy of authentication.
{"title":"Adaptive Phone Orientation Method for Continuous Authentication Based on Mobile Motion Sensors","authors":"Shixuan Wang, Jiabin Yuan, Jing Wen","doi":"10.1109/ICTAI.2019.00236","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00236","url":null,"abstract":"With the popularity of mobile terminals and the convenience of mobile operating, more and more private data is stored in mobile phones, which makes users pay more attention to mobile security. The data of mobile motion sensors are used to construct the user's behavioral characteristics and biometric characteristics. The principle is to capture the subtle changes in the smartphones caused by the user holding smartphones and touching screens. These changes are unique to different users. This paper studies the effects of mobile phone orientations on continuous authentication based on mobile motion sensors. Additionally, this paper constructs an adaptive phone orientation method for continuous authentication. An orientation detection model is constructed using K-means and Random Forest. The authentication model is constructed by using one-class SVM. Meanwhile, our experiments show that the data of mobile motion sensors are great difference in different phone orientations. We believe considering the situation of multi-orientations data affecting authentication accuracy. The adaptive phone orientation method can better fit for the scenarios where users use mobile devices in different phone orientations. Considering the orientation of the data will improve the robustness of the system and the accuracy of authentication.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117117701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00264
Kazuki Nishisaka, H. Iima
Just Generation Gap (JGG) is a generational alternation model for real-coded genetic algorithms (GAs) and is excellent at finding the global optimum of a function optimization problem. However, its population size is large, and therefore its convergence speed is low. One way to accelerate convergence is to reduce the population size; however, if the size is reduced throughout the search, the population diversity is lost, which may cause the search to miss the global optimum. The population size should therefore be reduced only during a part of the search in which the population diversity is not lost. In this paper, we propose a real-coded GA that achieves fast convergence by introducing population-size reduction into JGG. In the proposed method, the population size is reduced only during an early or a late search period. The performance of the proposed method is empirically evaluated by comparing it with plain JGG and an existing GA.
{"title":"Real-Coded Genetic Algorithm Realizing Fast Convergence by Reducing Its Population Size","authors":"Kazuki Nishisaka, H. Iima","doi":"10.1109/ICTAI.2019.00264","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00264","url":null,"abstract":"Just generation gap (JGG) is a generational alternation model of real-coded genetic algorithms (GAs), and is excellent at finding the global optimum solution of a function optimization problem. However, its population size is large, and therefore its convergence speed is low. A method to accelerate the convergence speed is to reduce the population size. However, if it is reduced throughout the search by JGG, the population diversity is lost, which may cause the failure to find the global optimum solution. The population size should be reduced during only a part of the search period during which the population diversity is not lost. In this paper, we propose a real-coded GA realizing fast convergence by introducing the reduction of the population size into JGG. In the proposed method, the population size is reduced during only an early or late search period. The performance of the proposed method is empirically evaluated by comparing it with only JGG and an existing GA.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114552825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00101
Sarthak Ghosh, C. Ramakrishnan
Decision-making based on probabilistic reasoning often involves selecting a subset of expensive observations that best predict the system state. In an earlier work, adopting the general notion of value of information (VoI) first introduced by Krause and Guestrin, Ghosh and Ramakrishnan considered the problem of determining optimal conditional observation plans in temporal graphical models, based on non-myopic (non-greedy) VoI, over a finite time horizon. They cast the problem as determining optimal policies in finite-horizon, non-discounted Markov Decision Processes (MDPs). However, there are many practical scenarios where a time horizon is undefinable. In this paper, we consider the VoI optimization problem over an infinite (or equivalently, undefined) time horizon. Adopting an approach similar to Ghosh and Ramakrishnan's, we cast this problem as determining optimal policies in infinite-horizon, finite-state, discounted MDPs. Although our MDP-based framework addresses Dynamic Bayesian Networks (DBNs) that are more restricted than those addressed by Ghosh and Ramakrishnan, we incorporate Krause and Guestrin's general idea of VoI even though it was fundamentally envisioned for finite-horizon settings. We establish the utility of our approach on two graphical models based on real-world datasets.
{"title":"Optimizing Value of Information Over an Infinite Time Horizon","authors":"Sarthak Ghosh, C. Ramakrishnan","doi":"10.1109/ICTAI.2019.00101","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00101","url":null,"abstract":"Decision-making based on probabilistic reasoning often involves selecting a subset of expensive observations that best predict the system state. In an earlier work, adopting the general notion of value of information (VoI) first introduced by Krause and Guestrin, Ghosh and Ramakrishnan considered the problem of determining optimal conditional observation plans in temporal graphical models, based on non-myopic (non-greedy) VoI, over a finite time horizon. They cast the problem as determining optimal policies in finite-horizon, non-discounted Markov Decision Processes (MDPs). However, there are many practical scenarios where a time horizon is undefinable. In this paper, we consider the VoI optimization problem over an infinite (or equivalently, undefined) time horizon. Adopting an approach similar to Ghosh and Ramakrishnan's, we cast this problem as determining optimal policies in infinite-horizon, finite-state, discounted MDPs. Although our MDP-based framework addresses Dynamic Bayesian Networks (DBNs) that are more restricted than those addressed by Ghosh and Ramakrishnan, we incorporate Krause and Guestrin's general idea of VoI even though it was fundamentally envisioned for finite-horizon settings. We establish the utility of our approach on two graphical models based on real-world datasets.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122123740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relation extraction focuses on automatically finding unknown relational facts in unstructured text. Most current methods, especially distant-supervision relation extraction (DSRE), have been successfully applied to this goal. DSRE combines a knowledge graph and a text corpus to jointly generate plenty of labeled data without human effort. However, existing DSRE methods ignore noisy words within sentences and suffer from the noisy-labelling problem, and the additional knowledge is represented in a common semantic space, ignoring the semantic-space difference between relations and entities. To address these problems, this study proposes a novel hierarchical attention model, the Bi-GRU-based Knowledge Graph Attention Model (BG2KGA), for DSRE using the Bidirectional Gated Recurrent Unit (Bi-GRU) network. BG2KGA contains word-level and sentence-level attention guided by the additional knowledge graph, highlighting the key words and sentences that contribute most to the final relation representations. Furthermore, the additional knowledge graph is embedded in a multi-semantic vector space to capture relations over 1-N, N-1 and N-N entity pairs. Experiments are conducted on a widely used distant-supervision dataset. The results show that the proposed model outperforms current methods, improving the Precision/Recall (PR) curve area by 8% to 16% compared to state-of-the-art models; the AUC of BG2KGA reaches 0.468 in the best case.
{"title":"Distant-Supervised Relation Extraction with Hierarchical Attention Based on Knowledge Graph","authors":"Hong Yao, Lijun Dong, Shiqi Zhen, Xiaojun Kang, Xinchuan Li, Qingzhong Liang","doi":"10.1109/ICTAI.2019.00040","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00040","url":null,"abstract":"Relation Extraction is concentrated on finding the unknown relational facts automatically from the unstructured texts. Most current methods, especially the distant supervision relation extraction (DSRE), have been successfully applied to achieve this goal. DSRE combines knowledge graph and text corpus to corporately generate plenty of labeled data without human efforts. However, the existing methods of DSRE ignore the noisy words within sentences and suffer from the noisy labelling problem; the additional knowledge is represented in a common semantic space and ignores the semantic-space difference between relations and entities. To address these problems, this study proposes a novel hierarchical attention model, named the Bi-GRU-based Knowledge Graph Attention Model (BG2KGA) for DSRE using the Bidirectional Gated Recurrent Unit (Bi-GRU) network. BG2KGA contains the word-level and sentence-level attentions with the guidance of additional knowledge graph, to highlight the key words and sentences respectively which can contribute more to the final relation representations. Further-more, the additional knowledge graph are embedded in the multi-semantic vector space to capture the relations in 1-N, N-1 and N-N entity pairs. Experiments are conducted on a widely used dataset for distant supervision. The experimental results have shown that the proposed model outperforms the current methods and can improve the Precision/Recall (PR) curve area by 8% to 16% compared to the state-of-the-art models; the AUC of BG2KGA can reach 0.468 in the best case.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128440596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ictai.2019.00005
Mohammed Atiquzzaman
The 17th annual IEEE International Conference on Bioinformatics and Bioengineering aims at building synergy between Bioinformatics and Bioengineering/Biomedical, two complementary disciplines that hold great promise for the advancement of research and development in complex medical and biological systems, agriculture, environment, public health, and drug design. Research and development in these two areas are impacting the science and technology in fields such as medicine, food production, and forensics by advancing fundamental concepts in molecular biology, by helping us understand living organisms at multiple levels, by developing innovative implants and bio-prosthetics, and by improving tools and techniques for the detection, prevention and treatment of diseases. The BIBE series provides a common platform for the cross-fertilization of ideas, and for shaping knowledge and scientific achievements by bridging these two very important and complementary disciplines into an interactive and attractive forum.
{"title":"General Chair’s Foreword","authors":"Mohammed Atiquzzaman","doi":"10.1109/ictai.2019.00005","DOIUrl":"https://doi.org/10.1109/ictai.2019.00005","url":null,"abstract":"The 17 annual IEEE International Conference on Bioinformatics and Bioengineering aims at building synergy between Bioinformatics and Bioengineering/Biomedical, two complementary disciplines that hold great promise for the advancement of research and development in complex medical and biological systems, agriculture, environment, public health, drug design. Research and development in these two areas are impacting the science and technology in fields such as medicine, food production, forensics, etc. by advancing fundamental concepts in molecular biology, by helping us understand living organisms at multiple levels, by developing innovative implants and bio-prosthetics, and by improving tools and techniques for the detection, prevention and treatment of diseases. The BIBE series provides a common platform for the cross fertilization of ideas, and for shaping knowledge and scientific achievements by bridging these two very important and complementary disciplines into an interactive and attractive forum.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128714336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data are omnipresent nowadays and contain knowledge and patterns that machine learning (ML) algorithms can extract in order to make decisions or perform a task without explicit instructions. To achieve that, these algorithms learn a mathematical model from sample data. However, there are numerous ML algorithms, all learning different models of reality. Furthermore, the behavior of these algorithms can be altered by modifying some of their plethora of hyperparameters. Cleverly tuning these algorithms is costly but essential to reach decent performance, yet it requires a lot of expertise and remains hard even for experts, who tend to resort to exploration-only approaches like random search and grid search. The field of AutoML has consequently emerged in the quest for automated machine learning processes that are less expensive than brute-force searches. In this paper we continue the research initiated on the Tree-based Pipeline Optimization Tool (TPOT), an AutoML system based on Evolutionary Algorithms (EAs). EAs are typically slow to converge, which makes TPOT incapable of scaling to large datasets. As a consequence, we introduce TPOT-SH, inspired by the concept of Successive Halving used in multi-armed bandit problems. This solution allows TPOT to explore the search space faster and achieve much better performance on larger datasets.
{"title":"TPOT-SH: A Faster Optimization Algorithm to Solve the AutoML Problem on Large Datasets","authors":"Laurent Parmentier, Olivier Nicol, Laetitia Vermeulen-Jourdan, Marie-Éléonore Kessaci","doi":"10.1109/ICTAI.2019.00072","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00072","url":null,"abstract":"Data are omnipresent nowadays and contain knowledge and patterns that machine learning (ML) algorithms can extract so as to take decisions or perform a task without explicit instructions. To achieve that, these algorithms learn a mathematical model using sample data. However, there are numerous ML algorithms, all learning different models of reality. Furthermore, the behavior of these algorithms can be altered by modifying some of their plethora of hyperparameters. Cleverly tuning these algorithms is costly but essential to reach decent performance. Yet it requires a lot of expertise and remains hard even for experts who tend to resort to exploration-only approaches like random search and grid search. The field of AutoML has consequently emerged in the quest for automatized machine learning processes that would be less expensive than brute force searches. In this paper we continue the research initiated on the Tree-based Pipeline Optimization Tool (TPOT), an AutoML based on Evolutionary Algorithms (EA). EAs are typically slow to converge which makes TPOT incapable of scaling to large datasets. As a consequence, we introduce TPOT-SH inspired from the concept of Successive Halving used in Multi-Armed Bandit problems. This solution allows TPOT to explore the search space faster and have much better performance on larger datasets.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128729019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00093
Shangbo Mao, V. Natarajan, L. Chia, G. Huang
Inspection of metal surface textures using computer vision and machine learning techniques plays an important role in Automated Visual Inspection (AVI) systems. Texture recognition on metal surfaces is challenging because the characteristics of each texture type depend on the properties of the metal surface when captured under different lighting conditions. Since these textures have no obvious repetitive patterns like general textures, this results in high intra-class diversity. Prior knowledge has shown that surface properties such as surface curvature and depth are discriminant of different texture types on metal surfaces. Since the scale, shape and location of textures within the same type are not fixed, scale and spatial-ordering information are less important for differentiating between texture types. Therefore, surface property, scale invariance and the order-less property should all be considered when designing an image feature for metal surface texture recognition. This paper proposes the Order-less Scale Invariant Gradient Local Auto-Correlation (OS-GLAC) descriptor, which meets all three requirements for robust texture recognition. The experimental results show that OS-GLAC robustly separates different metal surface texture types. In addition, we observed that OS-GLAC is useful not only for texture recognition on metal surfaces but also for general texture recognition when combined with pre-trained deep-learning features, as the two capture complementary information. The experimental results show that such a combination achieves competitive results on three well-established general texture datasets, i.e., KTH-TIPS-2a, KTH-TIPS-2b and FMD.
{"title":"Texture Recognition on Metal Surface using Order-Less Scale Invariant GLAC","authors":"Shangbo Mao, V. Natarajan, L. Chia, G. Huang","doi":"10.1109/ICTAI.2019.00093","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00093","url":null,"abstract":"Inspection of metal surface textures using computer vision and machine learning techniques plays an important role in Automated Visual Inspection (AVI) systems. Texture recognition on metal surface is challenging because the characteristics of each texture type are dependent on the properties of the metal surface when captured under different lighting conditions. Since these textures have no obvious repetitive patterns like general textures, this results in high intra-class diversities. Prior knowledge has shown that surface properties such as surface curvature and depth are discriminant to different texture types on metal surface. Since scale, shapes and location of textures within the same type are not fixed, scale property and spatial ordering information are less important for differentiating between texture types. There-fore, surface property, scale invariance and order-less property should be considered when exploring a suitable image feature for metal surface texture recognition. This paper proposes Order-less Scale Invariant Gradient Local Auto-Correlation (OS-GLAC) which meets all three requirements for robust texture recognition. The experiment results show that OS-GLAC is robust to separate different metal surface texture types. In addition, we observed that OS-GLAC is not only useful for texture recognition on metal surface but also for general texture recognition when combined with pre-trained deep learning features as these two features capture complimentary information. The experiment results show that such a combination of OS-GLAC achieves competitive results on three well-established general texture datasets i.e., KTH-TIP-2a, KTH-TIPS-2b and FMD.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127024862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01, DOI: 10.1109/ICTAI.2019.00062
Sultan Ahmed, Malek Mouhoub
The Conditional Preference Network (CP-net) represents a user's conditional ceteris paribus (all else being equal) preference statements in a graphical manner. In general, an acyclic CP-net induces a strict partial order over the outcomes. The task of comparing two outcomes (dominance testing) is generally PSPACE-complete, which is a limitation of this intuitive model, especially when representing and solving preference-based constrained optimization problems. In order to overcome this limitation in practice, we propose a divide and conquer algorithm that compares two outcomes according to dominance testing. The algorithm divides the original CP-net into sub-CP-nets and recursively calls itself on each sub-CP-net until it reaches a termination criterion, at which point the answer to the dominance query is returned. With a theoretical analysis of the time performance, we demonstrate that the proposed algorithm outperforms existing methods.
{"title":"A Divide and Conquer Algorithm for Dominance Testing in Acyclic CP-Nets","authors":"Sultan Ahmed, Malek Mouhoub","doi":"10.1109/ICTAI.2019.00062","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00062","url":null,"abstract":"The Conditional Preference Network (CP-net) represents user's conditional ceteris paribus (all else being equal) preference statements in a graphical manner. In general, an acyclic CP-net induces a strict partial order over the outcomes. The task of comparing two outcomes (dominance testing) is generally PSPACE-complete, which is a limitation for this intuitive model, especially when representing and solving preference-based constrained optimization problems. In order to overcome this limitation in practice, we propose a divide and conquer algorithm that compares two outcomes according to dominance testing. The algorithm divides the original CP-net into sub CP-nets, and recursively calls itself for each of the sub CP-nets until it reaches to a termination criterion. In the termination criterion, the answer of the dominance query is returned. With a theoretical analysis of the time performance, we demonstrate that the proposed algorithm outperforms the existing methods.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123361229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RGB-D dense mapping has become more and more popular; however, when encountering rapid movement or shaking, the robustness and accuracy of most RGB-D dense mapping methods degrade, and the generated maps become overlapped or distorted due to drift in pose estimation. In this paper, we present a novel RGB-D dense mapping method that can obtain an accurate, robust and globally consistent map even under such complex conditions. First, an improved ORBSLAM method, which tightly couples RGB-D and inertial information to estimate the current pose of the robot, is introduced for accurate pose estimation, instead of the traditional frame-to-frame approach used in most RGB-D dense mapping methods. Besides, the TSDF (Truncated Signed Distance Function) method is used to effectively fuse depth frames into a global model and to keep the generated map globally consistent. Furthermore, since drift error is inevitable, a deformation graph is constructed to minimize the consistency error in the global model and further improve the mapping performance. The performance of the proposed method was validated by extensive localization and mapping experiments on public datasets and real-scene datasets, showing stronger accuracy and robustness than other state-of-the-art methods. What's more, the proposed method achieves real-time performance when implemented on a GPU.
{"title":"Accurate and Robust RGB-D Dense Mapping with Inertial Fusion and Deformation-Graph Optimization","authors":"Yong Liu, Liming Bao, Chaofan Zhang, Wen Zhang, Yingwei Xia","doi":"10.1109/ICTAI.2019.00249","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00249","url":null,"abstract":"RGB-D dense mapping has become more and more popular, however, when encountering rapid movement or shake, the robustness and accuracy of most RGB-D dense mapping methods are degraded and the generated maps are overlapped or distorted, due to the drift of pose estimation. In this paper, we present a novel RGB-D dense mapping method, which can obtain accurate, robust and global consistency map even in the above complex conditions. Firstly, the improved ORBSLAM method, which tightly-couples RGB-D information and inertial information to estimate the current pose of robot, is firstly introduced for accurate pose estimation rather than traditional frame-to-frame method in most RGB-D dense mapping methods. Besides, the TSDF (Truncated Signed Distance Function) method is used to effectively fuse depth frame into a global model, and to keep the global consistency of the generated map. Furthermore, since the drift error is inevitable, a deformation graph is constructed to minimize the consistent error in global model, to further improve the mapping performance. The performance of the proposed RGB-D dense mapping method was validated by extensive localization and mapping experiments on public datasets and real scene datasets, and it showed strongly accuracy and robustness over other state-of-the-art methods. What's more, the proposed method can achieve real-time performance implemented on GPU.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127932370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}