Advanced metering infrastructure (AMI) plays an important role in the smart grid. On the one hand, AMI makes the smart grid more vulnerable to cyber attacks. On the other hand, the large amount of available usage data helps detect energy theft using machine learning methods. In this paper, we focus on energy theft that changes a customer's usage pattern in the utility database. To overcome the imbalance between normal and anomalous behavior data, we propose an anomaly detection framework, a semi-supervised generative Gaussian mixture model, whose detection intensity can be adjusted through detection indicator thresholds. Human knowledge is introduced into the model through these detection indicators. We compare the framework with various machine learning methods, including one-class SVM and autoencoders, and show that it performs best in simulations based on real-world energy consumption data.
{"title":"Electricity Theft Detection Using Generative Models","authors":"Qianru Zhang, Meng Zhang, Tinghuan Chen, Jinan Fan, Zhou Yang, Guoqing Li","doi":"10.1109/ICTAI.2018.00050","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00050","url":null,"abstract":"Advanced metering infrastructure (AMI) plays an important role in smart grid. On one hand, AMI makes the smart grid more vulnerable to cyber attacks. On the other hand, large amount of available usage data helps detect energy thefts using machine learning methods. In this paper, we focus on energy theft that results in customer usage pattern change in utility database. To overcome the imbalance problem between normal and anomaly behavior data, we propose an anomaly detection framework called semi-supervised generative Gaussian mixture model, which can be controlled with detection indicator thresholds to adjust the intensity of detection. Human knowledge is successfully introduced into the model using detection indicators. We analyze it with various machine learning based methods including one-class SVM and autoencoder, and show that our framework has the most effective performance validated by simulation that is based on real-world energy consumption data.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129524437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00142
S. Benferhat, Amélie Levray, Karim Tabia
Possibilistic networks are powerful graphical representations of uncertainty based on possibility theory. This paper analyzes the computational complexity of querying min-based and product-based possibilistic networks. It focuses in particular on a very common kind of query: computing the maximum a posteriori explanation (MAP). The main result of the paper is that the decision problem of answering MAP queries in both min-based and product-based possibilistic networks is NP-complete. This complexity result represents an advantage of possibilistic networks over probabilistic networks, since MAP querying is NP^PP-complete in probabilistic Bayesian networks. We provide a proof based on a reduction from the 3SAT decision problem to the MAP decision problem in possibilistic networks, as well as reductions that are useful for implementing MAP queries with SAT solvers.
{"title":"Possibilistic Networks: MAP Query and Computational Analysis","authors":"S. Benferhat, Amélie Levray, Karim Tabia","doi":"10.1109/ICTAI.2018.00142","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00142","url":null,"abstract":"Possibilistic networks are powerful graphical uncertainty representations based on possibility theory. This paper analyzes the computational complexity of querying min-based and product-based possibilistic networks. It particularly focuses on a very common kind of queries: computing maximum a posteriori explanation (MAP). The main result of the paper is to show that the decision problem of answering MAP queries in both min-based and product-based possibilistic networks is NP-complete. Such computational complexity results represent an advantage of possibilistic networks over probabilistic networks since MAP querying is NP^PP -complete in probabilistic Bayesian networks. We provide the proof based on reduction from the 3SAT decision problem to MAP querying possibilistic networks decision problem. As well as reductions that are useful for implementation of MAP queries using SAT solvers.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129071181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00012
Wenwu Yu, Rui Wang, Ruiying Li, Jing Gao, Xiaohui Hu
The popular DQN algorithm is known to suffer from instability and variability that can sometimes hurt its performance. In prior work there is only one target network, which is updated from the latest learned Q-value estimate. In this paper, we present multiple target networks as an extension of Deep Q-Networks (DQN). From the previously learned Q-value estimate networks, we choose the several networks that performed best among all previous networks as auxiliary networks. To determine which network is better, we use the score of each episode as a measure of the quality of the network. The key idea behind our method is that each auxiliary network is good at handling certain states and guides the agent toward the right choices. We apply our method to Atari 2600 games from the OpenAI Gym and find that DQN with auxiliary networks significantly improves performance and stability.
{"title":"Historical Best Q-Networks for Deep Reinforcement Learning","authors":"Wenwu Yu, Rui Wang, Ruiying Li, Jing Gao, Xiaohui Hu","doi":"10.1109/ICTAI.2018.00012","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00012","url":null,"abstract":"The popular DQN algorithm is known to have some instability and variability which make its performance poor sometimes. In prior work, there is only one target network, the network that is updated by the latest learned Q-value estimate. In this paper, we present multiple target networks which are the extension to the Deep Q-Networks (DQN). Based on the previously learned Q-value estimate networks, we choose several networks that perform best in all previous networks as our auxiliary networks. We show that in order to solve the problem of determining which network is better, we use the score of each episode as a measure of the quality of the network. The key behind our method is that each auxiliary network has some states that it is good at handling and guides the agent to make the right choices. We apply our method to the Atari 2600 games from the OpenAI Gym. We find that DQN with auxiliary networks significantly improves the performance and the stability of games.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121058332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00120
Marcelo Pereira, A. Britto, Luiz Oliveira, R. Sabourin
This work describes a new oracle-based Dynamic Ensemble Selection (DES) method in which an Ensemble of Classifiers (EoC) is selected to predict the class of a given test instance (xt). The competence of each classifier is estimated on a local region (LR) of the feature space (Region of Competence, RoC) represented by the most promising k-nearest neighbors (or advisors) of xt according to a discrimination index (D) originally proposed in Item and Test Analysis (ITA) theory. The D value is used to better define the advisors of the RoC, since they suggest the classifiers (local oracles) that compose the EoC. A robust experimental protocol based on 30 classification problems and 20 replications has shown that the proposed DES compares favorably with 15 state-of-the-art dynamic selection methods and with the combination of all classifiers in the pool.
{"title":"Dynamic Ensemble Selection by K-Nearest Local Oracles with Discrimination Index","authors":"Marcelo Pereira, A. Britto, Luiz Oliveira, R. Sabourin","doi":"10.1109/ICTAI.2018.00120","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00120","url":null,"abstract":"This work describes a new oracle based Dynamic Ensemble Selection (DES) method in which an Ensemble of Classifiers (EoC) is selected to predict the class of a given test instance (xt). The competence of each classifier is estimated on a local region (LR) of the feature space (Region of Competence - RoC) represented by the most promising k-nearest neighbors (or advisors) related to xt according to a discrimination index (D) originally proposed in the Item and Test Analysis (ITA) theory. The D value is used to better define the advisors of the RoC since they will suggest the classifiers (local oracles) to compose the EoC. A robust experimental protocol based on 30 classification problems and 20 replications have shown that the proposed DES compares favorably with 15 state-of-the-art dynamic selection methods and the combination of all classifiers in the pool.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122792452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00022
G. Behnke, D. Höller, Susanne Biundo-Stephan
Planning via SAT has proven to be an efficient and versatile planning technique. Its declarative nature allows for easy integration of additional constraints and can harness the progress made in the SAT community without the need to adapt the planner. However, little attention has been paid to SAT planning for hierarchical domains. To ease the encoding, existing approaches for HTN planning require additional assumptions, such as non-recursiveness or totally-ordered methods. Both limit the expressiveness of HTN planning severely. We propose the first propositional encodings able to solve general, i.e., partially-ordered, HTN planning problems, based on a previous encoding for totally-ordered problems. The empirical evaluation of our encoding shows that it significantly outperforms existing HTN planners.
{"title":"Tracking Branches in Trees - A Propositional Encoding for Solving Partially-Ordered HTN Planning Problems","authors":"G. Behnke, D. Höller, Susanne Biundo-Stephan","doi":"10.1109/ICTAI.2018.00022","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00022","url":null,"abstract":"Planning via SAT has proven to be an efficient and versatile planning technique. Its declarative nature allows for an easy integration of additional constraints and can harness the progress made in the SAT community without the need to adapt the planner. However, there has been only little attention to SAT planning for hierarchical domains. To ease encoding, existing approaches for HTN planning require additional assumptions, like non-recursiveness or totally-ordered methods. Both limit the expressiveness of HTN planning severely. We propose the first propositional encodings which are able to solve general, i.e., partially-ordered, HTN planning problems, based on a previous encoding for totally-ordered problems. The empirical evaluation of our encoding shows that it outperforms existing HTN planners significantly.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132730762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00084
Abir Chaabani, L. B. Said
Bi-level optimization problems (BLOPs) are a class of challenging problems with two levels of optimization tasks. The usefulness of bi-level optimization in designing hierarchical decision processes has prompted several researchers, in particular the evolutionary computation community, to pay more attention to such problems. Several solution approaches have been proposed; however, most of them are restricted to the continuous case. Motivated by this observation, we recently proposed a Co-evolutionary Decomposition-based Algorithm (CODBA-II) to solve combinatorial bi-level problems. The CODBA-II scheme has been able to improve bi-level performance and to bring down the computational expense significantly compared with other competitive approaches in this research area. In this paper, we present an extension of the recently proposed CODBA-II algorithm. The improved version, called CODBA-IILS, incorporates a local search process at both the upper and lower levels to speed up convergence. The improved results are demonstrated on two different sets of test problems based on bi-level production-distribution problems in supply chain management, and comparisons against contemporary approaches are also provided.
{"title":"Hybrid CODBA-II Algorithm Coupling a Co-Evolutionary Decomposition-Based Algorithm with Local Search Method to Solve Bi-Level Combinatorial Optimization","authors":"Abir Chaabani, L. B. Said","doi":"10.1109/ICTAI.2018.00084","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00084","url":null,"abstract":"Bi-level optimization problems (BLOPs) are a class of challenging problems with two levels of optimization tasks. The usefulness of bi-level optimization in designing hierarchical decision processes prompted several researchers, in particular the evolutionary computation community, to pay more attention to such kind of problems. Several solution approaches have been proposed to solve these problems; however, most of them are restricted to the continuous case. Motivated by this observation, we have recently proposed a Co-evolutionary Decomposition-based Algorithm (CODBA-II) to solve combinatorial bi-level problems. CODBA-II scheme has been able to improve the bi-level performance and to bring down the computational expense significantly as compared to other competitive approaches within this research area. In this paper, we present an extension of the recently proposed CODBA-II algorithm. The improved version, called CODBA-IILS, further improves the algorithm by incorporating a local search process to both upper and lower levels in order to help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems based on the bi-level production-distribution problems in supply chain management, and comparison results against the contemporary approaches are also provided.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"23 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132736916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00021
Lahari Poddar, W. Hsu, M. Lee, Shruti Subramaniyam
Detecting rumors is a crucial task requiring significant time and manual effort in the form of investigative journalism. In social media such as Twitter, unverified information can spread rapidly, making early detection of potentially false rumors critical. We observe that people's early reactions to an emerging claim can be predictive of its veracity. We propose a novel neural network architecture that uses the stances of people engaging in a Twitter conversation about a rumor to detect its veracity. Our proposed solution comprises two key steps. We first detect the stance of each individual tweet by considering its textual content, its timestamp, and the sequential conversation structure leading up to the target tweet. Then we use the predicted stances of all tweets in a conversation tree to determine the veracity of the original rumor. We evaluate our model on the SemEval-2017 rumor detection dataset and demonstrate that our solution outperforms state-of-the-art approaches on both the stance prediction and rumor veracity prediction tasks.
{"title":"Predicting Stances in Twitter Conversations for Detecting Veracity of Rumors: A Neural Approach","authors":"Lahari Poddar, W. Hsu, M. Lee, Shruti Subramaniyam","doi":"10.1109/ICTAI.2018.00021","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00021","url":null,"abstract":"Detecting rumors is a crucial task requiring significant time and manual effort in forms of investigative journalism. In social media such as Twitter, unverified information can get disseminated rapidly making early detection of potentially false rumors critical. We observe that the early reactions of people towards an emerging claim can be predictive of its veracity. We propose a novel neural network architecture using the stances of people engaging in a conversation on Twitter about a rumor for detecting its veracity. Our proposed solution comprises two key steps. We first detect the stance of each individual tweet, by considering the textual content of the tweet, its timestamp, as well as the sequential conversation structure leading up to the target tweet. Then we use the predicted stances of all tweets in a conversation tree to determine the veracity of the original rumor. We evaluate our model on the SemEval2017 rumor detection dataset and demonstrate that our solution outperforms the state-of-the-art approaches for both stance prediction and rumor veracity prediction tasks.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134321990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00082
Willian Heitor Martins, Lucia Helena Souza Alves de Santiago, Rafael de Santiago, L. Lamb
The family health strategy in Brazil is a program that aims at universal access to actions and services of health promotion, protection, and recovery. In this nationwide program, teams of health professionals are responsible for attending to a community in a specific area and promoting health actions there. These teams perform home visits to support patients in their target areas who require special health care. To help schedule these visits, we propose a new bi-objective problem formulation and two solution methods. The main method is an Ant Colony Optimization (ACO) based heuristic. The other is an exact linear programming algorithm designed to enable experimental comparisons. Our experiments suggest that the ACO heuristic surpasses the exact solver in runtime while reaching the optimal solutions for all instances with known optima. An amortized complexity analysis showed that the ACO heuristic has sublinear complexity in the number of patients.
{"title":"Effective Ant Colony Optimization Solution for the Brazilian Family Health Team Scheduling Problem","authors":"Willian Heitor Martins, Lucia Helena Souza Alves de Santiago, Rafael de Santiago, L. Lamb","doi":"10.1109/ICTAI.2018.00082","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00082","url":null,"abstract":"The family health strategy in Brazil is a program that aims at universal access to actions and services of health promotion, protection, and recovery. In this nationwide program, teams of health professionals are responsible for attending and promoting health actions to a community of a specific area. These teams perform home visits that will support the patients of their respective target areas who demand special health care. To help in the scheduling process of these visits, we propose a new bi-objective problem and two methods for its implementation. The main method is an Ant Colony Optimization-based (ACO) heuristic. The other one is an exact linear programming algorithm designed to allow for experimental comparisons. Our experiments suggest that our ACO surpassed the exact solver in runtime, reaching the optimal solutions for all the solutions known. Amortized complexity analysis showed that the ACO heuristic has sublinear complexity over the number of patients.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114311555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00122
Peiqi Liu, S. Zhong, Zhong Ming, Yan Liu
Dialogue response generation is one of the hot topics in natural language processing, but there is still a long way to go before systems can generate human-like dialogues. A good evaluation method will help narrow the gap between machine and human dialogue generation. Unfortunately, current evaluation methods cannot measure whether a dialogue response generation system is able to produce high-quality, knowledge-related, and informative dialogues. Aiming to identify and measure the presence of information in dialogues, we propose a novel automatic evaluation metric. Borrowing from the knowledge representation used in knowledge bases, we define heuristic rules to extract information triples from dialogue pairs, and we design an information matching method to measure the probability that information is present in a dialogue. In experiments, our proposed metric demonstrates its effectiveness in dialogue selection and model evaluation on the Reddit dataset (English) and the Weibo dataset (Chinese).
{"title":"Information-Oriented Evaluation Metric for Dialogue Response Generation Systems","authors":"Peiqi Liu, S. Zhong, Zhong Ming, Yan Liu","doi":"10.1109/ICTAI.2018.00122","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00122","url":null,"abstract":"Dialogue response generation system is one of the hot topics in natural language processing, but it is still a long way to go before it can generate human-like dialogues. A good evaluation method will help narrow the gap between the machine and human in dialogue generation. Unfortunately, current evaluation methods cannot measure whether the dialogue response generation system is able to produce high-quality, knowledge-related, and informative dialogues. Aiming to identify and measure the existence of information in dialogues, we propose a novel automatic evaluation metric. By learning from the knowledge representation method in knowledge base, we define the heuristic rules to extract the information triples from dialogue pairs. And we design an information matching method to measure the probability of the existence of information in a dialogue. In experiments, our proposed metric demonstrates its effectiveness in dialogue selection and model evaluation on the Reddit dataset (English) and the Weibo dataset (Chinese).","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116962092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-11-01  DOI: 10.1109/ICTAI.2018.00081
L. Chrpa, M. Vallati
Most learning-for-planning approaches rely on analysis of training plans. This is especially the case for one of the best-known learning approaches: the generation of macro-operators (macros). These plans, usually generated from a very limited set of training tasks, must provide grounds for extracting useful knowledge that can be fruitfully exploited by planning engines. To that end, training tasks have to be representative of the larger class of planning tasks on which planning engines will then be run. A pivotal question is how such a set of training tasks can be selected. To address this question, we introduce a notion of structural similarity of plans. We conjecture that if a class of planning tasks presents structurally similar plans, then a small subset of these tasks is representative enough to learn the same knowledge (macros) as could be learnt from a larger set of tasks of the same class. We have tested our conjecture on two state-of-the-art macro generation approaches. Our large empirical analysis, covering seven state-of-the-art planners and fourteen benchmark domains from the International Planning Competition, generally confirms the conjecture, which can be exploited to select small yet informative training sets of tasks.
{"title":"Determining Representativeness of Training Plans: A Case of Macro-Operators","authors":"L. Chrpa, M. Vallati","doi":"10.1109/ICTAI.2018.00081","DOIUrl":"https://doi.org/10.1109/ICTAI.2018.00081","url":null,"abstract":"Most learning for planning approaches rely on analysis of training plans. This is especially the case for one of the best-known learning approach: the generation of macro-operators (macros). These plans, usually generated from a very limited set of training tasks, must provide a ground to extract useful knowledge that can be fruitfully exploited by planning engines. In that, training tasks have to be representative of the larger class of planning tasks on which planning engines will then be run. A pivotal question is how such a set of training tasks can be selected. To address this question, here we introduce a notion of structural similarity of plans. We conjecture that if a class of planning tasks presents structurally similar plans, then a small subset of these tasks is representative enough to learn the same knowledge (macros) as could be learnt from a larger set of tasks of the same class. We have tested our conjecture by focusing on two state-of-the-art macro generation approaches. Our large empirical analysis considering seven state-of-the-art planners, and fourteen benchmark domains from the International Planning Competition, generally confirms our conjecture which can be exploited for selecting small-yet-informative training sets of tasks.","PeriodicalId":254686,"journal":{"name":"2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116978428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}