{"title":"Machine Learning for Android Malware Detection Using Permission and API Calls","authors":"Naser Peiravian, Xingquan Zhu","doi":"10.1109/ICTAI.2013.53","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.53","url":null,"abstract":"The Google Android mobile phone platform is one of the most anticipated smartphone operating systems on the market. The open-source Android platform allows developers to take full advantage of the mobile operating system, but it also raises significant issues related to malicious applications. On one hand, the popularity of Android attracts many developers to build their applications on this platform. On the other hand, the growing number of applications provides fertile ground for some users to develop different kinds of malware and distribute them through the Google Android market or other third-party markets disguised as safe applications. In this paper, we propose to combine permissions and API (Application Program Interface) calls and use machine learning methods to detect malicious Android Apps. In our design, permissions are extracted from each App's profile information and APIs are extracted from the packed App file, using packages and classes to represent API calls. By using permissions and API calls as features to characterize each App, we can learn a classifier to identify whether an App is potentially malicious or not. An inherent advantage of our method is that it does not require any dynamic tracing of system calls but only uses simple static analysis to find the system functions involved in each App. In addition, because permission settings and APIs are always available for each App, our method can be generalized to all mobile applications. Experiments on real-world Apps with more than 1200 malware and 1200 benign samples validate the algorithm's performance.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127727918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
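The permission + API-call representation described in this abstract can be sketched as a binary feature vector. The feature names and sample App below are invented for illustration; the paper's actual vocabulary is mined from App profiles and packed App files, and its classifiers are the standard machine-learning ones, not shown here.

```python
# Hypothetical feature vocabulary: requested permissions and called APIs.
# These names are illustrative stand-ins, not the paper's feature set.
VOCAB = [
    "perm.INTERNET", "perm.SEND_SMS", "perm.READ_CONTACTS",
    "api.telephony.SmsManager", "api.net.HttpURLConnection",
]

def to_feature_vector(permissions, api_calls, vocab=VOCAB):
    """Binary vector: 1 iff the App requests the permission / calls the API."""
    used = set(permissions) | set(api_calls)
    return [1 if f in used else 0 for f in vocab]

# A toy App requesting SMS-related permissions and using the SMS API.
vec = to_feature_vector(
    permissions=["perm.SEND_SMS", "perm.INTERNET"],
    api_calls=["api.telephony.SmsManager"],
)
```

Vectors like `vec` would then be fed to any off-the-shelf classifier; the static extraction step is what keeps the method free of dynamic system-call tracing.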
{"title":"Learning Useful Macro-actions for Planning with N-Grams","authors":"A. Dulac, D. Pellier, H. Fiorino, D. Janiszek","doi":"10.1109/ICTAI.2013.123","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.123","url":null,"abstract":"Automated planning has achieved significant breakthroughs in recent years. Nonetheless, attempts to improve search algorithm efficiency remain the primary focus of most research. However, it is also possible to build on previous searches and learn from previously found solutions. Our approach consists of learning macro-actions and adding them to the planner's domain. A macro-action is an action sequence selected for application at search time and applied as a single indivisible action. Carefully chosen macros can drastically improve planning performance by reducing the search space depth. However, macros also increase the branching factor. Therefore, the use of macros entails a utility problem: a trade-off has to be found between the benefit of adding macros to speed up the goal search and the overhead caused by increasing the branching factor in the search space. In this paper, we propose an online, domain- and planner-independent approach to learn 'useful' macros, i.e. macros that address the utility problem. These useful macros are obtained by statistical and heuristic filtering of a domain-specific macro library. The library is created from the most frequent action sequences derived from an n-gram analysis of successful plans previously computed by the planner. The relevance of this approach is demonstrated by experiments on International Planning Competition domains.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114154233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
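The n-gram mining step this abstract describes — counting frequent action sequences across successful plans to seed a macro library — can be sketched in a few lines. The plans and threshold below are toy data; the paper's statistical and heuristic filtering of the resulting candidates is not reproduced here.

```python
from collections import Counter

def frequent_macros(plans, n=2, min_count=2):
    """Count action n-grams across successful plans; sequences occurring
    at least min_count times become macro-action candidates."""
    counts = Counter()
    for plan in plans:
        for i in range(len(plan) - n + 1):
            counts[tuple(plan[i:i + n])] += 1
    return [gram for gram, c in counts.items() if c >= min_count]

# Three toy plans from a hypothetical logistics-like domain.
plans = [
    ["pick", "move", "drop", "move"],
    ["pick", "move", "drop"],
    ["move", "pick", "move", "drop"],
]
macros = frequent_macros(plans, n=2, min_count=3)
```

Each surviving candidate would then be added to the planner's domain as a single indivisible action, which is where the utility trade-off between search-depth reduction and branching-factor growth arises.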
{"title":"Optimizing Dynamic Ensemble Selection Procedure by Evolutionary Extreme Learning Machines and a Noise Reduction Filter","authors":"Tiago Lima, Teresa B Ludermir","doi":"10.1109/ICTAI.2013.87","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.87","url":null,"abstract":"An ensemble of classifiers is an effective way of improving on the performance of individual classifiers. However, the choice of ensemble members can become a very difficult task, which, in some cases, can lead to ensembles with no performance improvement. Dynamic ensemble selection systems aim to select the group of classifiers that is most adequate for a specific query pattern. In this paper, we present a strategy that optimizes the dynamic ensemble selection procedure. Initially, a pool of classifiers is built automatically through an evolutionary algorithm. Afterwards, we improve the regions of competence in order to avoid noise and create smoother class boundaries. Finally, we use a dynamic ensemble selection rule. Extreme Learning Machines were used in the classification phase. The performance of the system was compared against other methods.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114223894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
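The core idea of dynamic ensemble selection — choosing, per query, the classifiers that perform best on the query's local region of competence — can be illustrated on toy 1-D data. The pool, distance measure, and selection rule below are simplified stand-ins; the paper builds its pool with an evolutionary algorithm over Extreme Learning Machines and additionally filters noise from the regions of competence.

```python
def region_of_competence(x, validation, k=3):
    """The k labeled validation points nearest to the query x."""
    return sorted(validation, key=lambda p: abs(p[0] - x))[:k]

def select_ensemble(pool, x, validation, k=3):
    """Keep only the classifiers that are correct on the whole local region
    (a simple local-accuracy selection rule, for illustration)."""
    region = region_of_competence(x, validation, k)
    return [clf for clf in pool if all(clf(px) == y for px, y in region)]

# Two toy classifiers: a threshold at 0, and a (locally wrong) constant.
threshold = lambda v: int(v > 0)
always_one = lambda v: 1

validation = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
chosen = select_ensemble([threshold, always_one], x=-1.5,
                         validation=validation, k=3)
```

For the query `-1.5`, the constant predictor fails on its neighborhood and is dropped, so only the threshold classifier votes.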
{"title":"Assessing Procedural Knowledge in Free-Text Answers through a Hybrid Semantic Web Approach","authors":"E. Snow, C. Moghrabi, Philippe Fournier-Viger","doi":"10.1109/ICTAI.2013.108","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.108","url":null,"abstract":"Several techniques have been proposed to automatically grade students' free-text answers in e-learning systems. However, these techniques provide no or limited support for the evaluation of acquired procedural knowledge. To address this issue, we propose a new approach, named ProcMark, specifically designed to assess answers containing procedural knowledge. It requires a teacher to provide the ideal answer as a semantic network (SN) that is used to automatically score learners' answers in plain text. The novelty of our approach resides mainly in three areas: a) the variable granularity levels possible in the SN and the parameterizing of ontology concepts, thus allowing students to freely express their ideas, b) the new similarity measures of the grading system that give refined numerical scores, c) the language-independence of the grading system, as all linguistic information is given as data files or dictionaries and is distinct from the semantic knowledge of the SN. Experimental results in a Computer Algorithms course show that the approach gives marks that are very close to those of human graders, with a very strong (0.70, 0.79, and 0.79) positive correlation.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114365450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing User Preferences Based on Layered Activity Recognition and First-Order Logic","authors":"Michael Glodek, T. Geier, Susanne Biundo-Stephan, F. Schwenker, G. Palm","doi":"10.1109/ICTAI.2013.101","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.101","url":null,"abstract":"Only a few cognitive architectures have been proposed that cover the complete range from recognizers working on direct sensor input to the logical inference mechanisms of classical artificial intelligence (AI). Logical systems operate on abstract predicates, which are often related to an action-like state transition, especially when compared to the classes recognized by pattern recognition approaches. On the other hand, pattern recognition is often limited to static patterns, and the temporal and multi-modal aspects of a class are often disregarded, e.g. by testing only on pre-segmented data. Recent trends in AI aim at developing applications and methods that are motivated by data-driven real-world scenarios, while the field of pattern recognition attempts to push forward the boundary of pattern complexity. We propose a new generic architecture to close the gap between AI and pattern recognition approaches. In order to detect abstract complex patterns, we process sequential data in layers. On each layer, a set of elementary classes is recognized, and the outcome of the classification is passed to the successive layer such that the time granularity increases. Layers can combine modalities, incorporate additional symbolic information, or make use of reasoning algorithms. We evaluated our approach in an on-line scenario of activity recognition using three layers. The obtained results show that the combination of concepts from pattern recognition and high-level symbolic information leads to a fruitful and powerful symbiosis.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127395687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
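The layering mechanism in this abstract — each layer passing its class labels upward at a coarser time granularity — can be sketched with a toy two-layer pipeline. The labels, window size, and majority-vote aggregation are invented for illustration; the paper's layers additionally fuse modalities and symbolic reasoning.

```python
from collections import Counter

def layer(labels, window):
    """Aggregate non-overlapping windows of lower-layer labels by majority
    vote, producing one label per window (coarser time granularity)."""
    return [Counter(labels[i:i + window]).most_common(1)[0][0]
            for i in range(0, len(labels), window)]

# Layer 1 output: a per-frame posture label from a hypothetical recognizer.
frames = ["sit", "sit", "stand", "walk", "walk", "walk"]
# Layer 2 input: windows of 3 frames become one activity label each.
activities = layer(frames, window=3)
```

Stacking several such layers, each with its own class set and window, is what lets low-level sensor classifications feed high-level logical predicates.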
{"title":"Particle Swarm Optimization Approach with Parameter-Wise Hill-Climbing Heuristic for Task Allocation of Workflow Applications on the Cloud","authors":"Simone A. Ludwig","doi":"10.1109/ICTAI.2013.39","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.39","url":null,"abstract":"Cloud computing provides the computing infrastructure, platform, and software application services that are offered at low cost from remote data centers accessed over the internet. This so-called \"utility computing\" is changing the future of organizations, in which their internal servers are discarded in favor of applications accessible in the cloud. One of the challenges workflow applications face is the appropriate allocation of tasks due to the heterogeneous nature of cloud resources. Different approaches have been proposed in the past to address the NP-complete problem of task allocation. One such approach that successfully addressed the task allocation problem made use of Particle Swarm Optimization (PSO). This paper further improves the performance of PSO by combining PSO with a local search heuristic. In particular, PSO with a parameter-wise hill-climbing heuristic (PSO-HC) for the execution of computationally-intensive as well as I/O-intensive workflows is introduced. Experiments are conducted using Amazon's Elastic Compute Cloud as the experimental simulation platform, looking at the scalability of CPU-intensive and I/O-intensive workflows in terms of cost and execution time.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125482053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
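The "parameter-wise" part of the hill-climbing heuristic means perturbing one dimension of a candidate solution at a time and keeping only improving moves — the local refinement that PSO-HC applies to particles. The sketch below uses a toy quadratic cost, not the paper's workflow cost model; the step size and sweep limit are illustrative assumptions.

```python
def hill_climb(position, cost, step=1.0, sweeps=10):
    """Parameter-wise hill climbing: perturb each dimension in turn by
    +/- step, accepting a move only if it strictly lowers the cost."""
    pos = list(position)
    best = cost(pos)
    for _ in range(sweeps):
        improved = False
        for i in range(len(pos)):          # one parameter at a time
            for delta in (+step, -step):
                cand = pos[:]
                cand[i] += delta
                c = cost(cand)
                if c < best:
                    pos, best, improved = cand, c, True
        if not improved:                   # local optimum reached
            break
    return pos, best

# Toy cost: squared distance to the optimum (3, -2).
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
pos, best = hill_climb([0.0, 0.0], cost)
```

In a PSO-HC loop, a particle's position would be refined this way before updating the swarm's personal and global bests.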
{"title":"Compiling Pseudo-Boolean Constraints to SAT with Order Encoding","authors":"Naoyuki Tamura, Mutsunori Banbara, Takehide Soh","doi":"10.1109/ICTAI.2013.153","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.153","url":null,"abstract":"This paper presents a SAT-based pseudo-Boolean (PB for short) solver named PBSugar. PBSugar translates a PB instance to a SAT instance by using the order encoding and searches for its solution by using an external SAT solver, such as Glucose. We first introduce an optimized version of the order encoding and apply it to encode each PB constraint a1 x1 + ... + an xn # k. The encoding is reformulated as a sparse Boolean matrix, named the Counter Matrix, of size n × (k+1), constructed for each PB constraint. The same Counter Matrix can be used for any of the relations ≥, ≤, =, and ≠, and can be reused for other PB constraints having common sub-terms. The experimental results for 669 instances of the DEC-SMALLINT-LIN category (decision problems, small integers, linear constraints) demonstrate the superior performance of PBSugar compared to other state-of-the-art PB solvers in terms of the number of instances solved within the given time limit.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124369237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
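The Counter Matrix idea behind this abstract can be illustrated concretely: entry C[i][j] stands for "the first i terms of the PB sum already reach at least j". PBSugar builds this matrix symbolically as SAT variables and clauses; the sketch below only evaluates the same recurrence on a fixed assignment to show the n × (k+1) structure, and the coefficients are an invented example.

```python
def counter_matrix(coeffs, assign, k):
    """C[i][j] == (a_1*x_1 + ... + a_i*x_i >= j) for a concrete Boolean
    assignment. Recurrence: the prefix reaches j if the shorter prefix
    already did, or if x_i is true and the shorter prefix reaches j - a_i."""
    n = len(coeffs)
    C = [[True] + [False] * k for _ in range(n + 1)]  # column 0: >= 0 always
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            take = assign[i - 1] and (
                j - coeffs[i - 1] <= 0 or C[i - 1][j - coeffs[i - 1]]
            )
            C[i][j] = C[i - 1][j] or take
    return C

# 2*x1 + 3*x2 + 1*x3 >= 4 with x1 = x2 = True, x3 = False: 5 >= 4 holds.
C = counter_matrix([2, 3, 1], [True, True, False], k=4)
holds = C[3][4]
```

In the symbolic version, each C[i][j] is a fresh SAT variable constrained by this recurrence, and the final row expresses all of ≥, ≤, =, and ≠ against k, which is why one matrix serves every relation and can be shared across constraints with common sub-terms.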
{"title":"An Axiomatic Approach for Persuasion Dialogs","authors":"Leila Amgoud, Florence Dupin de Saint-Cyr -- Bannay","doi":"10.1109/ICTAI.2013.97","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.97","url":null,"abstract":"Several systems have been developed for supporting public persuasion dialogs, where two agents with conflicting opinions try to convince an audience. For computing the outcomes of dialogs, these systems use (abstract or structured) argumentation systems that were initially developed for nonmonotonic reasoning. Despite the increasing number of such systems, there is almost no work on the high-level properties they should satisfy. This paper is a first attempt at defining postulates that guide the proper definition of dialog systems and allow their comparison. We propose six basic postulates (including, e.g., the finiteness of generated dialogs). We then show that this set of postulates is incompatible with those proposed for argumentation systems devoted to nonmonotonic reasoning. This incompatibility confirms the differences between persuading and reasoning. It also suggests that reasoning systems are not suitable for computing the outcomes of dialogs.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121873963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fully-Automated Instance Decomposition and Subplan Synthesis for Parallel Execution","authors":"A. Mali, Ravi Puthiyattil","doi":"10.1109/ICTAI.2013.56","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.56","url":null,"abstract":"Subplans with limited interactions can be executed in a parallel and flexible manner. Given the success of SAT planning in the International Planning Competitions in 2004 and 2006 and advances in SAT solving, it is worth investigating how SAT planning can be used to generate plans for execution by multiple agents. We report on SAT encodings that have a model if and only if there are n subplans, each with up to k steps, such that together they achieve the goal and also satisfy the encoded criteria about permitted and prohibited interactions among them. These n subplans can be executed by different agents. Our SAT-based approach decomposes a planning instance fully automatically, in an entirely unfamiliar domain with no knowledge from humans, if a decomposition exists. The desired properties of the decomposition and solution are encoded as SAT. The key ideas in our encodings are an allocation of actions and subgoals to various agents and explanatory frame axioms for multiple agents. Our approach is domain-independent and fully automated; no domain-specific knowledge is used. We report on an empirical evaluation of the encodings and discuss variants of them.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122725198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Change Your Belief about Belief Change","authors":"É. Grégoire","doi":"10.1109/ICTAI.2013.133","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.133","url":null,"abstract":"Summary form only given. Belief change has long been a major topic of research in artificial intelligence, giving rise to very active subareas of its own, such as belief revision, update, and knowledge fusion. Much focus has been devoted to belief change in situations where the incoming information, in interaction with the previously available beliefs, leads to logical inconsistency. When no logical conflict arises, the new information is simply added to the current state of beliefs. On the contrary, we claim that many situations require some change in the pre-existing beliefs in the face of an incoming piece of information, even when no logical conflict arises. We claim that the agenda of logic-based belief change research should be concerned with these other human reasoning paradigms, too.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125045258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}