Driss Sadoun, Catherine Dubois, Y. Ghamri-Doudane, Brigitte Grau
In order to check requirement specifications written in natural language, we have chosen to model domain knowledge through an ontology and to formally represent user requirements by its population. Our approach to ontology population focuses on instance property identification from texts. We do so using extraction rules automatically acquired from a training corpus and a bootstrapping terminology. These rules aim at identifying instance property mentions represented by triples of terms, using lexical, syntactic and semantic levels of analysis. They are generated from recurrent syntactic paths between terms denoting instances of concepts and properties. We show how focusing on instance property identification allows us to precisely identify concept instances explicitly or implicitly mentioned in texts.
{"title":"From Natural Language Requirements to Formal Specification Using an Ontology","authors":"Driss Sadoun, Catherine Dubois, Y. Ghamri-Doudane, Brigitte Grau","doi":"10.1109/ICTAI.2013.116","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.116","url":null,"abstract":"In order to check requirement specifications written in natural language, we have chosen to model domain knowledge through an ontology and to formally represent user requirements by its population. Our approach of ontology population focuses on instance property identification from texts. We do so using extraction rules automatically acquired from a training corpus and a bootstrapping terminology. These rules aim at identifying instance property mentions represented by triples of terms, using lexical, syntactic and semantic levels of analysis. They are generated from recurrent syntactic paths between terms denoting instances of concepts and properties. We show how focusing on instance property identification allows us to precisely identify concept instances explicitly or implicitly mentioned in texts.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127293514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrea Orlandini, M. Suriano, A. Cesta, Alberto Finzi
Safety critical planning and execution is a crucial issue in autonomous systems. This paper proposes a methodology for controller synthesis suitable for timeline-based planning and demonstrates its effectiveness in a space domain where robustness of execution is a crucial property. The proposed approach uses Timed Game Automata (TGA) for formal modeling and the UPPAAL-TIGA model checker for controller synthesis. An experimental evaluation is performed using a real-world control system.
{"title":"Controller Synthesis for Safety Critical Planning","authors":"Andrea Orlandini, M. Suriano, A. Cesta, Alberto Finzi","doi":"10.1109/ICTAI.2013.54","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.54","url":null,"abstract":"Safety critical planning and execution is a crucial issue in autonomous systems. This paper proposes a methodology for controller synthesis suitable for timeline-based planning and demonstrates its effectiveness in a space domain where robustness of execution is a crucial property. The proposed approach uses Timed Game Automata (TGA) for formal modeling and the UPPAAL-TIGA model checker for controllers synthesis. An experimental evaluation is performed using a real-world control system.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129291219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed constraint solving is useful in tackling constrained problems when agents are not allowed to share their private information with others and/or gathering all necessary information to solve the problem in a centralized manner is infeasible. With these two limitations, distributed algorithms solve the problem by coordinating agents to negotiate with each other. However, once information is exchanged during negotiation, private information may be leaked from one agent to another. We propose and design a framework based on Valuation of Possible States (VPS) to evaluate how well a distributed algorithm preserves the totality of all private information on the entire system when solving distributed constraint optimization problems, by allowing the use of different aggregators to aggregate agents' individual privacy loss. Two classes of aggregators are proposed: idempotent aggregators and risk-based aggregators. We further propose generalized inference rules to infer the privacy loss of individual agents. We implement our work on four distributed constraint solving algorithms: Synchronous Branch and Bound (SynchBB), Asynchronous Distributed Constraint Optimization (ADOPT), Branch and Bound ADOPT (BnB-ADOPT), and Distributed Pseudo-tree Optimization Procedure (DPOP). Preliminary experimental evaluations comparing the four algorithms are performed on two benchmarks, the Distributed Multi-Event Scheduling Problem (DiMES) and Random Distributed COP.
{"title":"A General Privacy Loss Aggregation Framework for Distributed Constraint Reasoning","authors":"Jimmy Ho-man Lee, Terrence W.K. Mak, Yuxiang Shi","doi":"10.1109/ICTAI.2013.148","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.148","url":null,"abstract":"Distributed constraint solving are useful in tackling constrained problems when agents are not allowed to share his/her private information to others and/or gathering all necessary information to solve the problem in a centralized manner is infeasible. With these two limitations, distributed algorithms solve the problem by coordinating agents to negotiate with each other. However, once information is exchanged during negotiation, the private information may be leaked from one agent to another. We propose and design a framework based on Valuation of Possible States (VPS) to evaluate how well a distributed algorithm preserves the totality of all private information onthe entire system when solving distributed constraint optimization problems, by allowing the uses of different aggregators aggregating agents' individual privacy loss. Two classes of aggregators: idempotent aggregators and risk based aggregators are proposed. We further proposed generalized inference rules to infer privacy loss of individual agents. We implement our work on four distributed constraint solving algorithms: Synchronous Branch and Bound (SynchBB), Asynchronous Distributed Constraint Optimization (ADOPT), Branch and Bound ADOPT (BnB-ADOPT), and Distributed Pseudo-tree Optimization Procedure (DPOP). 
Preliminary experimental evaluations on two benchmarks, Distributed Multi-Event Scheduling Problem (DiMES) and Random Distributed COP, comparing the four algorithms are performed.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130124998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
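The pluggable-aggregator idea in the record above can be sketched minimally: per-agent privacy losses are combined into one system-wide figure by an aggregator chosen by the evaluator. The concrete aggregators shown here (max as an idempotent aggregator, the mean as a risk-style one) are hypothetical illustrations, not the paper's definitions.

```python
def aggregate_privacy_loss(losses, aggregator):
    """Combine per-agent privacy losses (floats in [0, 1]) into a single
    system-wide figure using a pluggable aggregator, in the spirit of the
    framework described above. The aggregators below are illustrative
    guesses, not the paper's actual operators."""
    return aggregator(losses)

# Idempotent aggregator: aggregating n copies of the same loss returns that loss.
worst_case = max

# A simple risk-style aggregator: the average loss (hypothetical example).
def average(losses):
    return sum(losses) / len(losses)
```

A framework like this lets the same solver run be scored under different notions of system-wide privacy, e.g. worst-affected agent versus expected leakage.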
Nicolas Labroche, Marcin Detyniecki, Thomas Bärecke
This paper introduces a racing mechanism in the cluster selection process for one-pass clustering algorithms. We focus on cases where data are not numerical vectors and where it is not necessarily possible to compute a mean for each cluster. In this case, the distance of each point to existing clusters can be computed exhaustively with a quadratic complexity, which is not tractable in most of today's use cases. In this paper, we first introduce a stochastic approach for estimating the distance of each new data point to existing clusters, based on Hoeffding and Bernstein bounds, that reduces the number of computations by simultaneously selecting the quantity of data to be sampled and eliminating non-competitive clusters. Second, this paper shows that it is possible to improve the efficiency of our approach by reducing the theoretical values of the Hoeffding and Bernstein bounds. Our algorithms, tested on real data sets, provide significant acceleration of one-pass clustering algorithms, while making fewer errors (or none, depending on the parameters) than a one-pass clustering algorithm with a fixed number of comparisons with each cluster.
{"title":"Accelerating One-Pass Clustering by Cluster Selection Racing","authors":"Nicolas Labroche, Marcin Detyniecki, Thomas Bärecke","doi":"10.1109/ICTAI.2013.79","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.79","url":null,"abstract":"This paper introduces a racing mechanism in the cluster selection process for one-pass clustering algorithms. We focus on cases where data are not numerical vectors and where it is not necessarily possible to compute a mean for each cluster. In this case, the distance of each point to existing clusters can be computed exhaustively with a quadratic complexity which is not tractable in most of nowadays use cases. In this paper we first introduce a stochastic approach for estimating the distance of each new data point to existing clusters based on Hoeffding and Bernstein bounds, that reduces the number of computations by simultaneously selecting the quantity of data to be sampled and by eliminating the non-competitive clusters. Second, this paper shows that it is possible to improve the efficiency of our approach by reducing the theoretical values of the Hoeffding and Bernstein bounds. 
Our algorithms, tested on real data sets, provide significant acceleration of the one-pass clustering algorithms, while making less error (or any depending on parameters) than one-pass clustering algorithm with fixed number of comparisons with each cluster.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115906112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
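The racing idea in the record above can be illustrated with a minimal sketch: the Hoeffding bound gives a confidence radius around each cluster's sampled mean distance, and any cluster whose optimistic (lower) bound cannot beat the current best cluster's pessimistic (upper) bound is eliminated without further sampling. The function names, the fixed confidence level, and the one-shot sampling scheme are assumptions for illustration, not the paper's algorithm.

```python
import math

def hoeffding_radius(n, value_range, delta):
    """Hoeffding confidence radius for a mean estimated from n i.i.d.
    samples bounded in an interval of width `value_range`, valid with
    probability at least 1 - delta."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def race_clusters(sampled_dists, delta=0.05, value_range=1.0):
    """Racing-style elimination: keep only clusters whose confidence
    interval still overlaps that of the empirically closest cluster.
    `sampled_dists` maps cluster id -> list of sampled distances in [0, 1]."""
    means = {c: sum(d) / len(d) for c, d in sampled_dists.items()}
    radii = {c: hoeffding_radius(len(d), value_range, delta)
             for c, d in sampled_dists.items()}
    best = min(means, key=means.get)
    # A cluster survives only if its optimistic bound can still beat
    # the best cluster's pessimistic bound.
    return [c for c in sampled_dists
            if means[c] - radii[c] <= means[best] + radii[best]]
```

With 100 samples per cluster, clearly separated mean distances (e.g. 0.1 vs 0.9) let the loser be discarded early, while near-ties keep both candidates in the race.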
Maximum Satisfiability (MaxSAT) and its weighted and partial variants are well-known optimization formulations of Boolean Satisfiability (SAT). MaxSAT consists of finding an assignment that satisfies the (possibly empty) set of hard clauses, while minimizing the sum of weights of the falsified soft clauses. Recent years have witnessed the development of complete algorithms for MaxSAT motivated by a number of practical applications. The most effective approaches in such practical settings are based on iteratively calling a SAT solver and computing unsatisfiable cores to guide the search. Such approaches use computed unsatisfiable cores from unsatisfiable (UNSAT) outcomes to relax the soft clauses occurring in the computed cores. Surprisingly, only recently has an approach been proposed that exploits models from satisfiable (SAT) outcomes [1], [2] rather than unsatisfiable cores from UNSAT outcomes. This paper proposes two novel MaxSAT algorithms which exploit SAT outcomes to relax soft clauses taking into account the computed models. The new algorithms are shown to outperform classical MaxSAT algorithms and to be fairly competitive with recent core-guided MaxSAT algorithms. Finally, a well-known core-guided MaxSAT algorithm is extended to additionally exploit computed models in an attempt to integrate both approaches.
{"title":"Model-Guided Approaches for MaxSAT Solving","authors":"António Morgado, F. Heras, Joao Marques-Silva","doi":"10.1109/ICTAI.2013.142","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.142","url":null,"abstract":"Maximum Satisfiability (MaxSAT) and its weighted and partial variants are well-known optimization formulations of Boolean Satisfiability (SAT). MaxSAT consists of finding an assignment that satisfies the (possibly empty) set of hard clauses, while minimizing the sum of weights of the falsified soft clauses. Recent years have witnessed the development of complete algorithms for MaxSAT motivated by a number of practical applications. The most effective approaches in such practical settings are based on iteratively calling a SAT solver and computing unsatisfiable cores to guide the search. Such approaches use computed unsatisfiable cores from unsatisfiable (UNSAT) outcomes to relax the soft clauses occurring in the computed cores. Surprisingly, only recently has an approach been proposed that exploits models from satisfiable (SAT) outcomes [1], [2] rather than unsatisfiable cores from UNSAT outcomes. This paper proposes two novel MaxSAT algorithms which exploit SAT outcomes to relax soft clauses taking into account the computed models. The new algorithms are shown to outperform classical MaxSAT algorithms and to be fairly competitive with recent core-guided MaxSAT algorithms. 
Finally, a well-known core-guided MaxSAT algorithm is extended to additionally exploit computed models in an attempt to integrate both approaches.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125339308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
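The objective in the record above, satisfy all hard clauses while minimizing the total weight of falsified soft clauses, can be stated as a minimal brute-force sketch. This is for illustration of the problem definition only; the paper's model-guided and core-guided algorithms instead iterate a SAT solver rather than enumerating assignments.

```python
from itertools import product

def weighted_partial_maxsat(n_vars, hard, soft):
    """Exhaustive weighted partial MaxSAT, for illustration only.
    A clause is a list of non-zero ints: literal i means variable i is
    true, -i means it is false. `soft` is a list of (clause, weight)
    pairs. Returns (best cost, best assignment) or (None, None) if the
    hard clauses are unsatisfiable."""
    def satisfied(clause, assign):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    best_cost, best_assign = None, None
    for bits in product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))  # variable index -> truth value
        if not all(satisfied(c, assign) for c in hard):
            continue  # hard clauses are mandatory
        cost = sum(w for c, w in soft if not satisfied(c, assign))
        if best_cost is None or cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign
```

For example, with hard clause (x1 OR x2) and soft clauses (NOT x1, weight 3) and (NOT x2, weight 1), the optimum falsifies only the weight-1 soft clause by setting x1 false and x2 true.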
Trust and reputation models are utilised by several researchers as a vital factor in the security mechanisms of MANETs, to deal with selfish and misbehaving nodes and ensure packet delivery from source to destination. However, in the presence of new attacks, it is important to build a trust model that resists attacks based on the propagation and aggregation of dishonest recommendations, which may easily degrade the effectiveness of using trust models in a hostile environment such as a MANET. Dealing with dishonest recommendation attacks in MANETs remains an open and challenging area of research. In this work, we propose a dynamic selection algorithm to filter out recommendations in order to achieve resistance against certain existing attacks such as bad-mouthing and ballot-stuffing. The selection algorithm is based on three different rules: (i) majority rule based, (ii) personal experience based, and (iii) service reputation based. Recommendations are clustered, filtered, and selected based on these three rules in order to give the trust and reputation model greater robustness and accuracy over the dynamic and changeable MANET environment.
{"title":"Enhancing Dynamic Recommender Selection Using Multiple Rules for Trust and Reputation Models in MANETs","authors":"A. Shabut, K. Dahal, I. Awan","doi":"10.1109/ICTAI.2013.102","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.102","url":null,"abstract":"Trust and reputation models are utilised by several researchers as one vital factor in the security mechanisms in MANETs to deal with selfish and misbehaving nodes and ensure packet delivery from source to destination. However, in the presence of new attacks, it is important to build a trust model to resist countermeasures related to propagation of dishonest recommendations, and aggregation which may easily degrade the effectiveness of using trust models in a hostile environment such as MANETs. However, dealing with dishonest recommendation attacks in MANETs remains an open and challenging area of research. In this work, we propose a dynamic selection algorithm to filter out recommendations in order to achieve resistance against certain existing attacks such as bad-mouthing and ballot-stuffing. The selection algorithm is based on three different rules: (i)majority rule based, (ii) personal experience based, and (iii)service reputation based. 
Recommendations are clustered, filtered, and selected based on these three rules in order to givethe trust and reputation model greater robustness andaccuracy over the dynamic and changeable MANETenvironment.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114460967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
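A majority-rule filter like rule (i) in the record above can be sketched as follows. The use of the median as the majority opinion and the fixed deviation threshold are assumptions made for illustration, not the paper's exact rule.

```python
from statistics import median

def filter_by_majority(recommendations, deviation_threshold=0.3):
    """Illustrative majority-rule filtering of trust recommendations.
    `recommendations` maps recommender id -> reported trust value in
    [0, 1]. Values that deviate too far from the majority opinion (the
    median) are discarded, which blunts both bad-mouthing (dishonestly
    low reports) and ballot-stuffing (dishonestly high reports)."""
    majority = median(recommendations.values())
    return {r: v for r, v in recommendations.items()
            if abs(v - majority) <= deviation_threshold}
```

For instance, among reports {0.8, 0.75, 0.85} plus one bad-mouthing outlier of 0.1, the outlier falls far from the median and is dropped while the honest reports survive.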
Dan Yang, Christina Leber, L. Tari, A. Chandramouli, A. Crapo, R. Messmer, Steven M. Gustafson
The Contract Search Tool is a semantic search platform that enables effective analysis of complex, long-term contractual service agreements for machines such as gas turbines. The approach we developed can effectively identify paragraphs of text for specific legal concepts. The key content can then be decomposed and organized by a semantic model that captures key elements of the concepts and links to specific paragraphs. This is achieved by performing semantic text analysis to capture implicitly stated provisions and the definitions of provisions, and relevant information is returned in an organized manner. The tool can be applied to increase the productivity of legal review, share legal knowledge with service managers, and reduce legal risk in the contract review process.
{"title":"A Natural Language Processing and Semantic-Based System for Contract Analysis","authors":"Dan Yang, Christina Leber, L. Tari, A. Chandramouli, A. Crapo, R. Messmer, Steven M. Gustafson","doi":"10.1109/ICTAI.2013.109","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.109","url":null,"abstract":"The Contract Search Tool is a semantic search platform that enables effective analysis of complex, long-term contractual service agreement for machines such as gas turbines. The approach we developed can effectively identify paragraphs of text for specific legal concepts. Then the key content can be decomposed and organized by the semantics model that captures key elements of the concepts and links to specific paragraphs. This is achieved by performing semantic text analysis to capture implicitly-stated provisions and the definitions of provisions, and relevant information is returned in an organized manner. The tool can be applied to increase productivity of legal review, share legal knowledge with service managers, and reduce legal risk in contract review process.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115890213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Singh, Arunprasath Shankar, Y. Shiyanovskii, F. Wolff, C. Papachristou, D. Weyer, Steve Clay, Jim Morrison
The number of Soft-IP vendors and designs becoming available on the global market is growing at a phenomenal rate. The current practice of evaluating Soft IPs using their specifications is a time-consuming manual process. A specification document is primarily written in English, which serves as a common language for internal product development teams as well as customers. Designers have a preference for writing specifications in an informal natural language using text and notations, including diagrams, charts and tables. The lack of formality of specification documents is a limiting factor in their analysis. The current state-of-the-art in hardware design lacks any specification analysis technique. In this paper, we present a knowledge-guided methodology for specification analysis that can automatically analyze specification documents. Our approach avoids formal specification. Instead, we rely on domain-based ontologies to capture design behavior. We tested our approach by analyzing floating-point specifications from several third-party IP vendors. We define spec coverage and requirement coverage metrics to quantify our results.
{"title":"Knowledge-Guided Methodology for Specification Analysis","authors":"B. Singh, Arunprasath Shankar, Y. Shiyanovskii, F. Wolff, C. Papachristou, D. Weyer, Steve Clay, Jim Morrison","doi":"10.1109/ICTAI.2013.115","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.115","url":null,"abstract":"The number of Soft-IP vendors and designsbecoming available on the global market is growing at a phenomenal rate. The current practice of evaluating Soft IPs using their specification is a time consuming manual process. A specification document is primarily written in English, which serves as a common language for internal product development teams as well as customers. Designers have a preference for writing specifications in an informal natural language using text and notations, including diagrams, charts and tables. The lack of formality of specification documents is a limiting factor in their analysis. The current state-of-the-art in hardware design lacks any specification analysis technique. In this paper, we present a knowledge-guided methodology for specification analysis that can automatically analyze specification documents. Our approach avoids formal specification. Instead we rely on domain-based ontologies to capture design behavior. We tested our approach by analyzing floating point specification from several third party IP vendors. 
We define spec coverage and requirement coverage metrics to quantify our results.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123828515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose Error Allowing Minimax, an algorithm resolving indifferences in the choices of pure minimax players in games of perfect information, to give the opponent the biggest possible target for errors. In contrast to the usual approach of defining a domain-specific static evaluation function with an infinite codomain, we achieve fine-grained positional evaluations by general considerations of the game tree only. To achieve applicability to real-world situations we develop Error Allowing Alpha-Beta, a variant of the standard Alpha-Beta algorithm, and a variant hybridizing these two algorithms, allowing full control over the trade-off between accuracy and computational complexity. We investigate the impact of the algorithm by applying it to the perfect information game Dots and Boxes.
{"title":"Error Allowing Minimax: Getting over Indifference","authors":"F. Wisser","doi":"10.1109/ICTAI.2013.22","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.22","url":null,"abstract":"We propose Error Allowing Minimax, an algorithm resolving indifferences in the choices of pure minimax players in games of perfect information, to give the opponent the biggest possible target for errors. In contrast to the usual approach of defining a domain-specific static evaluation function with an infinite codomain, we achieve fine-grained positional evaluations by general considerations of the game tree only. To achieve applicability to real-world situations we develop Error Allowing Alpha-Beta, a variant of the standard Alpha-Beta algorithm, and a variant hybridizing these two algorithms, allowing full control over the trade-off between accuracy and computational complexity. We investigate the impact of the algorithm applying it to the perfect information game Dots and Boxes.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117324781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce and study Abstract Debates, in an attempt to incorporate Dung's argumentation frameworks in a general model of social reasoning. Informally, on a shared reasons space, a society expresses a set of possibly shared forms of subjectivity, whose deeper interactions enable new consistent collective judgments, creating social inference relations. Formally, in an abstract debate each member of a society has an opinion on a set of abstract facts, that is, a pair of two disjoint subsets of agreed and disagreed facts. A semantics is a function which assigns to each abstract debate a set of possible output opinions, based only on the interactions of the individual opinions. We consider argumentative semantics providing a novel qualitative approach to social reasoning. Two other interesting semantics are discussed.
{"title":"Abstract Debates","authors":"Cosmina Croitoru","doi":"10.1109/ICTAI.2013.110","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.110","url":null,"abstract":"We introduce and study Abstract Debates, in an attempt to incorporate Dung's argumentation frameworks in a general model of social reasoning. Informally, on a shared reasons space, a society expresses a set of possibly shared forms of subjectivity, whose deeper interactions enable new consistent collective judgments, creating social inference relations. Formally, in an abstract debate each member of a society has an opinion on a set of abstract facts, that is, a pair of two disjoint subsets of agreed and disagreed facts. A semantics is a function which assigns to each abstract debate a set of possible output opinions, based only on the interactions of the individual opinions. We consider argumentative semantics providing a novel qualitative approach to social reasoning. Two other interesting semantics are discussed.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125066895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}