Patterns for legal compliance checking in a decidable framework of linked open data
Pub Date: 2022-07-04; DOI: 10.1007/s10506-022-09317-8
Enrico Francesconi, Guido Governatori
This paper presents an approach to legal compliance checking in the Semantic Web that can be effectively applied in the Linked Open Data environment. It is based on modeling deontic norms in terms of ontology classes and ontology property restrictions, and we also show how the approach handles norm defeasibility. The methodology is implemented in decidable fragments of OWL 2, while legal reasoning is carried out by available decidable reasoners. The approach is generalised by presenting patterns for modeling deontic norms and checking norm compliance.
{"title":"Patterns for legal compliance checking in a decidable framework of linked open data","authors":"Enrico Francesconi, Guido Governatori","doi":"10.1007/s10506-022-09317-8","DOIUrl":"10.1007/s10506-022-09317-8","url":null,"abstract":"<div><p>This paper presents an approach for legal compliance checking in the Semantic Web which can be effectively applied for applications in the Linked Open Data environment. It is based on modeling deontic norms in terms of ontology classes and ontology property restrictions. It is also shown how this approach can handle norm defeasibility. Such methodology is implemented by decidable fragments of OWL 2, while legal reasoning is carried out by available decidable reasoners. The approach is generalised by presenting patterns for modeling deontic norms and norms compliance checking.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 3","pages":"445 - 464"},"PeriodicalIF":4.1,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09317-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47586580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring coherence with Bayesian networks
Pub Date: 2022-06-19; DOI: 10.1007/s10506-022-09316-9
Alicja Kowalewska, Rafal Urbaniak
When we talk about the coherence of a story, we seem to think of how well its individual pieces fit together—how to explicate this notion formally, though? We develop a Bayesian network based coherence measure with implementation in R, which performs better than its purely probabilistic predecessors. The novelty is that by paying attention to the network structure, we avoid simply taking mean confirmation scores between all possible pairs of subsets of a narration. Moreover, we assign special importance to the weakest links in a narration, to improve on the other measures’ results for logically inconsistent scenarios. We illustrate and investigate the performance of the measures in relation to a few philosophically motivated examples, and (more extensively) using the real-life example of the Sally Clark case.
{"title":"Measuring coherence with Bayesian networks","authors":"Alicja Kowalewska, Rafal Urbaniak","doi":"10.1007/s10506-022-09316-9","DOIUrl":"10.1007/s10506-022-09316-9","url":null,"abstract":"<div><p>When we talk about the coherence of a story, we seem to think of how well its individual pieces fit together—how to explicate this notion formally, though? We develop a Bayesian network based coherence measure with implementation in <b><span>R</span></b>, which performs better than its purely probabilistic predecessors. The novelty is that by paying attention to the network structure, we avoid simply taking mean confirmation scores between all possible pairs of subsets of a narration. Moreover, we assign special importance to the weakest links in a narration, to improve on the other measures’ results for logically inconsistent scenarios. We illustrate and investigate the performance of the measures in relation to a few philosophically motivated examples, and (more extensively) using the real-life example of the Sally Clark case.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"369 - 395"},"PeriodicalIF":4.1,"publicationDate":"2022-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46786857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Law Smells
Pub Date: 2022-06-06; DOI: 10.1007/s10506-022-09315-w
Corinna Coupette, Dirk Hartung, Janis Beckedorf, Maximilian Böther, Daniel Martin Katz
Building on the computer science concept of code smells, we initiate the study of law smells, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples—namely, duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession—we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. We introduce text-based and graph-based methods to identify instances of law smells, confirming their utility in practice using the United States Code as a test case. Our work demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of legal code, thus drawing attention to an understudied area at the intersection of law and computer science and highlighting the potential of computational legal drafting.
{"title":"Law Smells","authors":"Corinna Coupette, Dirk Hartung, Janis Beckedorf, Maximilian Böther, Daniel Martin Katz","doi":"10.1007/s10506-022-09315-w","DOIUrl":"10.1007/s10506-022-09315-w","url":null,"abstract":"<div><p>Building on the computer science concept of <i>code smells</i>, we initiate the study of <i>law smells</i>, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples—namely, duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession—, we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. We introduce text-based and graph-based methods to identify instances of law smells, confirming their utility in practice using the United States Code as a test case. Our work demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of <i>legal</i> code, thus drawing attention to an understudied area in the intersection of law and computer science and highlighting the potential of computational legal drafting.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"335 - 368"},"PeriodicalIF":4.1,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09315-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42037535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A collaboration between judge and machine to reduce legal uncertainty in disputes concerning ex aequo et bono compensations
Pub Date: 2022-05-10; DOI: 10.1007/s10506-022-09314-x
Wim De Mulder, Peggy Valcke, Joke Baeck
Ex aequo et bono compensations are compensations awarded by a tribunal that cannot be determined exactly according to the rule of law; in such cases the judge relies on an estimate that seems fair for the case at hand. Such cases are prone to legal uncertainty, given the subjectivity that is inherent to the concept of fairness. We show how basic principles from statistics and machine learning may be used to reduce legal uncertainty in ex aequo et bono judicial decisions. For a given type of ex aequo et bono dispute, we consider two general stages in estimating the compensation. First, the stage where there is significant disagreement among judges as to which compensation is fair. In that case, we let judges rule on such disputes, while a machine tracks a certain measure of the relative differences of the granted compensations. In the second stage, that measure, which expresses the degree of legal uncertainty, has dropped below a predefined threshold. From then on, legal decisions on the amount of the ex aequo et bono compensation for the considered type of dispute may be replaced by the average of previous compensations. The main consequence is that, from this stage on, this type of dispute is free of legal uncertainty.
{"title":"A collaboration between judge and machine to reduce legal uncertainty in disputes concerning ex aequo et bono compensations","authors":"Wim De Mulder, Peggy Valcke, Joke Baeck","doi":"10.1007/s10506-022-09314-x","DOIUrl":"10.1007/s10506-022-09314-x","url":null,"abstract":"<div><p>Ex aequo et bono compensations refer to tribunal’s compensations that cannot be determined exactly according to the rule of law, in which case the judge relies on an estimate that seems fair for the case at hand. Such cases are prone to legal uncertainty, given the subjectivity that is inherent to the concept of fairness. We show how basic principles from statistics and machine learning may be used to reduce legal uncertainty in ex aequo et bono judicial decisions. For a given type of ex aequo et bono dispute, we consider two general stages in estimating the compensation. First, the stage where there is significant disagreement among judges as to which compensation is fair. In that case, we let judges rule on such disputes, while a machine tracks a certain measure of the relative differences of the granted compensations. In the second stage that measure, which expresses the degree of legal uncertainty, has dropped below a predefined threshold. From then on legal decisions on the quantity of the ex aequo et bono compensation for the considered type of dispute may be replaced by the average of previous compensations. The main consequence is that this type of dispute is, from this stage on, free of legal uncertainty.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"325 - 333"},"PeriodicalIF":4.1,"publicationDate":"2022-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43519525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using machine learning to create a repository of judgments concerning a new practice area: a case study in animal protection law
Pub Date: 2022-05-08; DOI: 10.1007/s10506-022-09313-y
Joe Watson, Guy Aglionby, Samuel March
Judgments concerning animals have arisen across a variety of established practice areas. There is, however, no publicly available repository of judgments concerning the emerging practice area of animal protection law. This has hindered the identification of individual animal protection law judgments and comprehension of the scale of animal protection law made by courts. Thus, we detail the creation of an initial animal protection law repository using natural language processing and machine learning techniques. This involved domain expert classification of 500 judgments according to whether or not they were concerned with animal protection law. Of these, 400 judgments were used to train various models, each of which was used to predict the classification of the remaining 100 judgments. The predictions of each model were superior to a baseline measure intended to mimic current searching practice, with the best-performing model being a support vector machine (SVM) approach that classified judgments according to term frequency–inverse document frequency (TF-IDF) values. Investigation of this model consisted of considering its most influential features and conducting an error analysis of all incorrectly predicted judgments. This showed that features indicative of animal protection law judgments include terms such as ‘welfare’, ‘hunt’ and ‘cull’, and that incorrectly predicted judgments were often deemed marginal decisions by the domain expert. The TF-IDF SVM was then used to classify non-labelled judgments, resulting in an initial animal protection law repository. Inspection of this repository suggested that there were 175 animal protection judgments between January 2000 and December 2020 from the Privy Council, House of Lords, Supreme Court and upper England and Wales courts.
{"title":"Using machine learning to create a repository of judgments concerning a new practice area: a case study in animal protection law","authors":"Joe Watson, Guy Aglionby, Samuel March","doi":"10.1007/s10506-022-09313-y","DOIUrl":"10.1007/s10506-022-09313-y","url":null,"abstract":"<div><p>Judgments concerning animals have arisen across a variety of established practice areas. There is, however, no publicly available repository of judgments concerning the emerging practice area of animal protection law. This has hindered the identification of individual animal protection law judgments and comprehension of the scale of animal protection law made by courts. Thus, we detail the creation of an initial animal protection law repository using natural language processing and machine learning techniques. This involved domain expert classification of 500 judgments according to whether or not they were concerned with animal protection law. 400 of these judgments were used to train various models, each of which was used to predict the classification of the remaining 100 judgments. The predictions of each model were superior to a baseline measure intended to mimic current searching practice, with the best performing model being a support vector machine (SVM) approach that classified judgments according to term frequency—inverse document frequency (TF-IDF) values. Investigation of this model consisted of considering its most influential features and conducting an error analysis of all incorrectly predicted judgments. This showed the features indicative of animal protection law judgments to include terms such as ‘welfare’, ‘hunt’ and ‘cull’, and that incorrectly predicted judgments were often deemed marginal decisions by the domain expert. The TF-IDF SVM was then used to classify non-labelled judgments, resulting in an initial animal protection law repository. Inspection of this repository suggested that there were 175 animal protection judgments between January 2000 and December 2020 from the Privy Council, House of Lords, Supreme Court and upper England and Wales courts.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"293 - 324"},"PeriodicalIF":4.1,"publicationDate":"2022-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09313-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41755920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptions of Justice By Algorithms
Pub Date: 2022-04-05; DOI: 10.1007/s10506-022-09312-z
Gizem Yalcin, Erlis Themeli, Evert Stamhuis, Stefan Philipsen, Stefano Puntoni
Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate public perceptions of algorithmic judges. Across two experiments (N = 1,822) and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent to which individuals trust algorithmic and human judges depends on the nature of the case: trust in algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).
{"title":"Perceptions of Justice By Algorithms","authors":"Gizem Yalcin, Erlis Themeli, Evert Stamhuis, Stefan Philipsen, Stefano Puntoni","doi":"10.1007/s10506-022-09312-z","DOIUrl":"10.1007/s10506-022-09312-z","url":null,"abstract":"<div><p>Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate the public perceptions of algorithmic judges. Across two experiments (N = 1,822), and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to the court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent that individuals trust algorithmic and human judges depends on the nature of the case: trust for algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"269 - 292"},"PeriodicalIF":4.1,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09312-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9693645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How to justify a backing’s eligibility for a warrant: the justification of a legal interpretation in a hard case
Pub Date: 2022-03-25; DOI: 10.1007/s10506-022-09311-0
Shiyang Yu, Xi Chen
The Toulmin model has proved useful in law and argumentation theory. This model describes the basic process of justifying a claim, which comprises six elements, i.e., claim (C), data (D), warrant (W), backing (B), qualifier (Q), and rebuttal (R). Specifically, in justifying a claim, one must put forward ‘data’ and a ‘warrant’, with the latter authorized by ‘backing’. The force of the ‘claim’ being justified is represented by the ‘qualifier’, and the condition under which the claim cannot be justified is represented as the ‘rebuttal’. To further improve the model, Goodnight (Informal Logic 15:41–52, 1993) points out that the selection of a backing needs justification, which he calls legitimation justification. However, how such justification is constituted has not yet been clarified. To identify legitimation justification, we separate it into two parts. One justifies a backing’s eligibility (legitimation justification 1, LJ1); the other justifies its superiority over other eligible backings (legitimation justification 2, LJ2). In this paper, we focus on LJ1 and apply it to the legal justification (of judgements) in hard cases for illustration purposes. We submit that LJ1 refers to the justification of the legal interpretation of a norm by its backing, which can be further separated into several orderable subjustifications. Taking the subjustification of a norm’s existence as an example, we show how it would be influenced by different positions in the philosophy of law. Taking the position of the theory of natural law, such subjustification is presented and evaluated. This paper aims not only to inform ongoing theoretical efforts to apply the Toulmin model in the legal field, but also to clarify the process of justifying legal judgments in hard cases. It also offers background information for the possible construction of related AI systems. In our future work, LJ2 and other subjustifications of LJ1 will be discussed.
{"title":"How to justify a backing’s eligibility for a warrant: the justification of a legal interpretation in a hard case","authors":"Shiyang Yu, Xi Chen","doi":"10.1007/s10506-022-09311-0","DOIUrl":"10.1007/s10506-022-09311-0","url":null,"abstract":"<div><p>The Toulmin model has been proved useful in law and argumentation theory. This model describes the basic process in justifying a claim, which comprises six elements, i.e., claim (C), data (D), warrant (W), backing (B), qualifier (Q), and rebuttal (R). Specifically, in justifying a claim, one must put forward ‘data’ and a ‘warrant’, whereas the latter is authorized by ‘backing’. The force of the ‘claim’ being justified is represented by the ‘qualifier’, and the condition under which the claim cannot be justified is represented as the ‘rebuttal’. To further improve the model, (Goodnight, Informal Logic 15:41–52, 1993) points out that the selection of a backing needs justification, which he calls legitimation justification. However, how such justification is constituted has not yet been clarified. To identify legitimation justification, we separate it into two parts. One justifies a backing’s eligibility (legitimation justification<sub>1</sub>; LJ<sub>1</sub>); the other justifies its superiority over other eligible backings (legitimation justification<sub>2</sub>; LJ<sub>2</sub>). In this paper, we focus on LJ<sub>1</sub> and apply it to the legal justification (of judgements) in hard cases for illustration purposes. We submit that LJ<sub>1</sub> refers to the justification of the legal interpretation of a norm by its backing, which can be further separated into several orderable subjustifications. Taking the subjustification of a norm’s existence as an example, we show how it would be influenced by different positions in the philosophy of law. Taking the position of the theory of natural law, such subjustification is presented and evaluated. This paper aims not only to inform ongoing theoretical efforts to apply the Toulmin model in the legal field, but it also seeks to clarify the process in the justification of legal judgments in hard cases. It also offers background information for the possible construction of related AI systems. In our future work, LJ<sub>2</sub> and other subjustifications of LJ<sub>1</sub> will be discussed.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"239 - 268"},"PeriodicalIF":4.1,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46991345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart criminal justice: exploring the use of algorithms in the Swiss criminal justice system
Pub Date: 2022-03-14; DOI: 10.1007/s10506-022-09310-1
Monika Simmler, Simone Brunner, Giulia Canova, Kuno Schedler
In the digital age, the use of advanced technology is becoming a new paradigm in police work, criminal justice, and the penal system. Algorithms promise to predict delinquent behaviour, identify potentially dangerous persons, and support crime investigation. Algorithm-based applications are often deployed in this context, laying the groundwork for a ‘smart criminal justice’. In this qualitative study based on 32 interviews with criminal justice and police officials, we explore the reasons why and extent to which such a smart criminal justice system has already been established in Switzerland, and the benefits perceived by users. Drawing upon this research, we address the spread, application, technical background, institutional implementation, and psychological aspects of the use of algorithms in the criminal justice system. We find that the Swiss criminal justice system is already significantly shaped by algorithms, a change motivated by political expectations and demands for efficiency. Until now, algorithms have only been used at a low level of automation and technical complexity and the levels of benefit perceived vary. This study also identifies the need for critical evaluation and research-based optimization of the implementation of advanced technology. Societal implications, as well as the legal foundations of the use of algorithms, are often insufficiently taken into account. By discussing the main challenges to and issues with algorithm use in this field, this work lays the foundation for further research and debate regarding how to guarantee that ‘smart’ criminal justice is actually carried out smartly.
{"title":"Smart criminal justice: exploring the use of algorithms in the Swiss criminal justice system","authors":"Monika Simmler, Simone Brunner, Giulia Canova, Kuno Schedler","doi":"10.1007/s10506-022-09310-1","DOIUrl":"10.1007/s10506-022-09310-1","url":null,"abstract":"<div><p>In the digital age, the use of advanced technology is becoming a new paradigm in police work, criminal justice, and the penal system. Algorithms promise to predict delinquent behaviour, identify potentially dangerous persons, and support crime investigation. Algorithm-based applications are often deployed in this context, laying the groundwork for a ‘smart criminal justice’. In this qualitative study based on 32 interviews with criminal justice and police officials, we explore the reasons why and extent to which such a smart criminal justice system has already been established in Switzerland, and the benefits perceived by users. Drawing upon this research, we address the spread, application, technical background, institutional implementation, and psychological aspects of the use of algorithms in the criminal justice system. We find that the Swiss criminal justice system is already significantly shaped by algorithms, a change motivated by political expectations and demands for efficiency. Until now, algorithms have only been used at a low level of automation and technical complexity and the levels of benefit perceived vary. This study also identifies the need for critical evaluation and research-based optimization of the implementation of advanced technology. Societal implications, as well as the legal foundations of the use of algorithms, are often insufficiently taken into account. By discussing the main challenges to and issues with algorithm use in this field, this work lays the foundation for further research and debate regarding how to guarantee that ‘smart’ criminal justice is actually carried out smartly.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 2","pages":"213 - 237"},"PeriodicalIF":4.1,"publicationDate":"2022-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09310-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47728642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The winter, the summer and the summer dream of artificial intelligence in law
Pub Date: 2022-02-03; DOI: 10.1007/s10506-022-09309-8
Enrico Francesconi
This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline and of possible future perspectives. In this respect, I go through different seasons of AI research (of AI and Law in particular): from the Winter of AI, a period of mistrust in AI (from the eighties until the early nineties), to the Summer of AI, the current period of great interest in the discipline and high expectations. One of the results of the first decades of AI research is that “intelligence requires knowledge”. Since its inception the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may foster the development of the Semantic Web as well as of AI systems. Finally, I provide my insight into the potential of AI development, taking into account technological opportunities and theoretical limits.
{"title":"The winter, the summer and the summer dream of artificial intelligence in law","authors":"Enrico Francesconi","doi":"10.1007/s10506-022-09309-8","DOIUrl":"10.1007/s10506-022-09309-8","url":null,"abstract":"<div><p>This paper reflects my address as IAAIL president at ICAIL 2021. It is aimed to give my vision of the status of the AI and Law discipline, and possible future perspectives. In this respect, I go through different seasons of AI research (of AI and Law in particular): from the Winter of AI, namely a period of mistrust in AI (throughout the eighties until early nineties), to the Summer of AI, namely the current period of great interest in the discipline with lots of expectations. One of the results of the first decades of AI research is that “intelligence requires knowledge”. Since its inception the Web proved to be an extraordinary vehicle for knowledge creation and sharing, therefore it’s not a surprise if the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may represent a promotion for the development of the Semantic Web, as well as of AI systems. Finally, I provide my insight in the potential of AI development, which takes into account technological opportunities and theoretical limits.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"30 2","pages":"147 - 161"},"PeriodicalIF":4.1,"publicationDate":"2022-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-022-09309-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50444423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rethinking the field of automatic prediction of court decisions
Pub Date: 2022-01-25; DOI: 10.1007/s10506-021-09306-3
Masha Medvedeva, Martijn Wieling, Michel Vols
In this paper, we discuss previous research in automatic prediction of court decisions. We define the difference between outcome identification, outcome-based judgement categorisation and outcome forecasting, and review how various studies fall into these categories. We discuss how important it is to understand the legal data that one works with in order to determine which task can be performed. Finally, we reflect on the needs of the legal discipline regarding the analysis of court judgements.
{"title":"Rethinking the field of automatic prediction of court decisions","authors":"Masha Medvedeva, Martijn Wieling, Michel Vols","doi":"10.1007/s10506-021-09306-3","DOIUrl":"10.1007/s10506-021-09306-3","url":null,"abstract":"<div><p>In this paper, we discuss previous research in automatic prediction of court decisions. We define the difference between outcome identification, outcome-based judgement categorisation and outcome forecasting, and review how various studies fall into these categories. We discuss how important it is to understand the legal data that one works with in order to determine which task can be performed. Finally, we reflect on the needs of the legal discipline regarding the analysis of court judgements.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"31 1","pages":"195 - 212"},"PeriodicalIF":4.1,"publicationDate":"2022-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-021-09306-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45979229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}