A collaboration between judge and machine to reduce legal uncertainty in disputes concerning ex aequo et bono compensations
Wim De Mulder, Peggy Valcke, Joke Baeck
Artificial Intelligence and Law 31(2): 325-333. Pub Date: 2022-05-10. DOI: 10.1007/s10506-022-09314-x
Ex aequo et bono compensations are compensations awarded by a tribunal that cannot be determined exactly under the applicable rules of law, in which case the judge relies on an estimate that seems fair for the case at hand. Such cases are prone to legal uncertainty, given the subjectivity inherent in the concept of fairness. We show how basic principles from statistics and machine learning may be used to reduce legal uncertainty in ex aequo et bono judicial decisions. For a given type of ex aequo et bono dispute, we consider two general stages in estimating the compensation. In the first stage, there is significant disagreement among judges as to which compensation is fair. In that case, we let judges rule on such disputes, while a machine tracks a measure of the relative differences between the granted compensations. The second stage begins once that measure, which expresses the degree of legal uncertainty, has dropped below a predefined threshold. From then on, legal decisions on the amount of the ex aequo et bono compensation for the considered type of dispute may be replaced by the average of previous compensations. The main consequence is that, from this stage on, this type of dispute is free of legal uncertainty.
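The two-stage procedure described in this abstract lends itself to a short sketch. The following minimal Python example is an illustration under stated assumptions, not the paper's method: it uses the coefficient of variation as the "measure of relative differences" and a threshold of 0.10, neither of which the abstract fixes.

```python
import statistics

def relative_dispersion(compensations):
    """Coefficient of variation: standard deviation of the granted
    compensations relative to their mean. One possible 'measure of
    relative differences'; the abstract leaves the exact choice open."""
    return statistics.stdev(compensations) / statistics.mean(compensations)

def suggest_compensation(history, threshold=0.10):
    """Stage 1: dispersion at or above the threshold, so judges still
    disagree and no machine suggestion is made (returns None).
    Stage 2: dispersion below the threshold, so the average of the
    previously granted compensations is returned."""
    if len(history) < 2 or relative_dispersion(history) >= threshold:
        return None  # stage 1: a judge must decide the case
    return statistics.mean(history)  # stage 2: legal uncertainty resolved

# Invented awards for one type of dispute, converging over time.
awards = [900, 1100, 1000, 1050, 980, 1020]
print(suggest_compensation(awards))
```

Once the dispersion of past awards falls below the threshold, the function returns the historical average; before that, it returns None to signal that a judge must still rule on the dispute.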
Using machine learning to create a repository of judgments concerning a new practice area: a case study in animal protection law
Joe Watson, Guy Aglionby, Samuel March
Artificial Intelligence and Law 31(2): 293-324. Pub Date: 2022-05-08. DOI: 10.1007/s10506-022-09313-y
Judgments concerning animals have arisen across a variety of established practice areas. There is, however, no publicly available repository of judgments for the emerging practice area of animal protection law. This has hindered both the identification of individual animal protection law judgments and comprehension of the scale of animal protection law made by courts. We therefore detail the creation of an initial animal protection law repository using natural language processing and machine learning techniques. A domain expert classified 500 judgments according to whether or not they concerned animal protection law. 400 of these judgments were used to train various models, each of which was then used to predict the classification of the remaining 100 judgments. The predictions of every model were superior to a baseline measure intended to mimic current searching practice, with the best-performing model being a support vector machine (SVM) that classified judgments according to term frequency-inverse document frequency (TF-IDF) values. Investigation of this model consisted of examining its most influential features and conducting an error analysis of all incorrectly predicted judgments. This showed that features indicative of animal protection law judgments include terms such as 'welfare', 'hunt' and 'cull', and that incorrectly predicted judgments were often deemed marginal decisions by the domain expert. The TF-IDF SVM was then used to classify unlabelled judgments, resulting in an initial animal protection law repository. Inspection of this repository suggested that there were 175 animal protection judgments between January 2000 and December 2020 from the Privy Council, House of Lords, Supreme Court and upper England and Wales courts.
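TF-IDF, the representation behind the best-performing model above, can be computed in a few lines of standard-library Python. This is a toy sketch only: the three documents are invented, and the paper's actual pipeline trained an SVM classifier on top of such features.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights per document.
    TF = term count / document length; IDF = log(N / docs containing term)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()  # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Invented toy judgments: two about animal protection, one not.
docs = [
    "the badger cull breached welfare duties",
    "the hunt violated animal welfare law",
    "the contract dispute concerned unpaid rent",
]
w = tfidf(docs)
```

A term such as 'cull' that appears in only one toy document receives a higher weight there than 'welfare', which appears in two of the three, while a term that occurs in every document ('the') is weighted zero; the SVM then learns which of these weighted terms signal an animal protection law judgment.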
Perceptions of Justice By Algorithms
Gizem Yalcin, Erlis Themeli, Evert Stamhuis, Stefan Philipsen, Stefano Puntoni
Artificial Intelligence and Law 31(2): 269-292. Pub Date: 2022-04-05. DOI: 10.1007/s10506-022-09312-z
Artificial intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies on the use of algorithmic judges in courts. In this paper, we investigate public perceptions of algorithmic judges. Across two experiments (N = 1,822) and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent to which individuals trust algorithmic and human judges depends on the nature of the case: trust in algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).
How to justify a backing's eligibility for a warrant: the justification of a legal interpretation in a hard case
Shiyang Yu, Xi Chen
Artificial Intelligence and Law 31(2): 239-268. Pub Date: 2022-03-25. DOI: 10.1007/s10506-022-09311-0
The Toulmin model has proved useful in law and argumentation theory. The model describes the basic process of justifying a claim, which comprises six elements: claim (C), data (D), warrant (W), backing (B), qualifier (Q), and rebuttal (R). Specifically, in justifying a claim, one must put forward 'data' and a 'warrant', where the latter is authorized by 'backing'. The force of the 'claim' being justified is represented by the 'qualifier', and the condition under which the claim cannot be justified is represented by the 'rebuttal'. To further improve the model, Goodnight (Informal Logic 15:41-52, 1993) points out that the selection of a backing itself needs justification, which he calls legitimation justification. However, how such justification is constituted has not yet been clarified. To identify legitimation justification, we separate it into two parts: one justifies a backing's eligibility (legitimation justification1; LJ1); the other justifies its superiority over other eligible backings (legitimation justification2; LJ2). In this paper, we focus on LJ1 and, for illustration, apply it to the legal justification of judgements in hard cases. We submit that LJ1 refers to the justification of the legal interpretation of a norm by its backing, which can be further separated into several orderable subjustifications. Taking the subjustification of a norm's existence as an example, we show how it is influenced by different positions in the philosophy of law. From the position of natural law theory, this subjustification is presented and evaluated. This paper aims not only to inform ongoing theoretical efforts to apply the Toulmin model in the legal field, but also to clarify the process of justifying legal judgments in hard cases. It further offers background information for the possible construction of related AI systems. In future work, LJ2 and the other subjustifications of LJ1 will be discussed.
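The six Toulmin elements enumerated above map naturally onto a simple data structure. The sketch below is a hypothetical Python illustration (the paper proposes no implementation); the example argument and the cited "Art. X" are invented.

```python
from dataclasses import dataclass

@dataclass
class ToulminArgument:
    """The six elements of the Toulmin model, as listed in the abstract.
    Field names follow the paper's abbreviations (C, D, W, B, Q, R)."""
    claim: str    # C: the statement being justified
    data: str     # D: the facts put forward in support
    warrant: str  # W: licenses the step from data to claim
    backing: str  # B: authorizes the warrant (the target of LJ1)
    qualifier: str = "presumably"  # Q: the force of the claim
    rebuttal: str = ""             # R: condition defeating the claim

# A hypothetical hard-case instance, invented for illustration.
arg = ToulminArgument(
    claim="The defendant owes damages.",
    data="The defendant's dog injured the plaintiff.",
    warrant="Owners are liable for harm caused by their animals.",
    backing="Art. X of the civil code (hypothetical).",
    rebuttal="unless the plaintiff provoked the animal",
)
```

A structure of this kind could serve as a starting point for the "related AI systems" the authors mention, with an LJ1 record attached to the backing field to capture its eligibility justification.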
Smart criminal justice: exploring the use of algorithms in the Swiss criminal justice system
Monika Simmler, Simone Brunner, Giulia Canova, Kuno Schedler
Artificial Intelligence and Law 31(2): 213-237. Pub Date: 2022-03-14. DOI: 10.1007/s10506-022-09310-1
In the digital age, the use of advanced technology is becoming a new paradigm in police work, criminal justice, and the penal system. Algorithms promise to predict delinquent behaviour, identify potentially dangerous persons, and support crime investigation. Algorithm-based applications are often deployed in this context, laying the groundwork for a 'smart criminal justice'. In this qualitative study, based on 32 interviews with criminal justice and police officials, we explore why and to what extent such a smart criminal justice system has already been established in Switzerland, and the benefits perceived by users. Drawing upon this research, we address the spread, application, technical background, institutional implementation, and psychological aspects of the use of algorithms in the criminal justice system. We find that the Swiss criminal justice system is already significantly shaped by algorithms, a change motivated by political expectations and demands for efficiency. Until now, algorithms have been used only at a low level of automation and technical complexity, and the levels of benefit perceived vary. The study also identifies a need for critical evaluation and research-based optimization of the implementation of advanced technology: societal implications, as well as the legal foundations of algorithm use, are often insufficiently taken into account. By discussing the main challenges to and issues with algorithm use in this field, this work lays the foundation for further research and debate on how to guarantee that 'smart' criminal justice is actually carried out smartly.
The winter, the summer and the summer dream of artificial intelligence in law
Enrico Francesconi
Artificial Intelligence and Law 30(2): 147-161. Pub Date: 2022-02-03. DOI: 10.1007/s10506-022-09309-8
This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline and of possible future perspectives. In this respect, I go through the different seasons of AI research (and of AI and Law in particular): from the Winter of AI, a period of mistrust in AI (from the eighties until the early nineties), to the Summer of AI, the current period of great interest in the discipline and of high expectations. One of the results of the first decades of AI research is that "intelligence requires knowledge". Since its inception, the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may foster the development of the Semantic Web as well as of AI systems. Finally, I give my view of the potential of AI development, taking into account both technological opportunities and theoretical limits.
Rethinking the field of automatic prediction of court decisions
Masha Medvedeva, Martijn Wieling, Michel Vols
Artificial Intelligence and Law 31(1): 195-212. Pub Date: 2022-01-25. DOI: 10.1007/s10506-021-09306-3
In this paper, we discuss previous research on the automatic prediction of court decisions. We define the differences between outcome identification, outcome-based judgement categorisation, and outcome forecasting, and review how various studies fall into these categories. We discuss how important it is to understand the legal data one works with in order to determine which of these tasks can be performed. Finally, we reflect on the needs of the legal discipline regarding the analysis of court judgements.
Counterfactuals for causal responsibility in legal contexts
Holger Andreas, Matthias Armgardt, Mario Gunther
Artificial Intelligence and Law 31(1): 115-132. Pub Date: 2022-01-24. DOI: 10.1007/s10506-021-09307-2
We define a formal semantics of conditionals based on normatively ideal worlds. Such worlds are described informally by Armgardt (in: Gabbay D, Magnani L, Park W, Pietarinen A-V (eds) Natural arguments: a tribute to John Woods, College Publications, London, pp 699-708, 2018) to address well-known problems of the counterfactual approach to causation. Drawing on Armgardt's proposal, we use iterated conditionals to analyse causal relations in scenarios of multi-agent interaction. This results in a refined counterfactual approach to causal responsibility in legal contexts, one that solves overdetermination problems in an intuitively accessible manner.
Lawmaps: enabling legal AI development through visualisation of the implicit structure of legislation and lawyerly process
Scott McLachlan, Evangelia Kyrimi, Kudakwashe Dube, Norman Fenton, Lisa C. Webley
Artificial Intelligence and Law 31(1): 169-194. Pub Date: 2022-01-24. DOI: 10.1007/s10506-021-09298-0
Visual modelling and information visualisation have contributed immensely to understanding and to computerisation advances in many domains, yet they remain unexplored for the benefit of the law and legal practice. This paper investigates the challenge of modelling and expressing structures and processes in legislation and the law by using visual modelling and information visualisation (InfoVis) to make legal knowledge, practice, and knowledge formalisation more accessible as a basis for legal AI. The paper uses a subset of the well-defined Unified Modelling Language (UML) to visually express the structure and process of legislation and the law as visual flow diagrams called lawmaps, which form the basis for further formalisation. A lawmap development methodology is presented and evaluated by creating a set of lawmaps for the practice of conveyancing and for the Landlord and Tenant Act 1954 of the United Kingdom. This paper presents the first of a new breed of preliminary solutions capable of application across all aspects, from legislation to practice, and of accelerating the development of legal AI.
Black is the new orange: how to determine AI liability
Paulo Henrique Padovan, Clarice Marinho Martins, Chris Reed
Artificial Intelligence and Law 31(1): 133-167. Pub Date: 2022-01-15. DOI: 10.1007/s10506-022-09308-9
Autonomous artificial intelligence (AI) systems can exhibit unpredictable behaviour that causes loss or damage to individuals, and intricate questions must be resolved to establish how courts should determine liability. Until recently, understanding the inner workings of "black boxes" has been exceedingly difficult; however, explainable artificial intelligence (XAI) can help untangle the complex problems that arise with autonomous AI systems. In this context, this article sets out the technical explanations that XAI can provide and shows how suitable explanations for liability can be reached in court. It analyses whether existing liability frameworks, in both civil and common law tort systems, can address legal concerns related to AI with the support of XAI. Lastly, it argues that the further development and adoption of XAI should allow AI liability cases to be decided under current legal and regulatory rules until new liability regimes for AI are enacted.