In 1987 the first International Conference on Artificial Intelligence and Law was held in Boston, Massachusetts. The Conference Chair of this inaugural event was Carole D. Hafner, who sadly passed away in 2015. Carole's contribution to the AI and Law community cannot be overstated; she was instrumental in the establishment of the International Association for AI and Law (IAAIL), the ICAIL conference and the AI and Law journal, and she served as IAAIL Secretary/Treasurer for many years. Carole's research contributions on conceptual information retrieval of legal knowledge were presented at the very first ICAIL conference and in later years she developed work on a variety of topics including text analysis, case-based reasoning and ontologies. Carole's research contributions leave a legacy that is still of relevance for today's research in AI and Law. Whilst the community mourns the loss of Carole, it stands proud to be building on the foundations that she established within this research field, both in terms of her research contributions and in terms of the conference itself. To honour Carole, a best paper prize in her name is being awarded at ICAIL 2015.

The fifteenth edition of the International Conference on Artificial Intelligence and Law provides a program presenting a wide variety of work to tackle the diverse topics that are manifest in the goal of developing artificial intelligence approaches and applications for the legal domain. The program includes full papers, research abstracts, system demonstrations, tutorials, workshops and a doctoral consortium. Within the program, advances are presented both on topics that are long-standing staples of AI and Law research and on emerging topics that have more recently become areas of concern for the field. Theoretical work in the tradition of artificial intelligence research is prominent within the program, but applications to drive forward the transfer of knowledge from the research lab into the field are also a welcome feature. The program contains some events intended to reach out to a variety of communities and audiences; the diverse workshop program includes a multilingual workshop to promote inclusivity of AI and Law researchers from non-English speaking countries, and a doctoral consortium is being held to welcome and encourage student researchers who are new to the field.
{"title":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","authors":"Ted Sichelman, Katie Atkinson","doi":"10.1145/2746090","DOIUrl":"https://doi.org/10.1145/2746090","url":null,"abstract":"In 1987 the first International Conference on Artificial Intelligence and Law was held in Boston, Massachusetts. The Conference Chair of this inaugural event was Carole D. Hafner, who sadly passed away in 2015. Carole's contribution to the AI and Law community cannot be overstated; she was instrumental in the establishment of the International Association for AI and Law (IAAIL), the ICAIL conference and the AI and Law journal, and she served as IAAIL Secretary/Treasurer for many years. Carole's research contributions on conceptual information retrieval of legal knowledge were presented at the very first ICAIL conference and in later years she developed work on a variety of topics including text analysis, case-based reasoning and ontologies. Carole's research contributions leave a legacy that is still of relevance for today's research in AI and Law. Whilst the community mourns the loss of Carole, it stands proud to be building on the foundations that she established within this research field, both in terms of her research contributions and in terms of the conference itself. To honour Carole, a best paper prize in her name is being awarded at ICAIL 2015. \u0000 \u0000The fifteenth edition of the International Conference on Artificial Intelligence and Law provides a program presenting a wide variety of work to tackle the diverse topics that are manifest in the goal of developing artificial intelligence approaches and applications for the legal domain. The program includes full papers, research abstracts, system demonstrations, tutorials, workshops and a doctoral consortium. Within the program, advances are presented both on topics that are long standing staples of AI and Law research and on emerging topics that have more recently become areas of concern for the field. Theoretical work in the tradition of artificial intelligence research is prominent within the program, but applications to drive forward the transfer of knowledge from the research lab into the field are also a welcome feature of the program. The program contains some events intended to reach out to a variety of communities and audiences; the diverse workshop program includes a multilingual workshop to promote inclusivity of AI and Law researchers from non-English speaking countries, and a doctoral consortium is being held to welcome and encourage student researchers who are new to the field.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114154195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Boella, Luigi Di Caro, Michele Graziadei, Loredana Cupi, C. Salaroglio, Llio Humphreys, Hristo Konstantinov, K. Markó, L. Robaldo, Claudio Ruffini, K. Simov, Andrea Violato, V. Stroetmann
In this paper we describe how the EUCases FP7 project is addressing the problem of lifting Legal Open Data to Linked Open Data, in order to develop new applications for the legal information provision market, by enriching documents structurally (first of all, with navigable references among legal texts) and semantically (with concepts from ontologies and classification). We first describe the social and economic need for breaking the accessibility barrier in legal information in the EU, then the technological challenges, and finally we explain how the EUCases project is addressing them through a combination of Human Language Technologies.
{"title":"Linking legal open data: breaking the accessibility and language barrier in european legislation and case law","authors":"G. Boella, Luigi Di Caro, Michele Graziadei, Loredana Cupi, C. Salaroglio, Llio Humphreys, Hristo Konstantinov, K. Markó, L. Robaldo, Claudio Ruffini, K. Simov, Andrea Violato, V. Stroetmann","doi":"10.1145/2746090.2746106","DOIUrl":"https://doi.org/10.1145/2746090.2746106","url":null,"abstract":"In this paper we describe how the EUCases FP7 project is addressing the problem of lifting Legal Open Data to Linked Open Data to develop new applications for the legal information provision market by enriching structurally the documents (first of all with navigable references among legal texts) and semantically (with concepts from ontologies and classification). First we describe the social and economic need for breaking the accessibility barrier in legal information in the EU, then we describe the technological challenges and finally we explain how the EUCases project is addressing them by a combination of Human Language Technologies.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127459115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law for the ten-year period from 2005 through 2014. It represents a meta-level study of some of the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in the case of two earlier works on the topic, it pursues answers to the questions: how good is the system, algorithm or proposal? how reliable is the approach or technique? and, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.
{"title":"The role of evaluation in AI and law: an examination of its different forms in the AI and law journal","authors":"Jack G. Conrad, John Zeleznikow","doi":"10.1145/2746090.2746116","DOIUrl":"https://doi.org/10.1145/2746090.2746116","url":null,"abstract":"This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law for the ten-year period from 2005 through 2014. It represents a meta-level study of some the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work conducted on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in the case of two earlier works on the topic, it pursues answers to the questions: how good is the system, algorithm or proposal?, how reliable is the approach or technique?, and, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132975037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In civil litigation, documents found to be relevant to a production request are usually subjected to an exhaustive manual review for privilege (e.g., attorney-client privilege or the attorney work-product doctrine) in order to ensure that materials that could be withheld are not inadvertently revealed. Usually, the majority of the cost associated with such a review process is due to having human annotators linearly review, for privilege, the documents that the classifier predicts as responsive. This paper investigates the extent to which the privilege judgments obtained from the annotators are useful for training privilege classifiers. The judgments utilized in this paper are derived from the privilege test collection created during the 2010 TREC Legal Track. The collection involves two classes of annotators: "expert" judges, who are topic originators called the Topic Authority (TA), and "non-expert" judges called assessors. The questions asked in this paper are: (1) Are cheaper, non-expert annotations from assessors sufficient for classifier training? (2) Does the process of selecting special (adjudicated) documents for training affect the classifier results? The paper studies the effect of training classifiers on multiple annotators (with different expertise) and training sets (with and without selection bias). The findings show that automated privilege classifiers trained on the unbiased set of annotations yield the best results, while the biased annotations (from experts and non-experts) are of comparable usefulness for classifier training.
{"title":"Evaluating expertise and sample bias effects for privilege classification in e-discovery","authors":"J. K. Vinjumur","doi":"10.1145/2746090.2746101","DOIUrl":"https://doi.org/10.1145/2746090.2746101","url":null,"abstract":"In civil litigation, documents that are found to be relevant to a production request are usually subjected to an exhaustive manual review for privilege (e.g, for attorney-client privilege, attorney-work product doctrine) in order to be sure that materials that could be withheld is not inadvertently revealed. Usually, the majority of the cost associated in such review process is due to the procedure of having human annotators linearly review documents (for privilege) that the classifier predicts as responsive. This paper investigates the extent to which such privilege judgments obtained by the annotators are useful for training privilege classifiers. The judgments utilized in this paper are derived from the privilege test collection that was created during the 2010 TREC Legal Track. The collection consists of two classes of annotators: \"expert\" judges, who are topic originators called the Topic Authority (TA) and \"non-expert\" judges called assessors. The questions asked in this paper are; (1) Are cheaper, non-expert annotations from assessors sufficient for classifier training? (2) Does the process of selecting special (adjudicated) documents for training affect the classifier results? The paper studies the effect of training classifiers on multiple annotators (with different expertise) and training sets (with and without selection bias). The findings in this paper show that automated privilege classifiers trained on the unbiased set of annotations yield the best results. The usefulness of the biased annotations (from experts and non-experts) for classifier training are comparable.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116257137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we use statistical machine learning to classify statutory texts in terms of highly specific functional categories. We focus on regulatory provisions from multiple US state jurisdictions, all dealing with the same general topic of public health system emergency preparedness and response. In prior work we have established that one can improve classification performance on one jurisdiction's statutory texts using texts from another jurisdiction. Here we describe a framework facilitating transfer of predictive models for classification of statutory texts among multiple state jurisdictions. Our results show that the classification performance improves as we employ an increasing number of models trained on data coming from different states.
{"title":"Transfer of predictive models for classification of statutory texts in multi-jurisdictional settings","authors":"Jaromír Šavelka, Kevin D. Ashley","doi":"10.1145/2746090.2746109","DOIUrl":"https://doi.org/10.1145/2746090.2746109","url":null,"abstract":"In this paper we use statistical machine learning to classify statutory texts in terms of highly specific functional categories. We focus on regulatory provisions from multiple US state jurisdictions, all dealing with the same general topic of public health system emergency preparedness and response. In prior work we have established that one can improve classification performance on one jurisdiction's statutory texts using texts from another jurisdiction. Here we describe a framework facilitating transfer of predictive models for classification of statutory texts among multiple state jurisdictions. Our results show that the classification performance improves as we employ an increasing number of models trained on data coming from different states.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131497250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hana Chockler, N. Fenton, Jeroen Keppens, D. Lagnado
An important challenge in the field of law is the attribution of responsibility and blame to individuals and organisations for a given harm. Attributing legal responsibility often involves (but is not limited to) assessing to what extent certain parties caused harm, or could have prevented harm from occurring. This paper presents a causal framework for performing such assessments that is particularly suitable for the analysis of complex legal cases, where the actions of many parties have had a direct or indirect effect on the harm that occurred. The framework is evaluated by means of a case study applying it to the Baby P. case, a high-profile UK case of child abuse leading to a child's death, which has been the subject of a number of public inquiries. The paper concludes with a discussion of the framework, including a roadmap of future work and barriers to adoption.
{"title":"Causal analysis for attributing responsibility in legal cases","authors":"Hana Chockler, N. Fenton, Jeroen Keppens, D. Lagnado","doi":"10.1145/2746090.2746102","DOIUrl":"https://doi.org/10.1145/2746090.2746102","url":null,"abstract":"An important challenge in the field of law is the attribution of responsibility and blame to individuals and organisations for a given harm. Attributing legal responsibility often involves (but is not limited to) assessing to what extent certain parties have caused harm, or could have prevented harm from occurring. This paper presents a causal framework for performing such assessments that is particularly suitable for the analysis of complex legal cases, where the actions of many parties have had a direct or indirect effect on the harm that did occur. This framework is evaluated by means of a case study that applies it to the Baby P. case, a high-profile case of child abuse leading to the death of a child that has been the subject of a number of public inquiries in the UK. The paper concludes with a discussion of the framework, including a roadmap of future work and barriers to adoption.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132790095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Latifa Al-Abdulkarim, Katie Atkinson, Trevor J. M. Bench-Capon
In this paper we revisit reasoning with legal cases, with a view to articulating the relationships between issues, factors, facts and values, and to identifying areas for future work on these topics. We start from the different ways in which attempts have been made to go beyond a fortiori reasoning from the precedent base, so that conclusions not fully justified by the precedents can be drawn. We then use a particular example domain taken from the literature to illustrate our preferred approach and to relate factors and values. From this we observe that much current work depends critically on the ascription of factors to cases in a Boolean manner, while in practice there are compelling reasons to see the presence of factors as a matter of degree. On the basis of our observations we make suggestions for directions of future work on this topic.
{"title":"Factors, issues and values: revisiting reasoning with cases","authors":"Latifa Al-Abdulkarim, Katie Atkinson, Trevor J. M. Bench-Capon","doi":"10.1145/2746090.2746103","DOIUrl":"https://doi.org/10.1145/2746090.2746103","url":null,"abstract":"In this paper we revisit reasoning with legal cases, with a view to articulating the relationships between issues, factors, facts and values, and to identifying areas for future work on these topics. We start from the different ways in which attempts have been made to go beyond a fortori reasoning from the precedent base, so that conclusions not fully justified by the precedents can be drawn. We then use a particular example domain taken from the literature to illustrate our preferred approach and to relate factors and values. From this we observe that much current work depends critically on the ascription of factors to cases in a Boolean manner, while in practice there are compelling reasons to see the presence of factors as a matter of degree. On the basis of our observations we make suggestions for the directions of future work on this topic.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114282297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a formal model of dialogues in court using a gradual argumentation model. The gradual argumentation model provides computations for the strengths of arguments in an argumentation framework and for the degrees of justification of arguments under a gradual argumentation semantics. In dialogues in court the adjudicator plays a neutral or an active role in deciding about burdens and standards of proof, whether in the common law system or in the civil law system. The notions of strength and degree of justification are applied to define the corresponding standards of proof, which are suggested as measures to assess the burden of production in the argumentation phase and the burden of persuasion in the decision phase. Applying the gradual argumentation model, this paper studies a formal model of dialogues in court. Specifically, several new moves for the adjudicator are given within an updated communication language, protocol rules for the adjudicator are defined in accordance with that language, a new notion of a Record of Commitments (RC) for the adjudicator is added in order to record qualified commitments, and the adjudicator's options in the decision phase are discussed. The paper tests the new model through a criminal case study.
{"title":"Modelling dialogues in court using a gradual argumentation model: a case study","authors":"B. Wei, Jinhua Huang","doi":"10.1145/2746090.2746104","DOIUrl":"https://doi.org/10.1145/2746090.2746104","url":null,"abstract":"This paper presents a formal model of dialogues in court using a gradual argumentation model. The gradual argumentation model provides computations for the strengths of arguments in an argumentation framework and the degrees of justification of arguments in a gradual argumentation semantic. In dialogues in court the adjudicator plays a neutral or active role to decide about burdens and standards of proof in the common law system or in the civil law system. The notions of strength and degree of justification are applied to define the corresponding standards of proof which are suggested as the measurements to assess the burden of production in the argumentation phase and the burden of persuasion in the decision phase. With application of the gradual argumentation model, this paper studies a formal model of dialogues in court. Specifically several new moves for the adjudicator are given within an updated communication language, protocol rules are defined for the adjudicator in accordance with the updated communication language are defined, a new notion of Record of commitments (RC) for the adjudicator is added in order to record qualified commitments, and the adjudicator's options in the decision phase are discussed. This paper tests the new model through a criminal case study.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122020018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe our work towards a method for the formal analysis of law. The Dutch Immigration and Naturalization Service (IND) is responsible for the implementation and execution of complex and ever-changing regulations. Given the number of cases to handle, the use of IT systems is a necessity. Since 2007 the IND, aware of its dependence on trustworthy methods to assure the correct implementation of law in its operations and services, has been working on an approach that enables it to 'translate' legal rules expressed in natural language into specifications in computer-executable form. In this paper we explain this approach and illustrate it with some concrete examples. The work is part of a larger innovation programme that we conduct collaboratively within a virtual collaboration called the 'Blue Chamber'.
{"title":"At your service, on the definition of services from sources of law","authors":"T. Engers, R. V. Doesburg","doi":"10.1145/2746090.2746115","DOIUrl":"https://doi.org/10.1145/2746090.2746115","url":null,"abstract":"In this paper, we describe our work towards a method for a formal analysis of law. The Dutch Immigration and Naturalization Service (IND) is responsible for the implementation and execution of complex and ever changing regulations. Given the amount of cases to handle, the use of IT systems is a necessity. From 2007 the IND, being aware of their dependence on trustworthy methods to assure the correct implementation of law into their operations and services, have been working on developing an approach that enables them to 'translate' the legal rules expressed in natural language to specifications in computer executable form. In this paper, we will explain this approach and illustrate it with some concrete examples. The work is part of a larger innovation programme initiative that we collaboratively conduct within a virtual collaboration, called the 'Blue Chamber'.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129131329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohamed Reda Bouadjenek, S. Sanner, Gabriela Ferraro
Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and that makes prior art search a daunting but necessary task in the patent application process. In this work, we investigate the efficacy of prior art search strategies from the perspective of an inventor who wishes to assess the patentability of their ideas before writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks; e.g., the description section may contain hundreds or even thousands of words. While the length of such queries suggests query reduction strategies to remove irrelevant terms, the intentional obfuscation and general language used in patents suggest that it may help to expand queries with additional relevant terms. To assess the trade-offs among these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent-specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off between writing effort and retrieval efficacy (i.e., querying with the description section leads to only marginal improvements), and that for such relatively short queries generic query expansion methods help.
{"title":"A study of query reformulation for patent prior art search with partial patent applications","authors":"Mohamed Reda Bouadjenek, S. Sanner, Gabriela Ferraro","doi":"10.1145/2746090.2746092","DOIUrl":"https://doi.org/10.1145/2746090.2746092","url":null,"abstract":"Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and which makes prior art search a daunting, but necessary task in the patent application process. In this work, we seek to investigate the efficacy of prior art search strategies from the perspective of the inventor who wishes to assess the patentability of their ideas prior to writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks, e.g., the description section may contain hundreds or even thousands of words. While the length of such queries may suggest query reduction strategies to remove irrelevant terms, intentional obfuscation and general language used in patents suggests that it may help to expand queries with additionally relevant terms. To assess the trade-offs among all of these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off in terms of writing effort vs. retrieval efficacy (i.e., querying with the description sections only lead to marginal improvements) and that for such relatively short queries, generic query expansion methods help.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116335605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}