{"title":"The role of evaluation in AI and law: an examination of its different forms in the AI and law journal","authors":"Jack G. Conrad, John Zeleznikow","doi":"10.1145/2746090.2746116","DOIUrl":null,"url":null,"abstract":"This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law for the ten-year period from 2005 through 2014. It represents a meta-level study of some the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work conducted on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in the case of two earlier works on the topic, it pursues answers to the questions: how good is the system, algorithm or proposal?, how reliable is the approach or technique?, and, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.","PeriodicalId":309125,"journal":{"name":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on Artificial Intelligence and Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2746090.2746116","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
This paper explores the presence and forms of evaluation in articles published in the journal Artificial Intelligence and Law over the ten-year period from 2005 through 2014. It represents a meta-level study of some of the most significant works produced by the AI and Law community, in this case nearly 140 research articles published in the AI and Law journal. It also compares its findings to previous work on evaluation appearing in the Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL). In addition, the paper highlights works harnessing performance evaluation as one of their chief scientific tools and the means by which they use it. It extends the argument for why evaluation is essential in formal Artificial Intelligence and Law reports such as those in the journal. As in the case of two earlier works on the topic, it pursues answers to the questions: How good is the system, algorithm, or proposal? How reliable is the approach or technique? And, ultimately, does the method work? The paper investigates the role of performance evaluation in scientific research reports, underscoring the argument that a performance-based 'ethic' signifies a level of maturity and scientific rigor within a community. In addition, the work examines recent publications that address the same critical issue within the broader field of Artificial Intelligence.