Artificial intelligence and real decisions: predictive systems and generative AI vs. emotive-cognitive legal deliberations

Francesco Contini, Alessandra Minissale, Stina Bergman Blix

Frontiers in Sociology, vol. 9, article 1417766. Published 2024-10-30. DOI: 10.3389/fsoc.2024.1417766. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11566138/pdf/
The use of artificial intelligence in law represents one of the biggest challenges across different legal systems. Supporters of predictive systems believe that decision-making could become more efficient, consistent, and predictable through AI. European legislation and legal scholars, however, identify areas where AI developments are high-risk or too dangerous to be used in judicial proceedings. In this article, we contribute to this debate by problematizing predictive systems based on previous judgments and the growing use of generative AI in judicial proceedings. Through illustrations from real criminal cases in Italian courts and prosecution offices, we show misalignments between the functions of AI systems and the essential features of legal decision-making, and we identify possible legitimate usages. We argue that current predictive systems and generative AI flatten the complexity of judicial proceedings, the dynamics of fact-finding, and legal encoding. They reduce the delivery of justice to statistical connections between data or metadata, cutting off the emotive-cognitive process that lies at the core of legal decision-making.