Jerrold Soh · Legal Studies · Published 2023-02-15 · DOI: 10.1017/lst.2022.52
Legal dispositionism and artificially-intelligent attributions
Abstract It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless imposed obligations on AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions, and leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the paper contends that AI systems are better conceptualised instead as situational characters whose actions remain constrained by their programming. Properly viewing AI systems as such illuminates how existing legal doctrines could be sensibly applied to AI and reinforces emerging calls for placing greater scrutiny on the broader AI ecosystem.