{"title":"To err is human: Managing the risks of contracting AI systems","authors":"Maarten Herbosch","doi":"10.1016/j.clsr.2025.106110","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial intelligence (AI) increasingly influences contract law. Applications like virtual home assistants can form contracts on behalf of users, while other AI tools can assist parties in deciding whether to contract. The advent of Generative AI has further accelerated and broadened the proliferation of such applications. However, AI systems are inherently imperfect, sometimes leading to unexpected or undesirable contracts, raising concerns about the legal protection of AI deployers.</div><div>Some authors have suggested that autonomous AI deployment cannot lead to a legally binding contract in the absence of a human “intent”. Others have argued that the system deployer is completely unprotected in cases of undesirable AI output. They argue that that deployment implies that the deployer should bear the risk of any mistake.</div><div>This article challenges these views by leveraging existing contract formation and mistake frameworks. Traditional analysis demonstrates that AI deployment can produce valid contracts. It also suggests that deployers may invoke the unilateral mistake doctrine, drawing parallels to clerical errors in human contracts. While AI outputs are probabilistic and unpredictable, similar characteristics apply to human decision-making. The potential benefits of AI development justify affording AI deployers protections analogous to those provided in traditional scenarios.</div><div>To enhance protection, deployers should use high-performing systems with safeguards such as oversight mechanisms and registration tools. As industry standards evolve, these safeguards will become more defined. The analysis concludes that current contract law frameworks are flexible enough to accommodate AI systems, negating the need for a complete overhaul.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106110"},"PeriodicalIF":3.3000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Law & Security Review","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0267364925000056","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
引用次数: 0
Abstract
Artificial intelligence (AI) increasingly influences contract law. Applications like virtual home assistants can form contracts on behalf of users, while other AI tools can assist parties in deciding whether to contract. The advent of Generative AI has further accelerated and broadened the proliferation of such applications. However, AI systems are inherently imperfect, sometimes leading to unexpected or undesirable contracts, raising concerns about the legal protection of AI deployers.
Some authors have suggested that autonomous AI deployment cannot lead to a legally binding contract in the absence of a human “intent”. Others have argued that the system deployer is left entirely unprotected in cases of undesirable AI output: on this view, deployment implies that the deployer should bear the risk of any mistake.
This article challenges these views by leveraging existing contract formation and mistake frameworks. Traditional analysis demonstrates that AI deployment can produce valid contracts. It also suggests that deployers may invoke the unilateral mistake doctrine, drawing parallels to clerical errors in human contracts. While AI outputs are probabilistic and unpredictable, similar characteristics apply to human decision-making. The potential benefits of AI development justify affording AI deployers protections analogous to those provided in traditional scenarios.
To enhance protection, deployers should use high-performing systems with safeguards such as oversight mechanisms and registration tools. As industry standards evolve, these safeguards will become more clearly defined. The analysis concludes that current contract law frameworks are flexible enough to accommodate AI systems, obviating the need for a complete overhaul.
Journal Introduction
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, and computer security, among many others. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in Europe and the Pacific Rim. It seeks papers within the subject area that display high-quality legal analysis and new lines of legal thought or policy development, going beyond mere description of the subject area, however accurate that may be.