Use of personal information for artificial intelligence learning data under the Personal Information Protection Act: the case of Lee-Luda, an artificial-intelligence chatbot in South Korea
{"title":"Use of personal information for artificial intelligence learning data under the Personal Information Protection Act: the case of Lee-Luda, an artificial-intelligence chatbot in South Korea","authors":"S. Jeon, Myung Seok Go, Ju-hyun Namgung","doi":"10.1080/10192557.2022.2117483","DOIUrl":null,"url":null,"abstract":"ABSTRACT The data from 10 billion sentences, originally collected for a dating counselling service, were used to develop and operate an AI chatbot, Lee-Luda. However, the chatbot company was fined by the South Korean government for violating the Personal Information Protection Act (PIPA). The case of Lee-Luda is the first case in South Korea that raised the question as to whether the use of personal information for AI learning data falls outside the scope of the original purpose of collection. Although the Lee-Luda is a South Korean case, since the prohibition on using personal information for purposes other than the original purpose of collection is a globally accepted principle, it is expected that the Lee-Luda case will provide meaningful implications not only for South Korea but also for law enforcement in other countries. Similar ethical and legal issues will likely arise in other countries in the foreseeable future because using personal information as learning data for an AI program may conflict with the existing legal principle that requires using personal information only for the original purpose of collection. In this paper, we analyse why the Lee-Luda program’s use of personal information for AI learning data was ruled to violate the Personal Information Protection Act. In addition, we suggest alternative ways for AI services that use personal information as learning data to comply with the law. Therefore, we believe that this paper provides a useful case study for AI operators in other countries about AI programs and personal information protection.","PeriodicalId":42799,"journal":{"name":"Asia Pacific Law Review","volume":"31 1","pages":"55 - 72"},"PeriodicalIF":1.0000,"publicationDate":"2022-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asia Pacific Law Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1080/10192557.2022.2117483","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"LAW","Score":null,"Total":0}
Citations: 2
Abstract
Data from 10 billion sentences, originally collected for a dating counselling service, were used to develop and operate an AI chatbot, Lee-Luda. The chatbot's operator was subsequently fined by the South Korean government for violating the Personal Information Protection Act (PIPA). Lee-Luda is the first case in South Korea to raise the question of whether using personal information as AI learning data falls outside the scope of the original purpose of collection. Although Lee-Luda is a South Korean case, the prohibition on using personal information for purposes other than the original purpose of collection is a globally accepted principle, so the case is expected to carry meaningful implications not only for South Korea but also for law enforcement in other countries. Similar ethical and legal issues are likely to arise elsewhere in the foreseeable future, because using personal information as learning data for an AI program may conflict with the established legal principle that personal information may be used only for the original purpose of collection. In this paper, we analyse why the Lee-Luda program's use of personal information as AI learning data was ruled to violate PIPA. In addition, we suggest ways in which AI services that use personal information as learning data can comply with the law. We therefore believe this paper offers a useful case study on AI programs and personal information protection for AI operators in other countries.