{"title":"How to Regulate Large Language Models for Responsible AI","authors":"J. Berengueres","doi":"10.1109/TTS.2024.3403681","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize a more responsible AI in LLMs. The key finding is that applying safeguards before the training occurs aligns with established engineering practices of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"191-197"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10536000","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10536000/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. These capabilities, however, come with ethical concerns. Ethical oversights in several LLM-based products include (i) a lack of content or source attribution and (ii) a lack of transparency about the data used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize more responsible AI in LLMs. The key finding is that applying safeguards before training occurs aligns with the established engineering practice of addressing issues at the source; however, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but embraced them once consumer attitudes evolved.