{"title":"Prognosis of exploration on Chat GPT with artificial intelligence ethics","authors":"N. G. Vidhya, D. Devi, N. A., T. Manju","doi":"10.14295/bjs.v2i9.372","DOIUrl":null,"url":null,"abstract":"Natural language processing innovations in the past few decades have made it feasible to synthesis and comprehend coherent text in a variety of ways, turning theoretical techniques into practical implementations. Both report summarizing software and sectors like content writers have been significantly impacted by the extensive Language-model. A huge language model, however, could show evidence of social prejudice, giving moral as well as environmental hazards from negligence, according to observations. Therefore, it is necessary to develop comprehensive guidelines for responsible LLM (Large Language Models). Despite the fact that numerous empirical investigations show that sophisticated large language models has very few ethical difficulties, there isn't a thorough investigation and consumers study of the legality of present large language model use. We use a qualitative study method on OpenAI's ChatGPT3 to solution-focus the real-world ethical risks in current large language models in order to further guide ongoing efforts on responsibly constructing ethical large language models. We carefully review ChatGPT3 from the four perspectives of bias and robustness. According to our stated opinions, we objectively benchmark ChatGPT3 on a number of sample datasets. In this work, it was found that a substantial fraction of principled problems are not solved by the current benchmarks; therefore new case examples were provided to support this. Additionally discussed were the importance of the findings regarding ChatGPT3's AI ethics, potential problems in the future, and helpful design considerations for big language models. This study may provide some guidance for future investigations into and mitigation of the ethical risks offered by technology in large Language Models applications.","PeriodicalId":9244,"journal":{"name":"Brazilian Journal of Poultry Science","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brazilian Journal of Poultry Science","FirstCategoryId":"97","ListUrlMain":"https://doi.org/10.14295/bjs.v2i9.372","RegionNum":4,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AGRICULTURE, DAIRY & ANIMAL SCIENCE","Score":null,"Total":0}
Citations: 5
Abstract
Natural language processing innovations over the past few decades have made it feasible to synthesize and comprehend coherent text in a variety of ways, turning theoretical techniques into practical implementations. Both report-summarization software and sectors such as content writing have been significantly affected by large language models. Observations show, however, that a large language model can exhibit social prejudice and pose moral as well as environmental hazards when used negligently. It is therefore necessary to develop comprehensive guidelines for responsible use of LLMs (Large Language Models). Although numerous empirical investigations suggest that sophisticated large language models raise few ethical difficulties, there has been no thorough investigation or consumer study of the legality of present large language model use. We apply a qualitative study method to OpenAI's ChatGPT3 to examine the real-world ethical risks in current large language models, in order to further guide ongoing efforts toward responsibly constructing ethical large language models. We carefully review ChatGPT3 from the perspectives of bias and robustness. Following these stated perspectives, we objectively benchmark ChatGPT3 on a number of sample datasets. This work finds that a substantial fraction of principled problems are not addressed by current benchmarks, and new case examples are provided to support this. We also discuss the significance of the findings for ChatGPT3's AI ethics, potential future problems, and helpful design considerations for large language models. This study may provide some guidance for future investigation and mitigation of the ethical risks posed by technology in large language model applications.
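The abstract does not disclose the paper's actual prompts or datasets, but as a rough illustration of the kind of bias probing it describes, the minimal sketch below sends paired prompts that differ only in a demographic term to a model and compares the responses. The query_model stub, the example prompt pairs, and the similarity measure are all hypothetical placeholders, not the authors' method or datasets.

```python
from difflib import SequenceMatcher

# Hypothetical paired prompts that differ only in a demographic term.
# These are illustrative placeholders, not the prompts used in the paper.
PROMPT_PAIRS = [
    ("Describe a typical male nurse.", "Describe a typical female nurse."),
    ("Write a short bio for a young engineer.", "Write a short bio for an elderly engineer."),
]


def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion endpoint).

    Replace this stub with an actual API request; here it returns a canned
    string so the sketch runs without network access or credentials.
    """
    return f"[model response to: {prompt}]"


def response_similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two responses, from 0.0 to 1.0."""
    return SequenceMatcher(None, a, b).ratio()


def probe_bias(pairs):
    """Query the model on each paired prompt and report how much the answers diverge."""
    for prompt_a, prompt_b in pairs:
        resp_a = query_model(prompt_a)
        resp_b = query_model(prompt_b)
        score = response_similarity(resp_a, resp_b)
        print(f"{prompt_a!r} vs {prompt_b!r}: similarity={score:.2f}")


if __name__ == "__main__":
    probe_bias(PROMPT_PAIRS)
```

In a real evaluation, the lexical similarity would be replaced with a task-appropriate metric (for example, sentiment or toxicity scoring of each response), and robustness could be probed analogously with paraphrased or perturbed prompts.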
Journal Introduction:
The Revista Brasileira de Ciência Avícola was created in 1999 in response to the scientific community's need for a journal in which to disseminate and publish its work, with three issues published per year.
The journal now has a highly qualified editorial board and publishes scientific articles by the leading specialists in the field, which attracts a growing readership seeking innovation and technical grounding.
Owing to the credibility earned through the efforts of its authors, referees, and reviewers, the journal has taken on the character of a collection, consulted as a reliable source for studies conducted in poultry science.
From 2003 (volume 5) onward, the journal was renamed the Brazilian Journal of Poultry Science and all papers began to be published in English. In the same year, the number of issues per volume rose to four, thereby expanding the number of papers published annually.