Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion
Siddhartha Vadlamudi
Nuclear Engineering International, 2015
DOI: 10.18034/ei.v3i2.519 (https://doi.org/10.18034/ei.v3i2.519)
Citations: 27
Abstract
Artificial intelligence (AI) offers numerous opportunities to contribute to the well-being of individuals and the stability of economies and societies, but it also raises a variety of novel ethical, legal, social, and technological challenges. Trustworthy AI (TAI) is based on the idea that trust forms the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies can therefore only ever realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative consequences associated with AI are proportionately high, particularly at scale. Most AI is in fact artificial narrow intelligence, designed to accomplish a specific task on previously curated data from a particular source. Because most AI models build on correlations, their predictions may fail to generalize to different populations or settings and may reinforce existing disparities and biases. Since the AI industry is highly imbalanced, and practitioners are already overwhelmed by other digital tools, there may be little capacity to catch errors. With this article, we aim to introduce the concept of TAI and its five foundational principles: (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. We further draw on these five principles to develop a data-driven framework for TAI and illustrate its application by outlining productive avenues for future research, particularly with regard to the distributed ledger technology-based realization of TAI.