AAAI 2023 Spring Symposium Report on "Socially Responsible AI for Well-Being"
Takashi Kido, Keiki Takadama
AI Magazine, vol. 44, no. 2, pp. 211–212, published 2023-06-20. DOI: 10.1002/aaai.12092
Abstract
The AAAI 2023 spring symposium on “Socially Responsible AI for Well-Being” was held at Hyatt Regency San Francisco Airport, California, from March 27th to 29th.
AI has great potential for human well-being but also carries the risk of unintended harm. For our well-being, AI needs to fulfill social responsibilities such as fairness, accountability, transparency, trust, privacy, safety, and security, not just productivity goals such as exponential growth and economic and financial supremacy. For example, AI diagnostic systems must not only provide reliable results (for example, highly accurate diagnoses with easy-to-understand explanations), but those results must also be socially acceptable: the data used to train the machine-learning models must not be biased, for instance, the amount of training data must not be skewed by race or location. As this example shows, AI decisions affect our well-being, which underscores the importance of discussing "what is socially responsible" across the many well-being situations of the coming AI era.
The first perspective, "(individual) responsible AI," aims to identify what mechanisms and issues should be considered in designing responsible AI for well-being. One of the goals of responsible AI for well-being is to provide accountable outcomes for our ever-changing health conditions. Since our environment often drives these changes in health, responsible AI for well-being is expected to offer responsible outcomes by understanding how our digital experiences affect our emotions and quality of life.
The second perspective, "socially responsible AI," aims to identify what mechanisms and issues should be considered to realize the social aspects of responsible AI for well-being. One aspect of social responsibility is fairness, that is, the results of AI should be equally helpful to all. The problem of "bias" in AI (and in humans) must be addressed to achieve fairness. Another aspect of social responsibility is the transferability of knowledge among people. For example, health-related knowledge an AI discovers for one person (for example, tips for a good night's sleep) may not help another person, meaning that such knowledge is not socially responsible. To address these problems, we must understand what degree of fairness is sufficient and find ways to ensure that machines provide socially responsible results without absorbing human bias.
Our symposium included 18 technical presentations over two and a half days. Presentation topics included (1) socially responsible AI, (2) communication and evidence for well-being, (3) facial expression and impression for well-being, (4) odor for well-being, (5) ethical AI, (6) robot interaction for social well-being, (7) communication and sleep for social well-being, (8) well-being studies, and (9) information and sleep for social well-being.
For example, Takashi Kido, Advanced Comprehensive Research Organization of Teikyo University in Japan, presented on the challenges of socially responsible AI for well-being. Oliver Bendel, FHNW School of Business in Switzerland, presented on increasing well-being through robotic hugs. Martin D. Aleksandrov, Freie Universität Berlin in Germany, presented on limiting inequalities in fair division with additive value preferences for indivisible social items. Melanie Swan, University College London in the United Kingdom, presented on quantum intelligence and responsible human-machine entities. Dragutin Petkovic, San Francisco State University in the United States, presented on the San Francisco State University Graduate Certificate in Ethical AI.
Our symposium provided participants with unique opportunities for researchers from diverse backgrounds to develop new ideas through innovative and constructive discussions. It also highlighted significant interdisciplinary challenges that will guide future advances in the AI community.
Takashi Kido and Keiki Takadama served as co-chairs of this symposium. The symposium papers will be published online at CEUR-WS.org.
About the Journal
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, conference, symposium, or workshop reports, and timely columns on topics of interest to AI scientists.