
AI & Society: Latest Publications

Examining the assumptions of AI hiring assessments and their impact on job seekers’ autonomy over self-representation
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-21 DOI: 10.1007/s00146-023-01783-1
Evgeni Aizenberg, Matthew J. Dennis, Jeroen van den Hoven
Abstract: In this paper, we examine the epistemological and ontological assumptions algorithmic hiring assessments make about job seekers’ attributes (e.g., competencies, skills, abilities) and the ethical implications of these assumptions. Given that both traditional psychometric hiring assessments and algorithmic assessments share a common set of underlying assumptions from the psychometric paradigm, we turn to literature that has examined the merits and limitations of these assumptions, gathering insights across multiple disciplines and several decades. Our exploration leads us to conclude that algorithmic hiring assessments are incompatible with attributes whose meanings are context-dependent and socially constructed. Such attributes call instead for assessment paradigms that offer space for negotiation of meanings between the job seeker and the employer. We argue that in addition to questioning the validity of algorithmic hiring assessments, this raises an often overlooked ethical impact on job seekers’ autonomy over self-representation: their ability to directly represent their identity, lived experiences, and aspirations. Infringement on this autonomy constitutes an infringement on job seekers’ dignity. We suggest beginning to address these issues through epistemological and ethical reflection regarding the choice of assessment paradigm, the means to implement it, and the ethical impacts of these choices. This entails a transdisciplinary effort that would involve job seekers, hiring managers, recruiters, and other professionals and researchers. Combined with a socio-technical design perspective, this may help generate new ideas regarding appropriate roles for human-to-human and human–technology interactions in the hiring process.
{"title":"Examining the assumptions of AI hiring assessments and their impact on job seekers’ autonomy over self-representation","authors":"Evgeni Aizenberg, Matthew J. Dennis, Jeroen van den Hoven","doi":"10.1007/s00146-023-01783-1","DOIUrl":"https://doi.org/10.1007/s00146-023-01783-1","url":null,"abstract":"Abstract In this paper, we examine the epistemological and ontological assumptions algorithmic hiring assessments make about job seekers’ attributes (e.g., competencies, skills, abilities) and the ethical implications of these assumptions. Given that both traditional psychometric hiring assessments and algorithmic assessments share a common set of underlying assumptions from the psychometric paradigm, we turn to literature that has examined the merits and limitations of these assumptions, gathering insights across multiple disciplines and several decades. Our exploration leads us to conclude that algorithmic hiring assessments are incompatible with attributes whose meanings are context-dependent and socially constructed. Such attributes call instead for assessment paradigms that offer space for negotiation of meanings between the job seeker and the employer. We argue that in addition to questioning the validity of algorithmic hiring assessments, this raises an often overlooked ethical impact on job seekers’ autonomy over self-representation: their ability to directly represent their identity, lived experiences, and aspirations. Infringement on this autonomy constitutes an infringement on job seekers’ dignity. We suggest beginning to address these issues through epistemological and ethical reflection regarding the choice of assessment paradigm, the means to implement it, and the ethical impacts of these choices. This entails a transdisciplinary effort that would involve job seekers, hiring managers, recruiters, and other professionals and researchers. Combined with a socio-technical design perspective, this may help generate new ideas regarding appropriate roles for human-to-human and human–technology interactions in the hiring process.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"62 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135513041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online consent: how much do we need to know?
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-13 DOI: 10.1007/s00146-023-01790-2
Bartlomiej Chomanski, Lode Lauwaert
Abstract: This paper argues, against the prevailing view, that the consent regular internet users typically give to privacy policies is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in the bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.
{"title":"Online consent: how much do we need to know?","authors":"Bartlomiej Chomanski, Lode Lauwaert","doi":"10.1007/s00146-023-01790-2","DOIUrl":"https://doi.org/10.1007/s00146-023-01790-2","url":null,"abstract":"Abstract This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135858627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Keep trusting! A plea for the notion of Trustworthy AI
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-12 DOI: 10.1007/s00146-023-01789-9
Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi, Viola Schiaffonati
Abstract: A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.
{"title":"Keep trusting! A plea for the notion of Trustworthy AI","authors":"Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi, Viola Schiaffonati","doi":"10.1007/s00146-023-01789-9","DOIUrl":"https://doi.org/10.1007/s00146-023-01789-9","url":null,"abstract":"Abstract A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135969112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Differences in stakeholders’ expectations of gendered robots in the field of psychotherapy: an exploratory survey
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-06 DOI: 10.1007/s00146-023-01787-x
Tatsuya Nomura, Tomohiro Suzuki, Hirokazu Kumazaki
Abstract: In the present study, qualitative and quantitative studies were conducted to explore differences between stakeholders in expectations of gendered robots, with a focus on their specific application in the field of psychotherapy. In Study I, semi-structured interviews were conducted with 18 experts in psychotherapy to extract categories of opinions regarding the use of humanoid robots in the field. Based on these extracted categories, in Study II, an online questionnaire survey was conducted to compare concrete expectations of the use of humanoid robots in psychotherapy between 50 experts and 100 nonexperts in psychotherapy. The results revealed that compared with the female participants, the male participants tended to prefer robots with a female appearance. In addition, compared with the experts, the nonexperts tended not to relate the performance of robots with their gender appearance, and compared with the other participant groups, the female expert participants had lower expectations of the use of robots in the field. These findings suggest that differences between stakeholders regarding the expectations of gendered robots should be resolved to encourage their acceptance in a specific field.
{"title":"Differences in stakeholders’ expectations of gendered robots in the field of psychotherapy: an exploratory survey","authors":"Tatsuya Nomura, Tomohiro Suzuki, Hirokazu Kumazaki","doi":"10.1007/s00146-023-01787-x","DOIUrl":"https://doi.org/10.1007/s00146-023-01787-x","url":null,"abstract":"Abstract In the present study, qualitative and quantitative studies were conducted to explore differences between stakeholders in expectations of gendered robots, with a focus on their specific application in the field of psychotherapy. In Study I, semi-structured interviews were conducted with 18 experts in psychotherapy to extract categories of opinions regarding the use of humanoid robots in the field. Based on these extracted categories, in Study II, an online questionnaire survey was conducted to compare concrete expectations of the use of humanoid robots in psychotherapy between 50 experts and 100 nonexperts in psychotherapy. The results revealed that compared with the female participants, the male participants tended to prefer robots with a female appearance. In addition, compared with the experts, the nonexperts tended not to relate the performance of robots with their gender appearance, and compared with the other participant groups, the female expert participants had lower expectations of the use of robots in the field. These findings suggest that differences between stakeholders regarding the expectations of gendered robots should be resolved to encourage their acceptance in a specific field.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135350512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-06 DOI: 10.1007/s00146-023-01777-z
Johann Laux
Abstract: Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union’s Artificial Intelligence Act (“AIA”). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of, a decision made or supported by an AI. The taxonomy makes it possible to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.
{"title":"Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act","authors":"Johann Laux","doi":"10.1007/s00146-023-01777-z","DOIUrl":"https://doi.org/10.1007/s00146-023-01777-z","url":null,"abstract":"Abstract Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union’s Artificial Intelligence Act (“AIA”). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of a decision made or supported by an AI. The taxonomy allows to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate them at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135352026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Review of Carlos Montemayor’s The prospect of a humanitarian AI
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-05 DOI: 10.1007/s00146-023-01788-w
Jacob Browning
{"title":"Review of Carlos Montemayor’s The prospect of a humanitarian AI","authors":"Jacob Browning","doi":"10.1007/s00146-023-01788-w","DOIUrl":"https://doi.org/10.1007/s00146-023-01788-w","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134976742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Public perception of military AI in the context of techno-optimistic society
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-04 DOI: 10.1007/s00146-023-01785-z
Eleri Lillemäe, Kairi Talves, Wolfgang Wagner
{"title":"Public perception of military AI in the context of techno-optimistic society","authors":"Eleri Lillemäe, Kairi Talves, Wolfgang Wagner","doi":"10.1007/s00146-023-01785-z","DOIUrl":"https://doi.org/10.1007/s00146-023-01785-z","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135592409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Developing safer AI–concepts from economics to the rescue
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-02 DOI: 10.1007/s00146-023-01778-y
Pankaj Kumar Maskara
{"title":"Developing safer AI–concepts from economics to the rescue","authors":"Pankaj Kumar Maskara","doi":"10.1007/s00146-023-01778-y","DOIUrl":"https://doi.org/10.1007/s00146-023-01778-y","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135828891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence national strategy in a developing country
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-01 DOI: 10.1007/s00146-023-01779-x
Mona Nabil Demaidi
Abstract: Artificial intelligence (AI) national strategies provide countries with a framework for the development and implementation of AI technologies. Sixty countries worldwide have published AI national strategies, and more than 70% of them are developed countries. The approach of AI national strategies differs between developed and developing countries in several aspects, including scientific research, education, talent development, and ethics. This paper examined AI readiness assessment in a developing country (Palestine) to help develop and identify the main pillars of its AI national strategy. The AI readiness assessment was applied across the education, entrepreneurship, government, and research and development sectors in Palestine, and it also examined whether the legal framework is keeping pace with emerging technologies. The results revealed that Palestinians have low awareness of AI, that AI is barely used across several sectors, and that the legal framework is not keeping pace with emerging technologies. These results helped develop and identify the following five main pillars for Palestine’s AI national strategy: AI for Government, AI for Development, AI for Capacity Building in the private, public, technical, and governmental sectors, AI and the Legal Framework, and International Activities.
{"title":"Artificial intelligence national strategy in a developing country","authors":"Mona Nabil Demaidi","doi":"10.1007/s00146-023-01779-x","DOIUrl":"https://doi.org/10.1007/s00146-023-01779-x","url":null,"abstract":"Abstract Artificial intelligence (AI) national strategies provide countries with a framework for the development and implementation of AI technologies. Sixty countries worldwide published their AI national strategies. The majority of these countries with more than 70% are developed countries. The approach of AI national strategies differentiates between developed and developing countries in several aspects including scientific research, education, talent development, and ethics. This paper examined AI readiness assessment in a developing country (Palestine) to help develop and identify the main pillars of the AI national strategy. AI readiness assessment was applied across education, entrepreneurship, government, and research and development sectors in Palestine (case of a developing country). In addition, it examined the legal framework and whether it is coping with trending technologies. The results revealed that Palestinians have low awareness of AI. Moreover, AI is barely used across several sectors and the legal framework is not coping with trending technologies. The results helped develop and identify the following five main pillars that Palestine’s AI national strategy should focus on: AI for Government, AI for Development, AI for Capacity Building in the private, public and technical and governmental sectors, AI and Legal Framework, and international Activities.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135406529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transparency in AI
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-01 DOI: 10.1007/s00146-023-01786-y
Tolgahan Toy
{"title":"Transparency in AI","authors":"Tolgahan Toy","doi":"10.1007/s00146-023-01786-y","DOIUrl":"https://doi.org/10.1007/s00146-023-01786-y","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135407134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2