
Latest publications in Human-Centered AI

How Do Rationalism and Empiricism Provide Sound Foundations?
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0002
B. Shneiderman
The contrast between AI and HCAI is a continuation of the 2,000-year-old clash between Aristotle’s rationalism, based on logical analyses, and Leonardo da Vinci’s empiricism, based on sensory exploration of the world. Both philosophies offer valuable insights and are worthy of understanding, so I apply rational thinking for its strengths, but I know that balancing it with an empirical outlook helps me see other possibilities that rely on observational strategies. Watching users of technology has always led me to fresh insights, so I am drawn to usability studies, interviews, naturalistic observations, and repeated weeks-long case studies with users doing their work, which complement the rationalist approach of controlled experiments in laboratory settings.
Citations: 0
Safety Culture through Business Management Strategies
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0020
B. Shneiderman
Past accidents and incidents often had narrow impacts, but today’s failures of massive technology-based interdependent organizations in globalized economies can have devastating effects for the health and economies of entire cities, regions, and continents. Safety cultures have been successful in medicine, transportation, and other industries, so they may be effective for HCAI. The methods include: (1) Leadership commitment to safety, (2) hiring and training oriented to safety, (3) extensive reporting of failures and near misses, (4) internal review boards for problems and future plans, and (5) alignment with industry standards and accepted best practices. Safety cultures are never perfect, but they can improve existing processes and reduce costs, in part by preventing damage and injury.
Citations: 0
Government Interventions and Regulations
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0022
B. Shneiderman
While many industry leaders argue that government interventions and regulation will limit innovation, this is not always true. Government regulation of automobile safety and fuel efficiency was an enormous stimulant to innovation, producing benefits for the public and the car manufacturing companies. Government projects and funding of academic research can do much to accelerate development of new technologies such as HCAI. International competition to lead AI and HCAI research generates large national investments directed at national priorities like security, healthcare, business success, and societal problems. European efforts lead the way in protecting the right to an explanation, triggering extensive research in explainable AI, while China’s Social Credit System tracks individual reputations. Although past US White House reports sought to avoid regulation, federal agencies have increasingly found useful ways to regulate and support key industries.
Citations: 0
Science and Innovation Goals
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0012
B. Shneiderman
The science goal is to understand computational agents and human perceptual, cognitive, and motor abilities so as to build computers that perform tasks as well as or better than humans. The innovation goal, which some would call the engineering goal, drives researchers to develop widely used products and services by applying HCAI methods. Lewis Mumford’s early writings are a guide to getting beyond the “obstacle of animism.” Successful designers avoid mimicking human models and pursue supertools, tele-bots, active appliances, and control centers that support human control over technology, while ensuring high levels of automation. Another goal is to support human collaboration through shared documents and conferencing systems that bring people together.
Citations: 0
Trustworthy Certification by Independent Oversight
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0021
B. Shneiderman
The key to independent oversight is to support the legal, moral, and ethical principles of human or organizational responsibility and liability for their products and services. Three common forms of independent oversight are planning oversight, continuous monitoring, and retrospective analysis of disasters. These may be carried out by accounting firms, insurance companies, non-governmental and civil society organizations, and professional organizations and research institutes. Skeptics delight in pointing out that each of these approaches is imperfect, but the evidence shows that overall they are helpful in promoting trustworthy systems by way of continuous improvement to methods and improved training for leaders and staff members. Use of these approaches can be a competitive advantage that is featured in marketing new products and services.
Citations: 0
Social Robots and Active Appliances
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0016
B. Shneiderman
This chapter covers the many attempts by science goal advocates, over hundreds of years, to build social robots, which have attracted widespread interest. At the same time, active appliances, mobile devices, and kiosks are widespread consumer successes. Innovation goal champions prefer designs that are seen as steerable instruments, which increase flexibility or mobility while being expendable in rescue, disaster, and military situations. The combined design could be to start with human-like services that have proven acceptance, such as voice-operated virtual assistants. These services could be embedded in active appliances that give users control of features that are important to them. Innovation goal thinking also leads to better-than-human performance in active appliances, such as 4-wheeled or treaded robots that provide mobility over rough terrain or through floods, maneuverability in tight spaces, and heavy lifting capacity. Active appliances can also have superhuman sensors, such as infrared cameras or sensitive microphones, and specialized effectors, such as drills on Mars Rovers and cauterizing tools on surgical robots.
Citations: 0
Reliable Systems Based on Sound Software Engineering Practices
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0019
B. Shneiderman
Reliable HCAI systems are produced by applying sound technical practices to software engineering teams. These technical practices clarify human responsibility, such as audit trails for accurate records of who did what and when, and histories of who contributed to design, coding, testing, and revisions. Other technical practices are improved software engineering workflows that are tuned to the tasks and application domain. Then when prototype systems are ready, verification and validation testing of the programs, and bias testing of the training data can begin. Software engineering practices also include the user experience design processes that lead to explainable user interfaces for HCAI systems.
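The audit-trail idea can be made concrete with a small sketch. The Python fragment below is an illustrative assumption rather than a design from the chapter: the record fields, class names, and JSON-lines storage are invented for the example, and a real HCAI system would add tamper protection and retention policies.

```python
# Minimal sketch of an audit trail: an append-only log of who did what and when,
# so that responsibility for design, coding, testing, and operation can be traced.
# Field names, class names, and the JSON-lines format are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AuditRecord:
    actor: str    # who acted: a developer, an operator, or a model version
    action: str   # what was done, e.g. "retrained_model" or "overrode_recommendation"
    target: str   # what the action applied to: a case ID, dataset, or component
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditTrail:
    """Append-only log; records are written as JSON lines and never edited in place."""

    def __init__(self, path: str):
        self.path = path

    def record(self, actor: str, action: str, target: str) -> None:
        entry = AuditRecord(actor, action, target)
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(entry)) + "\n")


# Example: log a retraining step and a human override of a recommendation.
trail = AuditTrail("hcai_audit.log")
trail.record("data-team", "retrained_model", "credit_model_v7")
trail.record("loan_officer_42", "overrode_recommendation", "application_1138")
```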
Citations: 0
Intelligent Agents and Supertools
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0013
B. Shneiderman
Those who pursue the science goal build cognitive computers that they describe as smart, intelligent, knowledgeable, and capable of thinking. The resulting human-like products may carry out narrow tasks successfully, but these designs can exacerbate the distrust, fears, and anxiety that many users have about their computers. The innovation goal community believes that computers are best designed to be supertools that amplify, augment, empower, and enhance humans. Supertools suggest human control of potent devices that amplify human performance. The combined strategy could be to design familiar HCAI user interfaces with AI technologies for services such as text messaging suggestions and internal operations to transmit optimally across complex networks.
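One way to picture the supertool framing for text messaging suggestions is a flow in which the AI only proposes and the person decides. The sketch below is a minimal illustration under assumptions not in the abstract: suggest_completions is a stand-in for a predictive model, and the canned phrases and function names are invented for the example.

```python
# Sketch of a supertool-style suggestion flow: the AI proposes, the person decides.
# suggest_completions is a placeholder for a real predictive model; the canned
# phrases and function names are assumptions made for this illustration.
from typing import List, Optional


def suggest_completions(draft: str) -> List[str]:
    """Stand-in for a model that proposes short continuations of a draft message."""
    canned = {
        "See you": ["See you soon!", "See you at 3pm.", "See you there."],
        "Thanks": ["Thanks so much!", "Thanks, that works for me."],
    }
    for prefix, options in canned.items():
        if draft.startswith(prefix):
            return options
    return []


def compose_message(draft: str, chosen_index: Optional[int]) -> str:
    """The user stays in control: a suggestion is used only if explicitly chosen."""
    suggestions = suggest_completions(draft)
    if chosen_index is not None and 0 <= chosen_index < len(suggestions):
        return suggestions[chosen_index]
    return draft  # no choice made, so the user's own text is sent unchanged


print(compose_message("See you", chosen_index=1))           # user picked a suggestion
print(compose_message("Running late", chosen_index=None))   # AI stays silent
```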
Citations: 0
Defining Reliable, Safe, and Trustworthy Systems
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0007
B. Shneiderman
While machine autonomy remains a popular goal, the goal of human autonomy should remain equally strong in designers’ minds. Machine and human autonomy are both valuable in certain contexts, but a combined strategy uses automation when it is reliable and human control when it is necessary. To guide design improvements it will be helpful to focus on the attributes that make HCAI systems reliable, safe, and trustworthy. Designers of reliable, safe, and trustworthy systems will also promote resilience, clarify responsibility, increase quality, and encourage creativity. Still broader goals are to ensure privacy, increase cybersecurity, support social justice, and protect the environment.
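The combined strategy can be illustrated with a small routing sketch: automate when the prediction is reliable enough, and hand control to a person when it is not. This is a minimal sketch under stated assumptions, not a method from the chapter; the classifier stub, threshold value, and labels are invented for the example.

```python
# Sketch of the combined strategy: automation when it is reliable, human control
# when it is necessary. The classify stub, threshold, and labels are assumptions.
from typing import Tuple


def classify(document: str) -> Tuple[str, float]:
    """Stand-in for a model returning a label and a confidence score in [0, 1]."""
    return ("approve", 0.72)


def decide(document: str, confidence_threshold: float = 0.9) -> str:
    label, confidence = classify(document)
    if confidence >= confidence_threshold:
        # Reliable enough: act automatically, keeping a record for later review.
        return f"automated:{label}"
    # Not reliable enough: escalate so that a person makes the call.
    return "escalated_to_human_review"


print(decide("loan application #1138"))  # confidence 0.72 < 0.9, so a person decides
```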
Citations: 1
Are People and Computers in the Same Category?
Pub Date : 2022-01-13 DOI: 10.1093/oso/9780192845290.003.0003
B. Shneiderman
A second contrast between AI and HCAI advocates is the issue of whether people are in the same category as computers or if they are distinct. The Stanford University AI-100 report states that “the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality,” which suggests that humans and computers are in the same category. Some researchers claim that AI technologies do even more than empower people; these new technologies are the creators themselves. In contrast, many HCAI sympathizers believe that “People are not computers. Computers are not people.” Humans have bodies. Having a body makes you human. It puts us in touch with pain and pleasure, with sadness and joy. Crying and laughing, dancing and eating, lovemaking and thinking are all parts of being human. Emotions and passions are worth celebrating and fearing.
Citations: 0