{"title":"设计和评估以人为中心的交互式机器学习的简要指南","authors":"Kory W. Mathewson, Patrick M. Pilarski","doi":"arxiv-2204.09622","DOIUrl":null,"url":null,"abstract":"Interactive machine learning (IML) is a field of research that explores how\nto leverage both human and computational abilities in decision making systems.\nIML represents a collaboration between multiple complementary human and machine\nintelligent systems working as a team, each with their own unique abilities and\nlimitations. This teamwork might mean that both systems take actions at the\nsame time, or in sequence. Two major open research questions in the field of\nIML are: \"How should we design systems that can learn to make better decisions\nover time with human interaction?\" and \"How should we evaluate the design and\ndeployment of such systems?\" A lack of appropriate consideration for the humans\ninvolved can lead to problematic system behaviour, and issues of fairness,\naccountability, and transparency. Thus, our goal with this work is to present a\nhuman-centred guide to designing and evaluating IML systems while mitigating\nrisks. This guide is intended to be used by machine learning practitioners who\nare responsible for the health, safety, and well-being of interacting humans.\nAn obligation of responsibility for public interaction means acting with\nintegrity, honesty, fairness, and abiding by applicable legal statutes. With\nthese values and principles in mind, we as a machine learning research\ncommunity can better achieve goals of augmenting human skills and abilities.\nThis practical guide therefore aims to support many of the responsible\ndecisions necessary throughout the iterative design, development, and\ndissemination of IML systems.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Brief Guide to Designing and Evaluating Human-Centered Interactive Machine Learning\",\"authors\":\"Kory W. Mathewson, Patrick M. Pilarski\",\"doi\":\"arxiv-2204.09622\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Interactive machine learning (IML) is a field of research that explores how\\nto leverage both human and computational abilities in decision making systems.\\nIML represents a collaboration between multiple complementary human and machine\\nintelligent systems working as a team, each with their own unique abilities and\\nlimitations. This teamwork might mean that both systems take actions at the\\nsame time, or in sequence. Two major open research questions in the field of\\nIML are: \\\"How should we design systems that can learn to make better decisions\\nover time with human interaction?\\\" and \\\"How should we evaluate the design and\\ndeployment of such systems?\\\" A lack of appropriate consideration for the humans\\ninvolved can lead to problematic system behaviour, and issues of fairness,\\naccountability, and transparency. Thus, our goal with this work is to present a\\nhuman-centred guide to designing and evaluating IML systems while mitigating\\nrisks. This guide is intended to be used by machine learning practitioners who\\nare responsible for the health, safety, and well-being of interacting humans.\\nAn obligation of responsibility for public interaction means acting with\\nintegrity, honesty, fairness, and abiding by applicable legal statutes. 
With\\nthese values and principles in mind, we as a machine learning research\\ncommunity can better achieve goals of augmenting human skills and abilities.\\nThis practical guide therefore aims to support many of the responsible\\ndecisions necessary throughout the iterative design, development, and\\ndissemination of IML systems.\",\"PeriodicalId\":501533,\"journal\":{\"name\":\"arXiv - CS - General Literature\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - General Literature\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2204.09622\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - General Literature","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2204.09622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Brief Guide to Designing and Evaluating Human-Centered Interactive Machine Learning
Interactive machine learning (IML) is a field of research that explores how to leverage both human and computational abilities in decision-making systems.
IML represents a collaboration between multiple complementary human and machine intelligent systems working as a team, each with its own unique abilities and limitations.
This teamwork might mean that both systems take actions at the same time, or in sequence.
Two major open research questions in the field of IML are: "How should we design systems that can learn to make better decisions over time with human interaction?" and "How should we evaluate the design and deployment of such systems?"
A lack of appropriate consideration for the humans involved can lead to problematic system behaviour and to issues of fairness, accountability, and transparency.
Thus, our goal with this work is to present a human-centred guide to designing and evaluating IML systems while mitigating risks.
This guide is intended for machine learning practitioners who are responsible for the health, safety, and well-being of interacting humans.
An obligation of responsibility for public interaction means acting with integrity, honesty, and fairness, and abiding by applicable legal statutes.
With these values and principles in mind, we as a machine learning research community can better achieve the goals of augmenting human skills and abilities.
This practical guide therefore aims to support many of the responsible decisions necessary throughout the iterative design, development, and dissemination of IML systems.
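
As a simplified illustration of the kind of system the abstract describes, a learner that improves its decisions over time through human interaction, the following sketch shows one possible sequential human-in-the-loop learning loop. It is not the authors' method; the class EpsilonGreedyLearner and the stand-in function get_human_feedback are hypothetical names introduced here for illustration only, with simulated feedback in place of a real person.

# Minimal sketch of a sequential interactive machine learning (IML) loop
# (hypothetical illustration, not the method proposed in the paper).

import random


class EpsilonGreedyLearner:
    """A simple multi-armed bandit learner that adapts from human feedback."""

    def __init__(self, n_actions: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions    # how often each action has been tried
        self.values = [0.0] * n_actions  # running estimate of human approval

    def choose_action(self) -> int:
        # Explore occasionally; otherwise exploit the best-rated action so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action: int, feedback: float) -> None:
        # Incremental mean update of the estimated value of the chosen action.
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]


def get_human_feedback(action: int) -> float:
    # Hypothetical stand-in for the human side of the team. In a deployed IML
    # system this would be a real interaction (a rating, a correction, a
    # demonstration); here we simulate a person who prefers action 2.
    preferred = 2
    return 1.0 if action == preferred else 0.0


if __name__ == "__main__":
    learner = EpsilonGreedyLearner(n_actions=4)
    for step in range(200):
        action = learner.choose_action()        # machine acts
        feedback = get_human_feedback(action)   # human responds
        learner.update(action, feedback)        # machine adapts
    print("Estimated values per action:", [round(v, 2) for v in learner.values])

In this sketch the human and machine act in sequence (machine proposes, human responds); the simultaneous-action case mentioned in the abstract would instead have both parties contribute to each decision before feedback is incorporated.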