{"title":"注释任务设计中的社会和伦理规范","authors":"Razvan Amironesei;Mark Díaz","doi":"10.1109/TTS.2024.3374639","DOIUrl":null,"url":null,"abstract":"The development of many machine learning (ML) and artificial intelligence (AI) systems depends on human-labeled data. Human-provided labels act as tags or enriching information that enable algorithms to more easily learn patterns in data in order to train or evaluate a wide range of AI systems. These annotations ultimately shape the behavior of AI systems. Given the scale of ML datasets, which can contain thousands to billions of data points, cost and efficiency play a major role in how data annotations are collected. Yet, important challenges arise between the goals of meeting scale-related needs while also collecting data in a way that reflects real-world nuance and variation. Annotators are typically treated as interchangeable workers who provide a ‘view from nowhere’. We question assumptions of universal ground truth by focusing on the social and ethical aspects that shape annotation task design.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"45-47"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Social and Ethical Norms in Annotation Task Design\",\"authors\":\"Razvan Amironesei;Mark Díaz\",\"doi\":\"10.1109/TTS.2024.3374639\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The development of many machine learning (ML) and artificial intelligence (AI) systems depends on human-labeled data. Human-provided labels act as tags or enriching information that enable algorithms to more easily learn patterns in data in order to train or evaluate a wide range of AI systems. These annotations ultimately shape the behavior of AI systems. Given the scale of ML datasets, which can contain thousands to billions of data points, cost and efficiency play a major role in how data annotations are collected. Yet, important challenges arise between the goals of meeting scale-related needs while also collecting data in a way that reflects real-world nuance and variation. Annotators are typically treated as interchangeable workers who provide a ‘view from nowhere’. 
We question assumptions of universal ground truth by focusing on the social and ethical aspects that shape annotation task design.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"5 1\",\"pages\":\"45-47\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10462544/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10462544/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Social and Ethical Norms in Annotation Task Design
The development of many machine learning (ML) and artificial intelligence (AI) systems depends on human-labeled data. Human-provided labels act as tags or enriching information that enable algorithms to more easily learn patterns in data in order to train or evaluate a wide range of AI systems. These annotations ultimately shape the behavior of AI systems. Given the scale of ML datasets, which can contain thousands to billions of data points, cost and efficiency play a major role in how data annotations are collected. Yet a tension arises between meeting scale-related needs and collecting data in a way that reflects real-world nuance and variation. Annotators are typically treated as interchangeable workers who provide a ‘view from nowhere’. We question assumptions of universal ground truth by focusing on the social and ethical aspects that shape annotation task design.
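To make the tension between scale and variation concrete, below is a minimal, hypothetical Python sketch (not from the article; the item IDs, labels, and helper names are illustrative). It contrasts majority-vote aggregation, which collapses all annotators into a single ‘ground truth’ label, with an alternative that preserves the distribution of judgments per item.

```python
from collections import Counter

# Hypothetical annotations: three annotators label the same comments
# for toxicity. Disagreement on "c2" may reflect genuine variation in
# judgment among annotators, not mere noise.
annotations = {
    "c1": ["toxic", "toxic", "toxic"],
    "c2": ["toxic", "not_toxic", "not_toxic"],
    "c3": ["not_toxic", "not_toxic", "toxic"],
}

def aggregate_majority(labels):
    """Collapse annotator judgments into one label.

    This is the conventional 'view from nowhere': minority views
    disappear from the released dataset.
    """
    return Counter(labels).most_common(1)[0][0]

def label_distribution(labels):
    """Preserve the full distribution of judgments for an item."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

for item_id, labels in annotations.items():
    print(item_id, aggregate_majority(labels), label_distribution(labels))
```

Releasing the distribution (or per-annotator labels) retains exactly the variation that majority voting discards when annotators are treated as interchangeable.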