{"title":"完成率难题:减少单一可用性度量的偏差","authors":"Carl J. Pearson","doi":"10.1177/21695067231194328","DOIUrl":null,"url":null,"abstract":"In the Single Usability Metric benchmarking method, the calculation of completion rates creates a bias for completion rates by ignoring the z-score transform that is conducted for satisfaction and time-on-task measures. This artificially inflates all ‘good’ scores and marks some ‘poor’ scores as ‘good’. This paper discusses two methods to augment the SUM so that it will accurately calculate completion rates into the final SUM score.","PeriodicalId":74544,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting. Human Factors and Ergonomics Society. Annual meeting","volume":"26 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Completion Rate Conundrum: Reducing bias in the Single Usability Metric\",\"authors\":\"Carl J. Pearson\",\"doi\":\"10.1177/21695067231194328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the Single Usability Metric benchmarking method, the calculation of completion rates creates a bias for completion rates by ignoring the z-score transform that is conducted for satisfaction and time-on-task measures. This artificially inflates all ‘good’ scores and marks some ‘poor’ scores as ‘good’. This paper discusses two methods to augment the SUM so that it will accurately calculate completion rates into the final SUM score.\",\"PeriodicalId\":74544,\"journal\":{\"name\":\"Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting. Human Factors and Ergonomics Society. Annual meeting\",\"volume\":\"26 3\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting. Human Factors and Ergonomics Society. Annual meeting\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/21695067231194328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting. Human Factors and Ergonomics Society. Annual meeting","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/21695067231194328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Completion Rate Conundrum: Reducing bias in the Single Usability Metric
In the Single Usability Metric (SUM) benchmarking method, completion rates are not subjected to the z-score transform applied to the satisfaction and time-on-task measures, which biases the composite score in favor of completion rates. This artificially inflates all ‘good’ scores and marks some ‘poor’ scores as ‘good’. This paper discusses two methods to augment the SUM so that completion rates are accurately incorporated into the final SUM score.
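To make the described bias concrete, the sketch below is a simplified, illustrative SUM-style composite in Python. It is not the paper's proposed correction and not the exact published SUM formula; it only assumes that satisfaction and time-on-task are standardized against a specification point and mapped to percentages via the normal CDF, while the completion rate enters the average as a raw proportion. All specification values, standard deviations, and data points are hypothetical.

```python
# Illustrative sketch only: a simplified SUM-style composite in which
# satisfaction and time-on-task are z-score standardized and mapped to
# percentages, while the completion rate is averaged in untransformed.
# Spec points, SDs, and measurements are invented for illustration.
from statistics import NormalDist

def z_to_pct(value, spec, sd, higher_is_better=True):
    """Standardize a measure against a spec point and map it to 0-100%."""
    z = (value - spec) / sd
    if not higher_is_better:  # e.g., time-on-task: lower is better
        z = -z
    return NormalDist().cdf(z) * 100

# Hypothetical task-level results
satisfaction_pct = z_to_pct(value=5.6, spec=5.0, sd=1.2)                 # ~69%
time_pct = z_to_pct(value=95, spec=100, sd=30, higher_is_better=False)   # ~57%
completion_pct = 0.90 * 100   # raw proportion, no z-score transform: 90%

# The untransformed completion rate sits near the top of its range while
# the standardized measures cluster closer to 50%, pulling the composite
# upward -- the inflation the abstract describes.
sum_score = (satisfaction_pct + time_pct + completion_pct) / 3
print(f"Composite SUM-style score: {sum_score:.1f}%")
```

In this hypothetical run the raw completion rate contributes roughly 90% while the standardized measures contribute far less, so the averaged score lands higher than it would if all three measures had been placed on the same standardized scale.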