Utility evaluation of models
J. Scholtz, Oriana Love, M. Whiting, Duncan Hodges, Lia Emanuel, D. Fraser
Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, November 10, 2014
DOI: 10.1145/2669557.2669562
In this paper, we present three case studies of utility evaluations of underlying models in software systems: a user model; technical and social models, both singly and in combination; and a research-based model for user identification. Each of the three cases used a different approach to evaluating the model, and each had challenges to overcome in designing and implementing the evaluation. We describe the methods we used and the challenges we faced in designing the evaluation procedures, summarize the lessons learned, enumerate considerations for those undertaking such evaluations, and present directions for future work.