Kristen Cibelli Hibben, Zachary Smith, Benjamin Rogers, Valerie Ryan, Paul Scanlon, Travis Hoppe
{"title":"开放文本调查数据的半自动无响应检测","authors":"Kristen Cibelli Hibben, Zachary Smith, Benjamin Rogers, Valerie Ryan, Paul Scanlon, Travis Hoppe","doi":"10.1177/08944393241249720","DOIUrl":null,"url":null,"abstract":"Open-ended survey questions can enable researchers to gain insights beyond more commonly used closed-ended question formats by allowing respondents an opportunity to provide information with few constraints and in their own words. Open-ended web probes are also increasingly used to inform the design and evaluation of survey questions. However, open-ended questions are more susceptible to insufficient or irrelevant responses that can be burdensome and time-consuming to identify and remove manually, often resulting in underuse of open-ended questions and, when used, potential inclusion of poor-quality data. To address these challenges, we developed and publicly released the Semi-Automated Nonresponse Detection for Survey text (SANDS), an item nonresponse detection approach based on a Bidirectional Transformer for Language Understanding model, fine-tuned using Simple Contrastive Sentence Embedding and targeted human coding, to categorize open-ended text data as valid or likely nonresponse. This approach is powerful in that it uses natural language processing as opposed to existing nonresponse detection approaches that have relied exclusively on rules or regular expressions or used bag-of-words approaches that tend to perform less well on short pieces of text, typos, or uncommon words, often prevalent in open-text survey data. This paper presents the development of SANDS and a quantitative evaluation of its performance and potential bias using open-text responses from a series of web probes as case studies. Overall, the SANDS model performed well in identifying a dataset of likely valid results to be used for quantitative or qualitative analysis, particularly on health-related data. 
Developed for generalizable use and accessible to others, the SANDS model can greatly improve the efficiency of identifying inadequate and irrelevant open-text responses, offering expanded opportunities for the use of open-text data to inform question design and improve survey data quality.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"12 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Semi-Automated Nonresponse Detection for Open-Text Survey Data\",\"authors\":\"Kristen Cibelli Hibben, Zachary Smith, Benjamin Rogers, Valerie Ryan, Paul Scanlon, Travis Hoppe\",\"doi\":\"10.1177/08944393241249720\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Open-ended survey questions can enable researchers to gain insights beyond more commonly used closed-ended question formats by allowing respondents an opportunity to provide information with few constraints and in their own words. Open-ended web probes are also increasingly used to inform the design and evaluation of survey questions. However, open-ended questions are more susceptible to insufficient or irrelevant responses that can be burdensome and time-consuming to identify and remove manually, often resulting in underuse of open-ended questions and, when used, potential inclusion of poor-quality data. To address these challenges, we developed and publicly released the Semi-Automated Nonresponse Detection for Survey text (SANDS), an item nonresponse detection approach based on a Bidirectional Transformer for Language Understanding model, fine-tuned using Simple Contrastive Sentence Embedding and targeted human coding, to categorize open-ended text data as valid or likely nonresponse. 
This approach is powerful in that it uses natural language processing as opposed to existing nonresponse detection approaches that have relied exclusively on rules or regular expressions or used bag-of-words approaches that tend to perform less well on short pieces of text, typos, or uncommon words, often prevalent in open-text survey data. This paper presents the development of SANDS and a quantitative evaluation of its performance and potential bias using open-text responses from a series of web probes as case studies. Overall, the SANDS model performed well in identifying a dataset of likely valid results to be used for quantitative or qualitative analysis, particularly on health-related data. Developed for generalizable use and accessible to others, the SANDS model can greatly improve the efficiency of identifying inadequate and irrelevant open-text responses, offering expanded opportunities for the use of open-text data to inform question design and improve survey data quality.\",\"PeriodicalId\":49509,\"journal\":{\"name\":\"Social Science Computer Review\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-05-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Social Science Computer Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/08944393241249720\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Science Computer 
Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/08944393241249720","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Semi-Automated Nonresponse Detection for Open-Text Survey Data
Open-ended survey questions can enable researchers to gain insights beyond more commonly used closed-ended question formats by allowing respondents to provide information with few constraints and in their own words. Open-ended web probes are also increasingly used to inform the design and evaluation of survey questions. However, open-ended questions are more susceptible to insufficient or irrelevant responses, which can be burdensome and time-consuming to identify and remove manually. This often results in underuse of open-ended questions and, when they are used, potential inclusion of poor-quality data. To address these challenges, we developed and publicly released the Semi-Automated Nonresponse Detection for Survey text (SANDS), an item nonresponse detection approach based on a Bidirectional Transformer for Language Understanding model, fine-tuned using Simple Contrastive Sentence Embedding and targeted human coding, to categorize open-ended text data as valid or likely nonresponse. This approach is powerful because it uses natural language processing, in contrast to existing nonresponse detection approaches that rely exclusively on rules or regular expressions, or that use bag-of-words approaches; both tend to perform less well on short pieces of text, typos, and uncommon words, all of which are prevalent in open-text survey data. This paper presents the development of SANDS and a quantitative evaluation of its performance and potential bias, using open-text responses from a series of web probes as case studies. Overall, the SANDS model performed well in identifying a dataset of likely valid responses to be used for quantitative or qualitative analysis, particularly on health-related data. Developed for generalizable use and accessible to others, the SANDS model can greatly improve the efficiency of identifying inadequate and irrelevant open-text responses, expanding opportunities to use open-text data to inform question design and improve survey data quality.
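To make the contrast concrete, the sketch below shows the kind of rule- and regular-expression-based nonresponse detector that the abstract describes as the existing baseline. The pattern list and function name are hypothetical illustrations, not taken from the SANDS paper; the point is that literal patterns catch only exact matches, so abbreviations and misspellings slip through, which is the brittleness that motivates a language-model-based classifier.

```python
import re

# Hypothetical rule/regex patterns of the kind used by baseline
# nonresponse detectors. Each pattern is matched against the full,
# lowercased, whitespace-stripped response.
NONRESPONSE_PATTERNS = [
    r"\s*",                        # empty or whitespace-only response
    r"\W+",                        # punctuation/symbols only ("???", "---")
    r"(n/?a|none|nothing)\.?",     # explicit non-answers
    r"(i\s+)?don'?t\s+know\.?",    # "don't know" and close variants
]

def is_likely_nonresponse(text: str) -> bool:
    """Flag a response as likely item nonresponse using literal rules."""
    t = text.strip().lower()
    return any(re.fullmatch(p, t) for p in NONRESPONSE_PATTERNS)
```

Note that a shorthand like "idk" is not flagged, even though a human coder (or an embedding-based model that places "idk" near "don't know" in sentence-vector space) would treat it as nonresponse; covering every such variant by hand is exactly the burden the abstract describes.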
Journal description:
Unique scope: Social Science Computer Review is an interdisciplinary journal covering social science instructional and research applications of computing, as well as societal impacts of information technology. Topics include: artificial intelligence, business, computational social science theory, computer-assisted survey research, computer-based qualitative analysis, computer simulation, economic modeling, electronic modeling, electronic publishing, geographic information systems, instrumentation and research tools, public administration, social impacts of computing and telecommunications, software evaluation, and World Wide Web resources for social scientists. Interdisciplinary nature: because the uses and impacts of computing are interdisciplinary, so is Social Science Computer Review. The journal is of direct relevance to scholars and scientists in a wide variety of disciplines. In its pages you'll find work in the following areas: sociology, anthropology, political science, economics, psychology, computer literacy, computer applications, and methodology.