{"title":"在线调查中发现被调查者的欺骗和不感兴趣。使用面部表情分析的案例研究","authors":"R. W. Hammond, Claudia Parvanta, R. Zemen","doi":"10.1177/15245004221074403","DOIUrl":null,"url":null,"abstract":"Background Much social marketing research is done on-line recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. Focus of the Article We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). Importance to the Social Marketing Field This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of usable samples. Methods We recruited respondents through (Community) sources and through a well-known (Panel) vendor. Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quan/qual analysis, we divided 501 respondents (1503 observations) into three groups: (1) Those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FE) to test the likelihood a respondent fell into one of the three behavior groups. Results PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% was “attentive” (i.e., no disinterest). The Panel pool (N = 409) included 11% deceitful and 2% disinterested respondents. Over half of the PSAs change rank order when deceitful responses are included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and “locally made” PSAs. D&D responses clustered tightly to the mean values, believed to be an artefact of “professional” test taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) compared to Disinterested (51%) or Deceitful (41%) (ANOVA F = 195.6, p < .001). Models using “engagement” and specific Fes (“cheek raise and smirk”) distinguished Valid from D&D responses. Recommendations False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels with a trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided as well as procedures to clean data. Check data before compensating respondents! Limitations This was an accidental finding in a parent study. The study required computers which potentially biased the pool of survey respondents. 
The community pool is smaller than the panel group, limiting statistical power.","PeriodicalId":46085,"journal":{"name":"Social Marketing Quarterly","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2022-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis\",\"authors\":\"R. W. Hammond, Claudia Parvanta, R. Zemen\",\"doi\":\"10.1177/15245004221074403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background Much social marketing research is done on-line recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. Focus of the Article We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). Importance to the Social Marketing Field This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of usable samples. Methods We recruited respondents through (Community) sources and through a well-known (Panel) vendor. Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quan/qual analysis, we divided 501 respondents (1503 observations) into three groups: (1) Those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FE) to test the likelihood a respondent fell into one of the three behavior groups. Results PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% was “attentive” (i.e., no disinterest). The Panel pool (N = 409) included 11% deceitful and 2% disinterested respondents. Over half of the PSAs change rank order when deceitful responses are included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and “locally made” PSAs. D&D responses clustered tightly to the mean values, believed to be an artefact of “professional” test taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) compared to Disinterested (51%) or Deceitful (41%) (ANOVA F = 195.6, p < .001). Models using “engagement” and specific Fes (“cheek raise and smirk”) distinguished Valid from D&D responses. Recommendations False PE pretesting scores waste social marketing budgets and could have disastrous results. 
Risk can be reduced by using vetted panels with a trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided as well as procedures to clean data. Check data before compensating respondents! Limitations This was an accidental finding in a parent study. The study required computers which potentially biased the pool of survey respondents. The community pool is smaller than the panel group, limiting statistical power.\",\"PeriodicalId\":46085,\"journal\":{\"name\":\"Social Marketing Quarterly\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2022-02-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Social Marketing Quarterly\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/15245004221074403\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Marketing Quarterly","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/15245004221074403","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS","Score":null,"Total":0}
Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis
Background Much social marketing research is done on-line, recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives.

Focus of the Article We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting).

Importance to the Social Marketing Field This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee-added) recruitment in terms of usable samples.

Methods We recruited respondents through community sources (Community) and through a well-known panel vendor (Panel). Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to the PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quantitative/qualitative analysis, we divided 501 respondents (1,503 observations) into three groups: (1) those demonstrably watching the PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FEs) to test the likelihood that a respondent fell into one of the three behavior groups (a minimal analysis sketch follows this abstract).

Results PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% were "attentive" (i.e., showed no disinterest). The Panel pool (N = 409) included 11% Deceitful and 2% Disinterested respondents. Over half of the PSAs changed rank order when Deceitful responses were included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and "locally made" PSAs. D&D responses clustered tightly around the mean values, believed to be an artefact of "professional" test-taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) than the Disinterested (51%) or Deceitful (41%) groups (ANOVA F = 195.6, p < .001). Models using "engagement" and specific FEs ("cheek raise" and "smirk") distinguished Valid from D&D responses.

Recommendations False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels, with the trade-off that community sources may produce more authentically interested respondents. We provide ways to make surveys more tamper-evident, with and without webcam recording, as well as procedures for cleaning data. Check data before compensating respondents (a simple screening sketch is shown below)!

Limitations This was an incidental finding within a parent study. The study required computers, which may have biased the pool of survey respondents. The Community pool was smaller than the Panel group, limiting statistical power.
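The group comparison described in the Methods and Results can be illustrated with a minimal sketch: a one-way ANOVA of per-respondent attention (roughly, the share of viewing time the head was oriented toward the screen) across the Valid, Disinterested, and Deceitful groups. The attention values below are made-up placeholders, not the study's data; the group sizes and means are loosely modeled on the reported figures, and the reported result was F = 195.6, p < .001.

```python
# Minimal sketch (hypothetical data): one-way ANOVA of attention across
# the three behavior groups, analogous to the comparison reported above.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical per-respondent attention percentages for each group.
valid = rng.normal(loc=87.2, scale=8, size=200).clip(0, 100)
disinterested = rng.normal(loc=51.0, scale=12, size=60).clip(0, 100)
deceitful = rng.normal(loc=41.0, scale=15, size=80).clip(0, 100)

f_stat, p_value = f_oneway(valid, disinterested, deceitful)
print(f"One-way ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")
```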
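The recommendation to check data before compensating respondents could be operationalized as a simple pre-payment screen. The sketch below is illustrative only, not the authors' procedure: the column names and the 60% attention cutoff are assumptions chosen for the example.

```python
# Illustrative pre-compensation screen (assumed column names and threshold):
# flag respondents whose webcam-derived attention is low, or whose video
# record has not been manually reviewed, before approving payment.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": ["r01", "r02", "r03"],
    "attention_pct": [91.0, 38.5, 55.0],    # share of PSA time facing the screen
    "video_reviewed": [True, True, False],  # manual check of the webcam record
})

ATTENTION_CUTOFF = 60.0  # assumed threshold, not taken from the study

responses["flag_for_review"] = (
    (responses["attention_pct"] < ATTENTION_CUTOFF)
    | (~responses["video_reviewed"])
)
print(responses[["respondent_id", "flag_for_review"]])
```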