Title: NCSA 2021 Presidential Address: Discovery, Disenchantment, and Recovery: Finding Sociology that Matters in Amish Country
Author: Rachel E. Stein
Journal: Sociological Focus (JCR Q2, Social Sciences)
DOI: 10.1080/00380237.2021.1987075 (https://doi.org/10.1080/00380237.2021.1987075)
Publication date: 2021-10-02 (Journal Article)
Citations: 4
Abstract
Popular text-matching software generates a percentage of similarity – called a “similarity score” or “Similarity Index” – that quantifies the matching text between a particular manuscript and content in the software’s archives, on the Internet, and in electronic databases. Many evaluators rely on these simple figures as a proxy for plagiarism and thus avoid the burdensome task of inspecting the longer Similarity Reports that show the matching in detail. Yet similarity scores, though alluringly straightforward, are never enough to judge the presence (or absence) of plagiarism. Ideally, evaluators should always examine the Similarity Reports. Given the persistent use of simplistic similarity score thresholds at some academic journals and educational institutions, however, and the time that can be saved by relying on the scores, a method is arguably needed that encourages examination of the Similarity Reports but also allows evaluators to choose to rely on the similarity scores in some instances. This article proposes a four-band method to accomplish this. Used together, the bands oblige evaluators to acknowledge the risk they take in relying on the similarity scores yet still allow them to ultimately determine whether they wish to accept that risk. The bands – for most rigor, high rigor, moderate rigor, and less rigor – should be tailored to an evaluator’s particular needs.
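The banding idea in the abstract can be sketched in code. This is a minimal illustration, not the article's actual method: the cutoff values below are invented assumptions (the article explicitly says bands should be tailored to the evaluator's needs), and the mapping of higher scores to more rigorous review is likewise an assumed reading.

```python
def rigor_band(similarity_score: float) -> str:
    """Map a similarity score (0-100) to one of four rigor bands.

    Cutoffs are hypothetical placeholders; an evaluator would set
    their own, and every band above "less rigor" implies examining
    the full Similarity Report rather than trusting the score alone.
    """
    if not 0 <= similarity_score <= 100:
        raise ValueError("similarity score must be between 0 and 100")
    if similarity_score < 10:   # assumed cutoff
        return "less rigor"     # evaluator may accept the risk of score-only review
    if similarity_score < 25:   # assumed cutoff
        return "moderate rigor" # spot-check the Similarity Report
    if similarity_score < 50:   # assumed cutoff
        return "high rigor"     # read the Similarity Report closely
    return "most rigor"         # full inspection of the Similarity Report
```

The point of structuring it this way is that the evaluator must pick explicit thresholds up front, making the risk of relying on scores a deliberate, documented choice rather than an unstated habit.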