Authors: H. Helene, P. Joel
DOI: 10.6084/M9.FIGSHARE.858921.V2
Published: 2014-12-10 (arXiv: Quantitative Methods)
Statistical Detection of Potentially Fabricated Data
Scientific fraud is an increasingly vexing problem. Many current programs for fraud detection focus on image manipulation, while techniques for detection based on anomalous patterns that may be discoverable in the underlying numerical data receive much less attention, even though these techniques are often easy to apply. We employed three such techniques in a case study covering data sets from several hundred experiments. We compared patterns in the data sets of one research teaching specialist (RTS) to those of 9 other members of the same laboratory and of 3 outside laboratories. Applying two conventional statistical tests, along with a newly developed test for anomalous patterns in the triplicate data commonly produced in such research, to various data sets reported by the RTS led to repeated rejection of the hypothesis (often at p-levels well below 0.001) that the anomalous patterns in his data occurred by chance. This analysis emphasizes the importance of access to the raw data that form the basis of publications, reports and grant applications, both to evaluate the correctness of the conclusions and to demonstrate the utility of methods for detecting anomalous, and especially fabricated, numerical results.
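The abstract does not name the specific conventional tests used, but one widely used test of this kind is terminal-digit analysis: for honestly measured multi-digit counts, the last digit is expected to be approximately uniform on 0–9, whereas fabricated numbers often deviate from uniformity. A minimal sketch of such a test, written for this note and not taken from the paper, computes the chi-square statistic for terminal-digit uniformity (9 degrees of freedom; the null is rejected at the 0.05 level when the statistic exceeds roughly 16.92):

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of terminal (last) digits.

    Under the null hypothesis of honestly measured data, terminal
    digits of multi-digit counts are roughly uniform on 0-9.
    Fabricated values often show digit preferences that inflate
    this statistic (compare against chi-square with df = 9).
    """
    digits = [int(str(abs(int(v)))[-1]) for v in values]  # last digit of each value
    n = len(digits)
    expected = n / 10.0          # uniform expectation per digit
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# 100 values whose terminal digits are perfectly uniform: statistic is 0
uniform = [10 * i + d for i in range(10) for d in range(10)]
print(terminal_digit_chi2(uniform))   # 0.0

# 100 values all ending in 7: strong digit preference, statistic is 900
skewed = [10 * i + 7 for i in range(100)]
print(terminal_digit_chi2(skewed))    # 900.0, far above the ~16.92 cutoff
```

In practice one would restrict the test to digits that are effectively noise (e.g. the last digit of counts well above 100), since terminal digits of small or rounded values are not expected to be uniform even for honest data.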