Research funder mandates, such as those from the U.S. National Science Foundation (2011), the Canadian Tri-Agency (draft, 2018), and the UK Economic and Social Research Council (2018), now often include requirements for data curation, including, where possible, data sharing in an approved archive. Data curators need to be prepared for the possibility that researchers who have not previously shared data will need assistance with cleaning and depositing datasets so that they can meet these requirements and maintain funding. Data de-identification or anonymization is a major ethical concern where survey data are to be shared, and one that data professionals may find themselves ill-equipped to address. This article provides an accessible and practical introduction to the theory and concepts behind data anonymization and risk assessment, describes two case studies in which these methods were carried out on actual datasets requiring anonymization, and discusses some of the difficulties encountered. Much of the literature on statistical risk assessment of anonymized data is abstract and aimed at computer scientists and mathematicians, while material aimed at practitioners often does not consider more recent developments in the theory of data anonymization. We hope this article will help bridge that gap.
{"title":"Mathematics, risk, and messy survey data","authors":"Kristi Thompson, C. Sullivan","doi":"10.29173/iq979","DOIUrl":"https://doi.org/10.29173/iq979","url":null,"abstract":"Research funder mandates, such as those from the U.S. National Science Foundation (2011), the Canadian Tri-Agency (draft, 2018), and the UK Economic and Social Research Council (2018) now often include requirements for data curation, including where possible data sharing in an approved archive. Data curators need to be prepared for the potential that researchers who have not previously shared data will need assistance with cleaning and depositing datasets so that they can meet these requirements and maintain funding. Data de-identification or anonymization is a major ethical concern in cases where survey data is to be shared, and one which data professionals may find themselves ill-equipped to deal with. This article is intended to provide an accessible and practical introduction to the theory and concepts behind data anonymization and risk assessment, will describe a couple of case studies that demonstrate how these methods were carried out on actual datasets requiring anonymization, and discuss some of the difficulties encountered. Much of the literature dealing with statistical risk assessment of anonymized data is abstract and aimed at computer scientists and mathematicians, while material aimed at practitioners often does not consider more recent developments in the theory of data anonymization. We hope that this article will help bridge this gap.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47316922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As a social science data archive, we focus on collecting and archiving research data. However, data archiving brings further responsibilities: cooperation on international social surveys (ISSP, ESS), supporting secondary data analysis, and much more. A significant part of our work is communicating with students and researchers and educating them about data management and data analysis. Although the relationship we have with them is functional and seems sufficient, we tend to ask ourselves: who are the data archive users and what do they expect from us? We decided to employ user-centered design methods and tools to define a typical user of our services and to find out what their motivations for using our data archive are and which specific functions they use and (do not) appreciate, so that we would have a better image of their needs. Moreover, we wondered about the role of open science and its impact on users' needs and the future requirements arising from the open science environment. The information obtained is a point of departure for redesigning archival services to satisfy the new demands our users have regarding more data resources, new techniques of scientific work, and better interconnection between different platforms.
{"title":"Sustainability through the liaison with data archive users","authors":"Michaela Kudrnáčová, Ilona Trtíková","doi":"10.29173/iq976","DOIUrl":"https://doi.org/10.29173/iq976","url":null,"abstract":"As a social science data archive, we focus on collecting research data and archiving it. However, there are more responsibilities that come with data archiving: cooperation on international social surveys (ISSP, ESS), supporting secondary data analysis and much more. Significant part of our work is to communicate with students and researchers, to educate them about data management and data analysis. Although the relationship we have is functional and seems sufficient, we tend to ask ourselves: who are the data archive users and what do they expect from us?\u0000We decided to employ user-centered design methods and tools to define a typical user of our services and to find out what their motivations for using our data archive are and what specific functions they use and (do not) appreciate, so we would have a better image of their needs. Moreover, we wondered about the role of open science and its impact on the users’ needs and future requirements arising from the open science environment. Obtained information is a point of departure for redesigning archival services to satisfy new demands our users have regarding more data resources, new techniques of scientific work and better interconnection between different platforms.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47028009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dataset accompanying a thesis is a valuable intellectual property asset, both for the PhD student, who can procure employment and build publications and research grants from the work for years to come, and for the university, which owns the data and has invested in the work. However, the dataset has generally not been captured as a finished product in the same manner as the published thesis. A course has been developed that walks PhD students through the process of identifying an archival dataset, selecting a repository or long-term storage location, creating metadata and documentation for the data package, and depositing the data. A pre- and post-assessment has been designed to ascertain the level of data literacy the students gain through curating their own dataset. PIs for the projects have input into the repositories and metadata standards selected. The university thesis office was consulted as the course was developed, so that accurate procedures and practices are reflected throughout the course. This first-of-its-kind class is open to students of any discipline at a Research-1 university, and the resulting mixture of data types creates a unique course every time it is offered.
{"title":"Capturing their “first” dataset: A graduate course to walk PhD students through the curation of their dissertation data","authors":"Megan Sapp Nelson, N. Kong","doi":"10.29173/iq971","DOIUrl":"https://doi.org/10.29173/iq971","url":null,"abstract":"The data set accompanying theses is a valuable intellectual property asset, both from the viewpoint of the PhD student, who can procure employment and build publications and research grants from the work for years to come, and the university, which owns the data and has invested in the work. However, the data set has generally not been captured as a finished product in a similar manner to the published thesis. A course has been developed which walks PhD students through the process of identifying an archival data set, selecting a repository or long term storage location, creating metadata and documentation for the data package, and the deposit process. A pre- and post assessment has been designed to ascertain the level of data literacy the students gain through curating their own dataset. PIs for the projects have input into the repositories and metadata standards selected. The university thesis office was consulted as the course was developed, so that accurate procedures and practices are reflected throughout the course. This first of a kind class is open to students of any discipline at a Research-1 university. The resulting mixture of data types creates a unique course every time it is offered.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45324013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
André Förster, Kerrin Borschewski, S. Bolton, Taina Jääskeläinen
Accompanying the growing importance of research data management, the provision and maintenance of metadata – understood as data about (research) data – have taken on a key role in contextualizing, understanding, and preserving research data. Acknowledging the importance of metadata in the social sciences, the Consortium of European Social Science Data Archives started the Metadata Office project in 2019. This project report presents the various activities of the Metadata Office (MDO). Metadata models, schemas, controlled vocabularies, and thesauri are covered, including the MDO's collaboration with the DDI Alliance on multilingual translations of DDI vocabularies for CESSDA Service Providers. The report also summarizes the communication, training, and advice provided by the MDO, including DDI use across CESSDA, illustrates the impact of the project for the social science and research data management community, and offers an outline of the project's future plans.
{"title":"The matter of meta in research data management: Introducing the CESSDA Metadata Office Project","authors":"André Förster, Kerrin Borschewski, S. Bolton, Taina Jääskeläinen","doi":"10.29173/iq970","DOIUrl":"https://doi.org/10.29173/iq970","url":null,"abstract":"Accompanying the growing importance of research data management, the provision and maintenance of metadata – understood as data about (research) data – have obtained a key role in contextualizing, understanding, and preserving research data. Acknowledging the importance of metadata in the social sciences, the Consortium of European Social Science Data Archives started the Metadata Office project in 2019. This project report presents the various activities of the Metadata Office (MDO). Metadata models, schema, controlled vocabularies and thesauri are covered, including the MDO’s collaboration with the DDI Alliance on multilingual translations of DDI vocabularies for CESSDA Service Providers. The report also summarizes the communication, training and advice provided by MDO, including DDI use across CESSDA, illustrates the impact of the project for the social science and research data management community, and offers an outline regarding future plans of the project.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42000753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
George Alter, Darrell Donakowski, J. Gager, P. Heus, Carson Hunter, Sanda Ionescu, J. Iverson, H. Jagadish, C. Lagoze, Jared Lyle, Alexander Mueller, Sigbjørn Revheim, M. Richardson, Ørnulf Risnes, Karunakara Seelam, Dan J. Smith, T. Smith, Jie Song, Y. Vaidya, Ole Voldsater
Structured Data Transformation Language (SDTL) provides structured, machine-actionable representations of data transformation commands found in statistical analysis software. The Continuous Capture of Metadata for Statistical Data Project (C2Metadata) created SDTL as part of an automated system that captures provenance metadata from data transformation scripts and adds variable derivations to standard metadata files. SDTL also has potential for auditing scripts and for translating scripts between languages. SDTL is expressed in a set of JSON schemas, which are machine actionable and easily serialized to other formats. Statistical software languages have a number of special features that have been carried into SDTL. We explain how SDTL handles differences among statistical languages and complex operations, such as merging files and reshaping data tables from “wide” to “long”.
{"title":"Provenance metadata for statistical data: An introduction to Structured Data Transformation Language (SDTL)","authors":"George Alter, Darrell Donakowski, J. Gager, P. Heus, Carson Hunter, Sanda Ionescu, J. Iverson, H. Jagadish, C. Lagoze, Jared Lyle, Alexander Mueller, Sigbjørn Revheim, M. Richardson, Ørnulf Risnes, Karunakara Seelam, Dan J. Smith, T. Smith, Jie Song, Y. Vaidya, Ole Voldsater","doi":"10.29173/iq983","DOIUrl":"https://doi.org/10.29173/iq983","url":null,"abstract":"Structured Data Transformation Language (SDTL) provides structured, machine actionable representations of data transformation commands found in statistical analysis software. The Continuous Capture of Metadata for Statistical Data Project (C2Metadata) created SDTL as part of an automated system that captures provenance metadata from data transformation scripts and adds variable derivations to standard metadata files. SDTL also has potential for auditing scripts and for translating scripts between languages. SDTL is expressed in a set of JSON schemas, which are machine actionable and easily serialized to other formats. Statistical software languages have a number of special features that have been carried into SDTL. We explain how SDTL handles differences among statistical languages and complex operations, such as merging files and reshaping data tables from “wide” to “long”. ","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47288015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on them.

In recent years, efforts “to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research” have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice.

We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, established role in data management and preservation, and more. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:

Center for Open Science (COS) / Open Science Framework (OSF)[i]
Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
CUrating for REproducibility (CURE)[iii]
Project TIER[iv]
Data Curation Network[v]
UK Reproducibility Network[vi]

While many new initiatives have launched in recent years, the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) in well-established consortia such as the Inter-university Consortium for Political and Social Research (ICPSR) well before the phrase “reproducibility crisis” came into common use and before Ioannidis published the essay “Why Most Published Research Findings Are False” (Ioannidis, 2005).

The articles in this issue address several very important aspects of reproducible research:

Identification of barriers to reproducibility and solutions to such barriers
Evidence synthesis as related to transparent reporting and reproducibility
Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it.

The issue begins with “Reproducibility literature analysis”, which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis.

The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library.

Systematic reviews are a research approach that involves the quantitative and/or qualitative…
{"title":"Advocating for reproducibility","authors":"H. Dekker, Amy Riegelman","doi":"10.29173/iq982","DOIUrl":"https://doi.org/10.29173/iq982","url":null,"abstract":"As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on these issues. \u0000In recent years, efforts “to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research” have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice. \u0000We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, and established role in data management and preservation, and more. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts: \u0000 \u0000Center for Open Science (COS) / Open Science Framework (OSF)[i] \u0000Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii] \u0000CUrating for REproducibility (CURE)[iii] \u0000Project Tier[iv] \u0000Data Curation Network[v] \u0000UK Reproducibility Network[vi] \u0000 \u0000While many new initiatives have launched in recent years, prior to the now commonly used phrase “reproducibility crisis” and Ioannidis publishing the essay, “Why Most Published Research Findings are False,” we know that the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) in wellestablished consortiums such as Inter-university Consortium for Political and Social Research (ICPSR) (Ioannidis, 2005). \u0000The articles in this issue comprise several very important aspects of reproducible research: \u0000 \u0000Identification of barriers to reproducibility and solutions to such barriers \u0000Evidence synthesis as related to transparent reporting and reproducibility \u0000Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it. \u0000 \u0000The issue begins with “Reproducibility literature analysis” which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis. \u0000The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse instances and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library. 
\u0000Systematic reviews are a research approach that involves the quantitative and/or qualitati","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43698988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Antognoli, R. Avila, J. Sears, L. Christiansen, J. Tieman, Jacquelyn Hart
This article examines a cross-section of literature and other resources to reveal common reproducibility issues faced by stakeholders regardless of subject area or focus. We identify a variety of issues named as reproducibility barriers and the solutions proposed for them, and reflect on how researchers and information professionals can act to address the ‘reproducibility crisis.’ The finished products of this work include an annotated list of 122 published resources and a primer that identifies and defines key concepts from the resources that contribute to the crisis.
{"title":"Reproducibility literature analysis - a federal information professional perspective","authors":"E. Antognoli, R. Avila, J. Sears, L. Christiansen, J. Tieman, Jacquelyn Hart","doi":"10.29173/iq967","DOIUrl":"https://doi.org/10.29173/iq967","url":null,"abstract":"This article examines a cross-section of literature and other resources to reveal common reproducibility issues faced by stakeholders regardless of subject area or focus. We identify a variety of issues named as reproducibility barriers, the solutions to such barriers, and reflect on how researchers and information professionals can act to address the ‘reproducibility crisis.’ The finished products of this work include an annotated list of 122 published resources and a primer that identifies and defines key concepts from the resources that contribute to the crisis.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46110142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper illustrates a large research university library's experience in reusing data, collected both within and outside of the library, for research, in order to demonstrate data reuse practice. The purpose of the paper is to 1) demonstrate when and how data are reused in a large public research university library, 2) share tips on what to consider when reusing data, and 3) share challenges and lessons learned from data reuse experiences. The paper presents five proposed opportunities for data reuse, pursued by three researchers at the institution's library, which resulted in three successful and two failed instances of data reuse. Learning from both successful and failed experiences is critical to understanding what works and what does not in order to identify best practices for data reuse. This paper will be helpful for librarians who intend to reuse data for publication.
{"title":"Learning from data reuse: successful and failed experiences in a large public research university library","authors":"J. Scoulas","doi":"10.29173/iq966","DOIUrl":"https://doi.org/10.29173/iq966","url":null,"abstract":"This paper illustrates a large research university library experience in reusing the data for research collected both within and outside of the library to demonstrate data reuse practice. The purpose of the paper is to 1) demonstrate when and how data are reused in a large public research university library, 2) share tips on what to consider when reusing data, and 3) share challenges and lessons learned from data reuse experiences. This paper presents five proposed opportunities for data reuse conducted by three researchers at the institution’s library which resulted in three successful instances of data reuses and two failed data reuses. Learning from successful and failed experiences is critical to understand what works and what does not work in order to identify best practices for data reuse. This paper will be helpful for librarians who intend to reuse data for publication.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48328845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent discussions and research in psychology show a significant emphasis on reproducibility. Concerns for reproducibility pertain to methods as well as results. We evaluated the reporting of the electronic search methods used for systematic reviews (SR) published in psychology; such reports are key for determining the reproducibility of electronic searches. The use of SR has been increasing in psychology, and we report on the status of reporting of electronic searches in recent psychology SR. We used 12 checklist items to evaluate the reporting of basic electronic search strategies. Kappa results for the items developed from evidence-based recommendations ranged from fair to almost perfect. Additionally, using a set of those items to represent a “PRISMA” type of recommended reporting, we found that only one of the 25 randomly selected psychology SR from 2009-2012 reported the recommended information for all items in the set, and none of the 25 psychology SR from 2014-2016 did so. Using a second, less stringent set of items, we found that only 36% of the psychology SR reported basic information that supports confidence in the reproducibility of electronic searches. Similar results were found for a set of psychology SR published in 2017. An area for improvement in psychology SR involves fuller and clearer reporting of the steps used for electronic searches. Such improvements will provide a strong basis for confidence in the reproducibility of searches. That confidence, in turn, can strengthen reader confidence more generally in the results and conclusions reached in psychology SR.
{"title":"Methods reporting that supports reader confidence for systematic reviews in psychology: assessing the reproducibility of electronic searches and first-level screening decisions.","authors":"P. Fehrmann, M. Mamolen","doi":"10.29173/iq968","DOIUrl":"https://doi.org/10.29173/iq968","url":null,"abstract":"Recent discussions and research in psychology show a significant emphasis on reproducibility. Concerns for reproducibility pertain to methods as well as results. We evaluated the reporting of the electronic search methods used for systematic reviews (SR) published in psychology. Such reports are key for determining the reproducibility of electronic searches. The use of SR has been increasing in psychology, and we report on the status of reporting of electronic searches in recent SR in psychology. \u0000We used 12 checklist items to evaluate reporting for basic electronic strategies. Kappa results for those items developed from evidence-based recommendations ranged from fair to almost perfect. Additionally, using a set of those items to represent a “PRISMA” type of recommended reporting showed that only one of the 25 randomly selected psychology SR from 2009-2012 reported recommended information for all items in the set, and none of the 25 psychology SR from 2014-2016 did so. Using a second less stringent set of items found that only 36% of the psychology SR reported basic information that supports confidence in the reproducibility of electronic searches. Similar results were found for a set of psychology SR published in 2017. \u0000An area for improvements in SR in psychology involves fuller and clearer reporting of the steps used for electronic searches in SR. Such improvements will provide a strong basis for confidence in the reproducibility of searches. That confidence, in turn, can strengthen reader confidence more generally in the results and conclusions reached in SR in psychology.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47970993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Hettne, Ricarda K. K. Proppert, L. Nab, L. P. R. Saunero, Daniela Gawehns
University libraries play a crucial role in moving towards Open Science, contributing to more transparent, reproducible, and reusable research. The Center for Digital Scholarship (CDS) at Leiden University (LU) Library is a scholarly lab that promotes open science literacy among Leiden's scholars through two complementary strategies: existing top-down structures are used to provide training and services, while bottom-up initiatives from the research community are actively supported with the CDS's expertise and facilities. An example of how bottom-up initiatives can blossom with the help of library structures such as the CDS is ReproHack. ReproHack – a reproducibility hackathon – is a grass-roots initiative by young scholars with the goal of improving research reproducibility in three ways. First, hackathon attendees learn about reproducibility tools and challenges by reproducing published results and providing feedback to authors on their attempt. Second, authors can nominate their work and receive feedback on their reproducibility efforts. Third, the collaborative atmosphere helps build a community interested in making their own research reproducible. The first ReproHack in the Netherlands took place on November 30th, 2019, co-organised by the CDS at the LU Library, with 44 participants from the fields of psychology, engineering, biomedicine, and computer science. For 19 papers, 24 feedback forms were returned, and five papers were reported as successfully reproduced. Besides the researchers' learning experience, the event led to recommendations on how to enhance research reproducibility. The ReproHack format therefore provides an opportunity for libraries to improve scientific reproducibility through community engagement.
{"title":"ReprohackNL 2019: how libraries can promote research reproducibility through community engagement","authors":"K. Hettne, Ricarda K. K. Proppert, L. Nab, L. P. R. Saunero, Daniela Gawehns","doi":"10.31235/osf.io/6f4zv","DOIUrl":"https://doi.org/10.31235/osf.io/6f4zv","url":null,"abstract":"University Libraries play a crucial role in moving towards Open Science, contributing to more transparent, reproducible and reusable research. The Center for Digital Scholarship (CDS) at Leiden University (LU) library is a scholarly lab that promotes open science literacy among Leiden’s scholars by two complementary strategies: existing top-down structures are used to provide training and services, while bottom-up initiatives from the research community are actively supported by offering the CDS’s expertise and facilities. An example of how bottom-up initiatives can blossom with the help of library structures such as the CDS is ReproHack. ReproHack – a reproducibility hackathon – is a grass-root initiative by young scholars with the goal of improving research reproducibility in three ways. First, hackathon attendees learn about reproducibility tools and challenges by reproducing published results and providing feedback to authors on their attempt. Second, authors can nominate their work and receive feedback on their reproducibility efforts. Third, the collaborative atmosphere helps building a community interested in making their own research reproducible. \u0000A first ReproHack in the Netherlands took place on November 30th, 2019, co-organised by the CDS at the LU Library with 44 participants from the fields of psychology, engineering, biomedicine, and computer science. For 19 papers, 24 feedback forms were returned and five papers were reported as successfully reproduced. Besides the researchers’ learning experience, the event led to recommendations on how to enhance research reproducibility. The ReproHack format therefore provides an opportunity for libraries to improve scientific reproducibility through community engagement.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48489831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}