Background: Multi-disciplinary and interdisciplinary research centers are central to the concept of collaborative science. A key challenge to their success is achieving "data sharing readiness", a foundation of scientific integrity that underpins policy decisions and directions. To achieve data sharing readiness, understanding researchers' level of data maturity (their data management practices and understanding) is critical to improving data management and the research outcomes that depend on it.
Methods: A mixed methods approach of survey, focus group, and interviews was undertaken to understand where participants in a soil research collaboration were in their data maturity. The survey included a combination of multiple-choice, open-ended, and closed-ended questions linked to a set of overarching data management topics. Voluntary participants included farmer groups, universities, industry partners, and state government agencies from Australia and New Zealand.
Results: Key findings were that researchers were largely unfamiliar with core concepts of data management and exhibited a low overall level of data maturity and data readiness. This was echoed in survey respondents' desire for further training, education, and support in data management.
Conclusion: Achieving data readiness and equipping researchers with a core set of data management skills will provide a pathway to high-quality and enduring research.
Background: Artificial intelligence (AI) is increasingly integrated into research, significantly challenging established scholarly norms around originality, contribution, and authorship. While policies are developing, there is a gap in understanding how individual researchers subjectively perceive and navigate these ambiguities in practice, impacting research integrity.
Methods: To explore researchers' perspectives on distinguishing human versus AI contributions, we conducted semi-structured interviews with 18 researchers (PhD students, postdoctoral researchers, and faculty) across diverse disciplines (STEM, social sciences, humanities). Data were analyzed via reflexive thematic analysis, informed by Attribution Theory.
Results: Researchers predominantly conceptualize AI as a sophisticated tool requiring significant human direction, rather than as a genuine collaborator. To navigate attributional ambiguity, they rely on subjective heuristics, such as "gut feelings" of ownership and treating the labor of the research process as a proxy for conceptual contribution. This creates significant ethical tensions and a desire for clearer, more nuanced guidelines.
Conclusion: Researchers face cognitive and practical challenges in applying traditional integrity norms to AI-assisted work. The findings highlight the need for critical dialogue, reflective practices, and nuanced guidelines to uphold research integrity and thoughtfully integrate human value with machine capabilities.
Background: The integration of generative artificial intelligence (GAI) in research raises concerns about transparency, accountability, and task delegation. While frameworks such as CRediT and the NIST AI Use Taxonomy address contributions to research, they either exclude AI-assisted input (CRediT) or do not provide a stage-specific approach (NIST). A structured taxonomy is needed to delineate GAI's contributions across research stages while preserving human oversight and research integrity.
Methods: This study introduces the Generative AI Delegation Taxonomy (GAIDeT), informed by existing contributor role taxonomies, peer-reviewed literature, and an iterative consensus-building approach. It categorizes GAI's contributions at macro and corresponding micro levels, specifying the degree of human oversight required.
Results: GAIDeT provides a structured framework for documenting GAI's role in scholarly research. It classifies research activities into key domains (conceptualization, literature review, methodology, data analysis, writing, supervision, and ethical review), ensuring transparency and human accountability. A GitHub-based interactive tool, the GAIDeT Declaration Generator, was developed to help researchers document delegation choices transparently.
Conclusions: By standardizing GAI task delegation, GAIDeT enhances research integrity and transparency. Future work should focus on empirical validation, cross-disciplinary adaptability, and policy implications for GAI governance.

