{"title":"评估用于验证高等教育学术诚信的人工智能内容生成工具","authors":"Muhammad Bilal Saqib, Saba Zia","doi":"10.1108/jarhe-10-2023-0470","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>The notion of using a generative artificial intelligence (AI) engine for text composition has gained excessive popularity among students, educators and researchers, following the introduction of ChatGPT. However, this has added another dimension to the daunting task of verifying originality in academic writing. Consequently, the market for detecting artificially generated content has seen a mushroom growth of tools that claim to be more than 90% accurate in sensing artificially written content.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>This research evaluates the capabilities of some highly mentioned AI detection tools to separate reality from their hyperbolic claims. For this purpose, eight AI engines have been tested on four different types of data, which cover the different ways of using ChatGPT. These types are Original, Paraphrased by AI, 100% AI generated and 100% AI generated with Contextual Information. The AI index recorded by these tools against the datasets was evaluated as an indicator of their performance.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>The resulting figures of cumulative mean validate that these tools excel at identifying human generated content (1.71% AI content) and perform reasonably well in labelling AI generated content (76.85% AI content). However, they are perplexed by the scenarios where the content is either paraphrased by the AI (39.42% AI content) or generated by giving a precise context for the output (60.1% AI content).</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>This paper evaluates different services for the detection of AI-generated content to verify academic integrity in research work and higher education and provides new insights into their performance.</p><!--/ Abstract__block -->","PeriodicalId":45508,"journal":{"name":"Journal of Applied Research in Higher Education","volume":"78 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluation of AI content generation tools for verification of academic integrity in higher education\",\"authors\":\"Muhammad Bilal Saqib, Saba Zia\",\"doi\":\"10.1108/jarhe-10-2023-0470\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Purpose</h3>\\n<p>The notion of using a generative artificial intelligence (AI) engine for text composition has gained excessive popularity among students, educators and researchers, following the introduction of ChatGPT. However, this has added another dimension to the daunting task of verifying originality in academic writing. Consequently, the market for detecting artificially generated content has seen a mushroom growth of tools that claim to be more than 90% accurate in sensing artificially written content.</p><!--/ Abstract__block -->\\n<h3>Design/methodology/approach</h3>\\n<p>This research evaluates the capabilities of some highly mentioned AI detection tools to separate reality from their hyperbolic claims. For this purpose, eight AI engines have been tested on four different types of data, which cover the different ways of using ChatGPT. These types are Original, Paraphrased by AI, 100% AI generated and 100% AI generated with Contextual Information. 
The AI index recorded by these tools against the datasets was evaluated as an indicator of their performance.</p><!--/ Abstract__block -->\\n<h3>Findings</h3>\\n<p>The resulting figures of cumulative mean validate that these tools excel at identifying human generated content (1.71% AI content) and perform reasonably well in labelling AI generated content (76.85% AI content). However, they are perplexed by the scenarios where the content is either paraphrased by the AI (39.42% AI content) or generated by giving a precise context for the output (60.1% AI content).</p><!--/ Abstract__block -->\\n<h3>Originality/value</h3>\\n<p>This paper evaluates different services for the detection of AI-generated content to verify academic integrity in research work and higher education and provides new insights into their performance.</p><!--/ Abstract__block -->\",\"PeriodicalId\":45508,\"journal\":{\"name\":\"Journal of Applied Research in Higher Education\",\"volume\":\"78 1\",\"pages\":\"\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Research in Higher Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/jarhe-10-2023-0470\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Research in Higher Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/jarhe-10-2023-0470","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Evaluation of AI content generation tools for verification of academic integrity in higher education
Purpose
The notion of using a generative artificial intelligence (AI) engine for text composition has gained enormous popularity among students, educators and researchers following the introduction of ChatGPT. However, this has added another dimension to the already daunting task of verifying originality in academic writing. Consequently, the market for detecting artificially generated content has seen rapid growth in tools that claim to be more than 90% accurate at identifying artificially written content.
Design/methodology/approach
This research evaluates the capabilities of several widely cited AI detection tools in order to separate reality from their hyperbolic claims. For this purpose, eight AI detection engines were tested on four different types of data, covering the different ways of using ChatGPT: Original, Paraphrased by AI, 100% AI generated and 100% AI generated with Contextual Information. The AI index recorded by these tools against each dataset was evaluated as an indicator of their performance.
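The abstract does not include the study's own code, but the aggregation step it describes amounts to averaging each tool's AI index within each dataset category. The sketch below illustrates one plausible way to compute such a cumulative mean; the per-tool scores shown are hypothetical placeholders, not figures from the paper.

```python
# A minimal sketch (not the authors' code) of computing a cumulative mean
# AI index per dataset category across detection tools. The numeric scores
# below are hypothetical placeholders for illustration only.

from statistics import mean

# AI-index scores (percent of content flagged as AI-written) reported by
# each of several hypothetical tools, keyed by the study's four categories.
scores = {
    "Original": [1.0, 2.5, 1.5, 2.0],
    "Paraphrased by AI": [35.0, 42.0, 38.0, 43.0],
    "100% AI generated": [70.0, 80.0, 78.0, 79.0],
    "100% AI generated with Contextual Information": [55.0, 62.0, 58.0, 65.0],
}

# The cumulative mean per category summarizes how strongly the tools,
# taken together, flag each kind of content as AI-generated.
for category, tool_scores in scores.items():
    print(f"{category}: {mean(tool_scores):.2f}% AI content")
```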
Findings
The resulting cumulative mean figures confirm that these tools excel at identifying human-generated content (1.71% AI content) and perform reasonably well at labelling AI-generated content (76.85% AI content). However, they struggle in scenarios where the content has either been paraphrased by the AI (39.42% AI content) or generated by supplying a precise context for the output (60.1% AI content).
Originality/value
This paper evaluates different services for detecting AI-generated content as a means of verifying academic integrity in research work and higher education, and provides new insights into their performance.
Journal description
Higher education around the world has become a major topic of discussion, debate and controversy, as a range of political, economic, social and technological pressures results in a myriad of changes at all levels. But the quality and quantity of critical dialogue and research, and their relationship with practice, remain limited. This internationally peer-reviewed journal addresses this shortfall by focusing on the scholarship and practice of teaching and learning in higher education and covers:
- Higher education teaching, learning, curriculum, assessment, policy, management, leadership, and related areas
- Digitization, internationalization, and democratization of higher education, and related areas such as lifelong and lifewide learning
- Innovation, change, and reflections on current practices