Analyzing Test Performance of BSIT Students and Question Quality: A Study on Item Difficulty Index and Item Discrimination Index for Test Question Improvement
{"title":"Analyzing Test Performance of BSIT Students and Question Quality: A Study on Item Difficulty Index and Item Discrimination Index for Test Question Improvement","authors":"C. P. Olipas, Ruth G. Luciano","doi":"10.5815/ijitcs.2024.03.01","DOIUrl":null,"url":null,"abstract":"This study presents a comprehensive assessment of the test performance of Bachelor of Science in Information Technology (BSIT) students in the System Integration and Architecture (SIA) course, coupled with a meticulous examination of the quality of test questions, aiming to lay the groundwork for enhancing the assessment tool. Employing a cross-sectional research design, the study involved 200 fourth-year students enrolled in the course. The results illuminated a significant discrepancy in scores between upper and lower student cohorts, highlighting the necessity for targeted interventions, curriculum enhancements, and assessment refinements, particularly for those in the lower-performing group. Further examination of the item difficulty index of the assessment tool unveiled the need to fine-tune certain items to better suit a broader spectrum of students. Nevertheless, the majority of items were deemed adequately aligned with their respective difficulty levels. Additionally, an analysis of the item discrimination index identified 25 items suitable for retention, while 27 items warranted revision, and 3 items were suitable for removal, as per the analysis outcomes. These insights provide a valuable foundation for improving the assessment tool, thereby optimizing its capacity to evaluate students' acquired knowledge effectively. The study's novel contribution lies in its integration of both student performance assessment and evaluation of assessment tool quality within the BSIT program, offering actionable insights for improving educational outcomes. 
By identifying challenges faced by BSIT students and proposing targeted interventions, curriculum enhancements, and assessment refinements, the research advances our understanding of effective assessment practices. Furthermore, the detailed analysis of item difficulty and discrimination indices offers practical guidance for enhancing the reliability and validity of assessment tools in the BSIT program. Overall, this research contributes to the existing body of knowledge by providing empirical evidence and actionable recommendations tailored to the needs of BSIT students, promoting educational quality and student success in Information Technology.","PeriodicalId":130361,"journal":{"name":"International Journal of Information Technology and Computer Science","volume":" 18","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Technology and Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5815/ijitcs.2024.03.01","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
This study presents a comprehensive assessment of the test performance of Bachelor of Science in Information Technology (BSIT) students in the System Integration and Architecture (SIA) course, coupled with a careful examination of the quality of the test questions, aiming to lay the groundwork for enhancing the assessment tool. Employing a cross-sectional research design, the study involved 200 fourth-year students enrolled in the course. The results revealed a significant discrepancy in scores between the upper and lower student cohorts, highlighting the need for targeted interventions, curriculum enhancements, and assessment refinements, particularly for the lower-performing group. Further examination of the assessment tool's item difficulty index showed that certain items should be fine-tuned to suit a broader spectrum of students, although the majority of items were adequately aligned with their intended difficulty levels. Additionally, analysis of the item discrimination index identified 25 items suitable for retention, 27 items warranting revision, and 3 items suitable for removal. These insights provide a valuable foundation for improving the assessment tool, thereby strengthening its capacity to evaluate students' acquired knowledge effectively. The study's novel contribution lies in its integration of student performance assessment with evaluation of assessment tool quality within the BSIT program, offering actionable insights for improving educational outcomes. By identifying challenges faced by BSIT students and proposing targeted interventions, curriculum enhancements, and assessment refinements, the research advances our understanding of effective assessment practices. Furthermore, the detailed analysis of item difficulty and discrimination indices offers practical guidance for enhancing the reliability and validity of assessment tools in the BSIT program. Overall, this research contributes to the existing body of knowledge by providing empirical evidence and actionable recommendations tailored to the needs of BSIT students, promoting educational quality and student success in Information Technology.
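The two indices at the heart of the abstract come from classical test theory: the difficulty index of an item is the proportion of all examinees answering it correctly, and the discrimination index compares correct-response rates between an upper and a lower scoring group. The following is a minimal illustrative sketch of that computation, not the authors' actual procedure or data; the 27% group split and the retain/revise/remove thresholds are common rules of thumb assumed here, and the sample score matrix is invented.

```python
# Illustrative classical item analysis: difficulty index (p) and
# discrimination index (D) via the upper/lower-group method.
# NOTE: thresholds and data below are assumptions, not from the study.

def item_analysis(responses, group_frac=0.27):
    """responses: list of per-student lists of 0/1 item scores.
    Returns (difficulty, discrimination, suggested action) per item."""
    n_items = len(responses[0])
    # Rank students by total score; take the top and bottom fractions.
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, int(len(ranked) * group_frac))
    upper, lower = ranked[:k], ranked[-k:]
    results = []
    for i in range(n_items):
        # Difficulty: proportion of all students answering item i correctly.
        p = sum(s[i] for s in responses) / len(responses)
        # Discrimination: upper-group minus lower-group correct rate.
        d = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / k
        # Common rule of thumb (varies by source):
        # D >= 0.40 retain, 0.20 <= D < 0.40 revise, D < 0.20 remove.
        action = "retain" if d >= 0.40 else "revise" if d >= 0.20 else "remove"
        results.append((round(p, 2), round(d, 2), action))
    return results

# Tiny made-up example: 6 students, 3 items.
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 0, 0],
    [1, 0, 0],
]
print(item_analysis(scores))
```

In this toy data, item 1 is answered correctly by everyone (difficulty 1.0, discrimination 0.0), so despite being "easy" it fails to separate strong from weak students and would be flagged for removal, while items 2 and 3 discriminate well and would be retained.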