{"title":"LASR: A tool for large scale annotation of software requirements","authors":"I. Hussain, O. Ormandjieva, Leila Kosseim","doi":"10.1109/EmpiRE.2012.6347683","DOIUrl":null,"url":null,"abstract":"Annotation of software requirements documents is performed by experts during the requirements analysis phase to extract crucial knowledge from informally written textual requirements. Different annotation tasks target the extraction of different types of information and require the availability of experts specialized in the field. Large scale annotation tasks require multiple experts where the limited number of experts can make the tasks overwhelming and very costly without proper tool support. In this paper, we present our annotation tool, LASR, that can aid the tasks of requirements analysis by attaining more accurate annotations. Our evaluation of the tool demonstrate that the annotation data collected by LASR from the trained non-experts can help compute gold-standard annotations that strongly agree with the true gold-standards set by the experts, and therefore eliminate the need of conducting costly adjudication sessions for large scale annotation work.","PeriodicalId":335310,"journal":{"name":"2012 Second IEEE International Workshop on Empirical Requirements Engineering (EmpiRE)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Second IEEE International Workshop on Empirical Requirements Engineering (EmpiRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EmpiRE.2012.6347683","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
Annotation of software requirements documents is performed by experts during the requirements analysis phase to extract crucial knowledge from informally written textual requirements. Different annotation tasks target the extraction of different types of information and require the availability of experts specialized in the field. Large-scale annotation tasks require multiple experts; because such experts are scarce, these tasks become overwhelming and very costly without proper tool support. In this paper, we present our annotation tool, LASR, which aids requirements analysis by attaining more accurate annotations. Our evaluation of the tool demonstrates that the annotation data collected by LASR from trained non-experts can help compute gold-standard annotations that strongly agree with the true gold standards set by the experts, thereby eliminating the need to conduct costly adjudication sessions for large-scale annotation work.
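The core idea in the abstract, deriving a gold standard from multiple trained non-experts and checking how strongly it agrees with expert labels, can be illustrated with a short sketch. This is not LASR's actual implementation: the abstract does not specify the aggregation or agreement method, so majority voting and Cohen's kappa are assumptions chosen for illustration, and all names and the toy data below are hypothetical.

```python
# Minimal sketch (assumed method, not LASR's): aggregate non-expert labels
# per item by majority vote, then measure chance-corrected agreement between
# the computed gold standard and the expert gold standard with Cohen's kappa.
from collections import Counter


def majority_vote(labels):
    """Most frequent label among annotators (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]


def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement: product of each rater's marginal label proportions
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)


# Toy data: three trained non-experts annotate six requirements as
# functional ("F") or non-functional ("NF").
non_expert = [
    ["F", "F", "NF"], ["NF", "NF", "NF"], ["F", "F", "F"],
    ["NF", "F", "NF"], ["F", "NF", "F"], ["NF", "NF", "F"],
]
expert = ["F", "NF", "F", "NF", "F", "NF"]  # true gold standard

computed_gold = [majority_vote(item) for item in non_expert]
print("computed gold:", computed_gold)
print("kappa vs. expert gold:", round(cohens_kappa(computed_gold, expert), 3))
```

On this toy data the majority-vote labels match the expert labels exactly (kappa = 1.0); the paper's contribution is that LASR's collected annotations make such high agreement attainable without expert adjudication sessions.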