Software testing involves various artifacts, such as application and test code, requirements, test objectives, and test results, that matter to stakeholders such as developers, testers, and the QA team. To reason and make decisions over these varied artifacts, the information they carry must be recorded in a structured, computer-readable format. This highlights the need for effective knowledge management, for which ontologies are an ideal solution.
We aim to answer two key questions for software testing practitioners who wish to use ontology-based knowledge management: what criteria can one rely on to decide which testing ontology to use, and how do existing ontologies fare against these criteria?
We coalesced several notions of ontology quality under the previously introduced concept of a “beautiful ontology”. In doing so, we focussed on ontology evaluation criteria that are sufficiently well defined to yield repeatable assessments. Relying on published systematic literature reviews, we selected four testing reference ontologies for assessment: STOWS, OntoTest, ROoST, and TestTDO.
Results indicate that only a small number of published ontology assessment criteria are defined with sufficient formality to allow their unbiased use. We therefore primarily based our assessment of the four selected ontologies on the necessary isomorphic mapping between an ontology and the domain it represents, and found that none of them is designed with sufficient rigour. One observation we make is that published ontologies ought to be described more rigorously, with a complete dictionary describing all concepts, properties, relations, and axioms.
