Abstract Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which allows obtaining the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or the numbers of job parts increase. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic always schedules 2 jobs with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. It is expected that scheduling 12 and more jobs carries at most the same risk, or an even lower one. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the averaged computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. Adding further jobs may delay obtaining the minimal tardiness by at least a few minutes, although 7 jobs can still be scheduled within 7 minutes at worst. When scheduling 8 jobs and more, the exact model should be substituted with the heuristic.
{"title":"Minimal Total Weighted Tardiness in Tight-Tardy Single Machine Preemptive Idling-Free Scheduling","authors":"V. Romanuke","doi":"10.2478/acss-2019-0019","DOIUrl":"https://doi.org/10.2478/acss-2019-0019","url":null,"abstract":"Abstract Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which allows obtaining the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or numbers of job parts increase. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic schedules 2 jobs always with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. It is expected that scheduling 12 and more jobs has at the most the same risk or even lower. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the averaged computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. Further increment of jobs may delay obtaining the minimal tardiness at least for a few minutes, but 7 jobs still can be scheduled at worst for 7 minutes. When scheduling 8 jobs and more, the exact model should be substituted with the heuristic.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"83 1","pages":"150 - 160"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85554943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract System analysis is a crucial and complex step in the software engineering process, which affects the overall success of the project and the quality of the project outcome. Even though Agile methods have become widely popular, these methods impose no structure on requirements elicitation and specification, which can have an impact on whether a project has a favourable outcome. Nevertheless, regardless of the approach chosen by industry practitioners, it is important to identify which activities are currently performed and to analyse the causes and possible issues that are encountered. The paper presents results from an expert survey on the importance of activities related to the requirements elicitation, analysis and specification process and on the use of tools to support this process. The Delphi method, which is used to evaluate the responses, is described. Lists of activities are ranked according to importance, and additional information on expert responses is given in the paper. The information can give an insight into the activities and tools that are used in the industry.
{"title":"Results From Expert Survey on System Analysis Process Activities","authors":"Laima Leimane, O. Ņikiforova","doi":"10.2478/acss-2019-0018","DOIUrl":"https://doi.org/10.2478/acss-2019-0018","url":null,"abstract":"Abstract System analysis is a crucial and complex step in software engineering process, which affects the overall success of the project and quality of the project outcome. Even though Agile methods have become widely popular, these methods have no structure when it comes to requirements elicitation and specification, which can have impact on whether a project has favourable outcome. Nevertheless, regardless of the chosen approach by industry practitioners, it is important to identify, which activities are currently performed, and analyse the causes and possible issues, which are encountered. The paper presents results from expert survey on the importance of activities related to requirements elicitation, analysis and specification process and the use of tools to support this process. Delphi method, which is used to evaluate the responses, is described. Lists of activities are ranked according to importance and additional information on expert responses is given in the paper. The information can give an insight into the activities and tools that are used in the industry.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"23 1","pages":"141 - 149"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80226462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Extracting core elements of the Topological Functioning Model (TFM) from use case scenarios requires processing of both structure and natural language constructs in use case step descriptions. The processing steps are discussed in the present paper. Analysis of natural language constructs is based on outcomes provided by Stanford CoreNLP, a Natural Language Processing pipeline that allows analysing text at the paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, its accuracy depends on the language constructs used and on the accuracy of the specification of event flows. The analysis of the results allows concluding that even use case specifications require the use of a rigorous, or even uniform, structure of paths and sentences, as well as awareness of possible parsing errors.
{"title":"Extracting TFM Core Elements From Use Case Scenarios by Processing Structure and Text in Natural Language","authors":"Erika Nazaruka, J. Osis, Viktorija Gribermane","doi":"10.2478/acss-2019-0012","DOIUrl":"https://doi.org/10.2478/acss-2019-0012","url":null,"abstract":"Abstract Extracting core elements of Topological Functioning Model (TFM) from use case scenarios requires processing of both structure and natural language constructs in use case step descriptions. The processing steps are discussed in the present paper. Analysis of natural language constructs is based on outcomes provided by Stanford CoreNLP. Stanford CoreNLP is the Natural Language Processing pipeline that allows analysing text at paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, accuracy of it is dependent on the used language constructs and accuracy of specification of event flows. The analysis of the results allows concluding that even use case specifications require the use of rigor, or even uniform, structure of paths and sentences as well as awareness of the possible parsing errors.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"1 1","pages":"103 - 94"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88198157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract High dimensionality is a well-known problem in which the data contain a huge number of features, yet only a few of them are helpful for a particular data mining task, for example, classification or clustering. Therefore, feature selection is frequently used to reduce the dimensionality of a data set. Feature selection is a multi-objective task, which reduces dataset dimensionality, decreases the running time, and also improves the expected accuracy. In the study, our goal is to reduce the number of features of electroencephalography data for eye state classification and achieve the same or even better classification accuracy with the least number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy obtained with the feature subset selected by the proposed technique is improved compared to the full feature set. Results show that the classification accuracy of the proposed strategy is improved by 3 % on average when compared with the accuracy without feature selection.
{"title":"Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data","authors":"Tariq Ali, Asif Nawaz, H. Sadia","doi":"10.2478/acss-2019-0015","DOIUrl":"https://doi.org/10.2478/acss-2019-0015","url":null,"abstract":"Abstract High dimensionality is a well-known problem that has a huge number of highlights in the data, yet none is helpful for a particular data mining task undertaking, for example, classification and grouping. Therefore, selection of features is used frequently to reduce the data set dimensionality. Feature selection is a multi-target errand, which diminishes dataset dimensionality, decreases the running time, and furthermore enhances the expected precision. In the study, our goal is to diminish the quantity of features of electroencephalography data for eye state classification and achieve the same or even better classification accuracy with the least number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy is improved with the selected feature subset using the proposed technique as compared to the full feature set. Results prove that the classification precision of the proposed strategy is enhanced by 3 % on average when contrasted with the accuracy without feature selection.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"25 1","pages":"119 - 127"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72890196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract During recent years, numerous endeavours have been made in the area of software development effort estimation for calculating software costs in the preliminary development stages. These studies have resulted in a great many models being offered. Despite the considerable effort, the substantial problems of the offered methods are their dependency on the dataset used and, sometimes, their lack of appropriate efficiency. The current article presents a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristic of this model is its lack of dependency on the dataset used as well as its high efficiency. To evaluate the proposed model, six different datasets in the area of software effort estimation have been used. The reason for the application of several datasets is the investigation of the independence of the model performance from the dataset used. The evaluation metrics have been MMRE, MdMRE and PRED (0.25). The results have indicated that the proposed model, besides delivering high efficiency in contrast to its counterparts, produces the best responses for all of the datasets used.
{"title":"A Dataset-Independent Model for Estimating Software Development Effort Using Soft Computing Techniques","authors":"Mahdi Khazaiepoor, A. K. Bardsiri, F. Keynia","doi":"10.2478/acss-2019-0011","DOIUrl":"https://doi.org/10.2478/acss-2019-0011","url":null,"abstract":"Abstract During the recent years, numerous endeavours have been made in the area of software development effort estimation for calculating the software costs in the preliminary development stages. These studies have resulted in the offering of a great many of the models. Despite the large deal of efforts, the substantial problems of the offered methods are their dependency on the used data collection and, sometimes, their lack of appropriate efficiency. The current article attempts to present a model for software development effort estimation through making use of evolutionary algorithms and neural networks. The distinctive characteristic of this model is its lack of dependency on the collection of data used as well as its high efficiency. To evaluate the proposed model, six different data collections have been used in the area of software effort estimation. The reason for the application of several data collections is related to the investigation of the model performance independence of the data collection used. The evaluation scales have been MMRE, MdMRE and PRED (0.25). The results have indicated that the proposed model, besides delivering high efficiency in contrast to its counterparts, produces the best responses for all of the used data collections.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"61 1","pages":"82 - 93"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84573420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Over the past two decades, an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems, resulting in differentiated models that are application dependent and may lack adaptability. This research proposes a generalised model encompassing the major features of specialised existing fuzzy systems. Generalisation was achieved by design: the major components of the differentiated systems were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rules base) for any medical application and users to enter symptoms (facts base) and query their medical conditions through the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check its precision, sensitivity and specificity.
{"title":"Fuzzy Expert System Generalised Model for Medical Applications","authors":"Osée Muhindo Masivi","doi":"10.2478/acss-2019-0016","DOIUrl":"https://doi.org/10.2478/acss-2019-0016","url":null,"abstract":"Abstract Over the past two decades an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems resulting in differentiated models which are application dependent and may lack adaptability. This research proposes a generalized model encompassing major features in specialized existing fuzzy systems. Generalization modelling by design in which the major components of differentiated the system were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rules base) for any medical application and users to enter symptoms (facts base) and ask their medical conditions from the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check its precision, sensitivity and specificity.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"61 7 1","pages":"128 - 133"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90567134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The class imbalance problem, one of the common data irregularities, causes the development of under-represented models. To resolve this issue, the present study proposes a new cluster-based MapReduce design, entitled Distributed Cluster-based Resampling for Imbalanced Big Data (DIBID). The design aims at modifying the existing dataset to increase the classification success. Within the study, DIBID has been implemented on public datasets under two strategies. The first strategy has been designed to present the success of the model on datasets with different imbalance ratios. The second strategy has been designed to compare the success of the model with other imbalanced big data solutions in the literature. According to the results, DIBID outperformed other imbalanced big data solutions in the literature and increased area under the curve values by between 10 % and 24 % in the case study.
{"title":"A New Big Data Model Using Distributed Cluster-Based Resampling for Class-Imbalance Problem","authors":"Duygu Sinanc Terzi, Ş. Sağiroğlu","doi":"10.2478/acss-2019-0013","DOIUrl":"https://doi.org/10.2478/acss-2019-0013","url":null,"abstract":"Abstract The class imbalance problem, one of the common data irregularities, causes the development of under-represented models. To resolve this issue, the present study proposes a new cluster-based MapReduce design, entitled Distributed Cluster-based Resampling for Imbalanced Big Data (DIBID). The design aims at modifying the existing dataset to increase the classification success. Within the study, DIBID has been implemented on public datasets under two strategies. The first strategy has been designed to present the success of the model on data sets with different imbalanced ratios. The second strategy has been designed to compare the success of the model with other imbalanced big data solutions in the literature. According to the results, DIBID outperformed other imbalanced big data solutions in the literature and increased area under the curve values between 10 % and 24 % through the case study.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"32 1","pages":"104 - 110"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79628408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract To distinguish individuals with dangerous abnormal behaviours from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to detect anomalies. An individual's abnormal behaviour alone cannot indicate a threat toward other individuals, as such behaviour can also be triggered by positive emotions or events. To exclude individuals whose abnormal behaviour is potentially unrelated to aggression and is not dangerous to the environment, it is suggested to use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to detect potentially dangerous situations automatically.
{"title":"Affective State Based Anomaly Detection in Crowd","authors":"Glorija Baliniskite, E. Lavendelis, Mara Pudane","doi":"10.2478/acss-2019-0017","DOIUrl":"https://doi.org/10.2478/acss-2019-0017","url":null,"abstract":"Abstract To distinguish individuals with dangerous abnormal behaviours from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to determine anomaly. An individual’s abnormal behaviour alone cannot indicate behaviour, which can be threatening toward other individuals, as this behaviour can also be triggered by positive emotions or events. To avoid individuals whose abnormal behaviour is potentially unrelated to aggression and is not environmentally dangerous, it is suggested to use emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to automatically detect potentially dangerous situations.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"83 1","pages":"134 - 140"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90141210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Competence management is a discipline that has recently regained popularity due to the growing demand for constantly higher competences of employees as well as graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts' implicit knowledge. This is the reason why the transformation of implicit knowledge into explicit knowledge is practically unmanageable and, as a consequence, limits the ability to transfer already existing knowledge from one organisation to another. The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together for the purpose of identification, assessment and development of customers' competences without forcing the organisations to change their routine competence management processes. The proposed competence model is used as a basis for the development of a competence management model on which IT tools that support competence management processes may be built. Several existing frameworks have been analysed and the terminology used in them has been combined in a single model. The usage of the proposed model is discussed and possible IT tools to support the competence management process are identified in the paper.
{"title":"Development of Ontology Based Competence Management Model for Non-Formal Education Services","authors":"Uldis Zandbergs, J. Grundspeņķis, Janis Judrups, Signe Brike","doi":"10.2478/acss-2019-0014","DOIUrl":"https://doi.org/10.2478/acss-2019-0014","url":null,"abstract":"Abstract Competence management is a discipline that recently has regained popularity due to the growing demand for constantly higher competences of employees as well as graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts’ implicit knowledge. This is the reason why the transformation of implicit knowledge into explicit knowledge practically is unmanageable and, as a consequence, limits the ability to transfer the already existing knowledge from one organisation to another. The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education where different competence frameworks need to be used together for the purpose of identification, assessment and development of customers’ competences without forcing the organisations to change their routine competence management processes. The proposed competence model is used as a basis for development of competence management model on which IT tools that support a competence management processes may be built up. Several existing frameworks have been analysed and the terminology used in them has been combined in a single model. The usage of the proposed model is discussed and the possible IT tools to support the competence management process are identified in the paper.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"26 1","pages":"111 - 118"},"PeriodicalIF":1.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85087438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Legal reasoning, the core of legal practice in many countries, is "stare decisis", and its soundness is usually strengthened by the relevant case law consulted. However, the task of accessing and retrieving relevant case law is tiring to legal practitioners and constitutes a serious drain on their productivity. Existing efforts at addressing this problem are conceptual, restrictive or unreliable. Specifically, existing semantic retrieval (SR) systems for case law fall short of the exceptional retrieval precision desired. Ontology promises to meet this need if introduced into the SR system. As a consequence, an ontology-based SR system for case law has been built using the systems analysis and design methodology. In particular, the component-based software engineering and agile methodologies are employed to implement the system. Finally, the search and retrieval performance of the resultant SR system has been evaluated using the heuristic evaluation method. The retrieval system has shown a search and retrieval performance of about 94 % precision, 80 % recall and 84 % F-measure. Overall, the paper implements an SR system for case law with excellent precision and affirms the superiority of the ontology approach over other semantic approaches to SR systems for document retrieval in the legal domain.
{"title":"A Semantic Retrieval System for Case Law","authors":"E. P. Ebietomere, G. Ekuobase","doi":"10.2478/acss-2019-0006","DOIUrl":"https://doi.org/10.2478/acss-2019-0006","url":null,"abstract":"Abstract Legal reasoning, the core of legal practice in many countries, is “stare decisis” and its soundness is usually strengthened by relevant case law consulted. However, the task of relevant case law access and retrieval is tiring to legal practitioners and constitutes a serious drain on their productivity. Existing efforts at addressing this problem are conceptional, restrictive or unreliable. Specifically, existing semantic retrieval (SR) systems for case law are desirous of exceptional retrieval precision. Ontology promises to meet this desire, if introduced to the SR system. As a consequence, an ontology-based SR system for case law has been built using the systems analysis and design methodology. In particular, the component-based software engineering and the agile methodologies are employed to implement the system. Finally, the search and retrieval performance of the resultant SR system has been evaluated using the heuristics evaluation method. The retrieval system has shown to have a search and retrieval performance of about 94 % precision, 80 % recall and 84 % F-measure. Overall, the paper implements the SR system for case law with excellent precision and affirms the superiority of ontology approach over other semantic approaches to SR systems for document retrieval in the legal domain.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"22 1","pages":"38 - 48"},"PeriodicalIF":1.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78482998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}