Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397249
Marwa Moustafa, H. M. Ebeid, A. Helmy, Taymoor M. Nazamy, M. Tolba
Single image super resolution (SISR) is the process of obtaining a high-resolution (HR) image from a single low-resolution (LR) image by recovering high-frequency information and removing noise degradation. Sparse representation models a signal as a linear combination of a few atoms from a pre-specified dictionary and has been used successfully as a prior in signal reconstruction. Dictionary design is therefore crucial to the success of HR image reconstruction. This paper evaluates the performance of both mathematical and learning-based dictionary design models, comparing the wavelet, Haar, DCT, MOD, and K-SVD methods. Various experiments are conducted on a real SPOT-4 satellite image. Experimental results demonstrate that the learning-based approaches are very effective in increasing resolution and compare favorably to the mathematical approaches.
{"title":"Super-resolution: Sparse dictionary design method using quantitative comparison","authors":"Marwa Moustafa, H. M. Ebeid, A. Helmy, Taymoor M. Nazamy, M. Tolba","doi":"10.1109/INTELCIS.2015.7397249","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397249","url":null,"abstract":"Single image super resolution (SISR) is the process that obtains a high resolution image from a single low resolution (LR) image by increasing the high frequency information and removing the degradation of the noise. Sparse representation of signal assumes linear combinations of a few atoms from a pre -specified dictionary. Sparse representation has been used successfully as a prior in signal reconstruction. Dictionary design is crucial for the success of reconstruction high resolution images. This paper evaluates the performance of dictionary design models in both mathematical and learning based models, it also compares the wavelet method, Haar method, DCT method, MOD method and K-SVD method. Various experiments are conducted using a real SPOT-4 satellite image. Experimental results demonstrate that the learning based approaches are very effective in increasing resolution and compare favorably to mathematical based approaches.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"100 1","pages":"383-389"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79497371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397211
Wael Zakaria, Y. Kotb, F. Ghaleb
DNA microarray technology helps researchers learn more about diseases, especially cancer. Using microarray technology, researchers can further classify cancer types on the basis of gene activity (gene expression) patterns in tumor cells. This greatly helps the pharmaceutical community develop more effective drugs, as treatment strategies can be targeted directly at the specific type of cancer. Classification is one of the key data mining techniques used on DNA microarray datasets. The aim of this paper is to build an accurate classifier framework, called MinCAR-Classifier, that mines all minimal high-confidence class association rules (MinCAR) from cancer microarray datasets. Comparative studies on a lung cancer microarray dataset show that the proposed MinCAR-Classifier framework is more accurate than other well-known classifier frameworks.
{"title":"MinCAR-Classifier for classifying lung cancer gene expression dataset","authors":"Wael Zakaria, Y. Kotb, F. Ghaleb","doi":"10.1109/INTELCIS.2015.7397211","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397211","url":null,"abstract":"DNA microarray technology assists researchers to learn more about different diseases especially the study of the cancer diseases. Using the microarray technology, it will be possible for the researchers to further classify the types of cancer on the basis of the patterns of gene activity (gene expression) in the tumor cells. This will tremendously help the pharmaceutical community to develop more effective drugs as the treatment strategies will be targeted directly to the specific type of cancer. The classification technique is one of the important data mining techniques that is used for classifying the DNA microarray datasets. The aim of this paper is to build an accurate classifier framework called MinCAR-Classifier that mines all minimal high confident class association rules, MinCAR, from cancer microarray datasets. Based on lung cancer microarray dataset, the comparative studies show that our proposed MinCAR-Classifier framework is more accurate than other well-known classifier frameworks.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"26 1","pages":"143-148"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86097641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397277
A. Kleschev, S. Smagin
The paper provides an introduction to the area of inductive formation of knowledge bases. It presents traditional definitions of the main problems in this area and highlights current open questions, including the interpretability of the results. To address these problems, a method for the inductive formation of easily interpretable medical diagnostic knowledge bases is proposed. It includes new definitions of the classification and clustering problems for dependence models with parameters, and a learning algorithm (solving these problems under the new definitions) developed for a practically useful and easily interpretable mathematical dependence model with parameters, which is a near real-life ontology of medical diagnostics defined by a system of logical relationships with parameters. It also includes the software package InForMedKB (INductive FORmation of MEDical Knowledge Bases), which implements this learning algorithm. InForMedKB allows users to create training sets (consisting of clinical histories from various therapeutic areas) and to use them for the inductive formation of medical diagnostic knowledge bases. These knowledge bases are presented in a form accepted in the medical literature and contain descriptions of diseases from the specified therapeutic areas, together with an explanation of the knowledge bases based on the clinical histories in the training sets. The formal representation of the medical knowledge bases enables their use in intelligent systems for medical diagnostics.
{"title":"The way of inductive formation of medical diagnostic knowledge bases","authors":"A. Kleschev, S. Smagin","doi":"10.1109/INTELCIS.2015.7397277","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397277","url":null,"abstract":"The paper provides an introduction into the area of inductive formation of knowledge bases. It presents traditional definitions of main problems in this area and highlights the current topical questions including the interpretability of the results. For solving of current problems in defined area the method of inductive formation of easily interpretable medical diagnostic knowledge bases is suggested. It includes the new definitions of classification and clustering problems for dependence models with parameters and the learning algorithm (solving mentioned problems in their new definitions) developed for the practically useful and easily interpretable mathematical dependence model with parameters which is a near real-life ontology of medical diagnostics (defined by a system of logical relationships with parameters). Also it includes the software package InForMedKB (INductive FORmation of MEDical Knowledge Bases) which implements above mentioned learning algorithm. InForMedKB allows to create training sets (consisting of clinical histories from various therapeutic areas) and to use them for inductive formation of medical diagnostic knowledge bases. These knowledge bases are presented in form accepted in the medical literature and contain descriptions of diseases (from specified therapeutic areas) as well as an explanation of these knowledge bases based on descriptions of clinical histories from used training sets. The formal representation of medical knowledge bases enables their usage for intelligent systems for medical diagnostics.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"10 1","pages":"561-566"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87803511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397208
Asmaa A. Alkhouly, A. Mancy, Khaled El-Bahnasy, M. AbdEl-Azeem
An overflow of medical information, together with gaps in knowledge about medications and limited clinical experience, can cause health care professionals to disregard vital information, which affects patient safety. Health care professionals should base their decisions on the best available, up-to-date evidence for diagnosis, prognosis, and therapeutics, on patient values and preferences, and on the clinical experience of both the professional and the patient. Such practice is called evidence-based medicine.
{"title":"Accurate individualized therapeutic plans ontology based on evidence based medicine","authors":"Asmaa A. Alkhouly, A. Mancy, Khaled El-Bahnasy, M. AbdEl-Azeem","doi":"10.1109/INTELCIS.2015.7397208","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397208","url":null,"abstract":"Overflow of medical information, lack of knowledge about medications and clinical experience could cause health care professionals to disregard vital information which affects patient safety. Health care professionals should base their decisions on the best-updated evidence for diagnosis, prognosis, and therapeutics, and on patient values and preferences as well as on the results of the clinical experience of both the professional and patient. Such practice is called evidence-based medicine.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"33 1","pages":"121-130"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87922979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397260
M. Fathy, M. Osama, M. El-Mahallawy
The cutting stock problem (CSP) affects production cost and stock-use efficiency in many industries. Most such industries handle raw material stock in sheet form, with waste reduction as a priority. In this paper we therefore study the two-dimensional CSP (2D CSP) with the main goal of minimizing trim loss. Current approaches are primarily designed to deal with regular stock sheets only and do not handle irregular or defective sheets, which is why the problem is considered only partially solved from an industrial standpoint. We introduce a novel algorithm for the 2D CSP that minimizes waste and addresses defective and/or irregular stock sheets. The algorithm combines image processing, evolutionary programming (EP), and linear programming (LP) into a practical solution. Detection and isolation of sheet defects, and conversion of irregular sheets to regular ones, are accomplished by image processing; the remaining techniques then minimize the waste. Experimental results show that the proposed algorithm achieves lower waste than conventional EP algorithms.
{"title":"Evolutionary-based hybrid algorithm for 2D cutting stock problem","authors":"M. Fathy, M. Osama, M. El-Mahallawy","doi":"10.1109/INTELCIS.2015.7397260","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397260","url":null,"abstract":"Cutting stock problem (CSP) affects cost of production and stock use efficiency in many industries. The majority of such industries handle stock of raw material in sheet form with the priority of waste reduction. Thus, in this paper we study the two-dimensional CSP (2-D CSP) with main goal of minimizing trim loss. Current approaches are primarily designed to deal with regular stock sheets only and do not handle irregular or defective sheets. That is why the problem is considered to be partially solved from an industrial stand point. In this paper, we introduce a novel algorithm for 2D CSP to minimize the waste and address the issue of defective and/or irregular stock sheets. The algorithm utilizes image processing, evolutionary-programming (EP), and Linear programming (LP) to form a practical solution. Detection & Isolation of sheets' defects and conversion of irregular sheets to regular is accomplished by image processing. Further processing is done by the remaining techniques to efficiently minimize the waste. Experimental results show that the proposed algorithm succeeds in achieving lower waste values compared to conventional EP algorithms.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"29 1","pages":"454-457"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85633665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397240
Tawfik Abdel Hakam Hassan, G. Selim, R. Sadek
In Wireless Sensor Networks (WSNs) with a clustered hierarchical structure, cluster head (CH) nodes act as the interface between the leaf sensor nodes and the Base Station (BS). The energy dissipation of the sensors, whatever their type, can be optimized by load balancing the packet TX/RX process in order to prolong the network lifetime and to shorten the advertisement phase for cluster head selection in each round of the LEACH-C protocol. This paper proposes a routing protocol for LEACH-C WSNs in which the system lifetime is extended by assigning a vice cluster head (VCH) to each CH. Unlike other VCH-based protocols, the assigned VCH does not remain idle and take over its responsibility only upon the death of the CH; instead, the VCH shares the TX/RX load with its CH to balance the load distribution, and shortly after the CH dies the VCH is fully loaded until a new VCH receives the TX load. The simulation results confirm the theoretically expected behavior. The functionality of the proposed protocol is tested under different simulation conditions, such as the size of the WSN field, and the simulation results show that the proposed protocol prolongs the network lifetime as expected.
{"title":"A novel energy efficient vice Cluster Head routing protocol in Wireless Sensor Networks","authors":"Tawfik Abdel Hakam Hassan, G. Selim, R. Sadek","doi":"10.1109/INTELCIS.2015.7397240","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397240","url":null,"abstract":"In Wireless Sensor Networks (WSN) with a clustered hierarchical structure, Cluster-Head (CH) nodes are considered the interface between the leaf normal sensors and the Base Station (BS). The energy dissipation of the sensors, whatever their type, can be optimized by a load balancing in the packet TX/RX process in order to prolong the network lifetime and minimize the advertisement phase time for cluster head selection in each round of the LEACH-C protocol. This paper proposes a routing protocol for LEACH_C WSN in which the system lifetime can be extended by assigning a vice cluster head (VCH) to each of the CHs. Unlike other different VCH based protocols, the assigned VCH doest not remain in the idle state and receive its responsibility by the death of the CH, rather the VCH shares the TX/RX load with its CH in order to balance the load distribution, shortly after the death of the CH a VCH is fully loaded until a new VCH receives the TX load. The simulation results confirm that the theoretically expected results. The functionality of the proposed protocol is tested under different simulation conditions such as the size of the WSN field and the results of simulation prove that the proposed protocol prolongs the network lifetime as expected.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"26 1","pages":"313-320"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84824082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397269
K. Ntalianis, Abdel-badeeh M. Salem
In this paper an innovative ranking scheme for social media news items is proposed. The proposed unsupervised architecture takes user-content interactions into consideration, since social media posts receive likes, comments, and shares from friends and other users. Additionally, the importance of each user is modeled with an innovative algorithm that borrows ideas from PageRank. Finally, a novel content ranking component is introduced that ranks posted news items using a social computing method driven by the importance of the social network users who interact with them. Initial experiments on real-life social network news items illustrate the promising performance of the proposed architecture. Comparisons with three other ranking approaches (SUMF, RSN-CO and RSN-nCO) are also provided in terms of user satisfaction.
{"title":"Ranking of news items in rule-stringent social media based on users' importance: A social computing approach","authors":"K. Ntalianis, Abdel-badeeh M. Salem","doi":"10.1109/INTELCIS.2015.7397269","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397269","url":null,"abstract":"In this paper an innovative social media news items ranking scheme is proposed. The proposed unsupervised architecture takes into consideration user-content interactions, since social media posts receive likes, comments and shares from friends and other users. Additionally the importance of each user is modeled, based on an innovative algorithm that borrows ideas from the PageRank algorithm. Finally, a novel content ranking component is introduced, which ranks posted news items based on a social computing method, driven by the importance of the social network users that interact with them. Initial experiments on real life social networks news items illustrate the promising performance of the proposed architecture. Additionally comparisons with three different ranking ways are provided (SUMF, RSN-CO and RSN-nCO), in terms of user satisfaction.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"45 1","pages":"27-33"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88307671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397289
I. Ismail, Walaa K. Gad, M. Hamdy, Khaled Bahnsy
A huge number of documents are available, which may cause information overload problems. Text document annotation provides a solution to such problems. Text annotation is the process of attaching comments, notes, or explanations to text documents. From a technical point of view, annotations are usually seen as metadata, as they give additional information about an existing piece of data. Annotations facilitate the task of identifying the document topic and help the reader quickly overview and understand the document. In this paper, we survey different methods of text document annotation and compare them, indicating the advantages and disadvantages of each method.
{"title":"Text document annotation methods: Stat of art","authors":"I. Ismail, Walaa K. Gad, M. Hamdy, Khaled Bahnsy","doi":"10.1109/INTELCIS.2015.7397289","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397289","url":null,"abstract":"A huge number of documents are available that may cause information overload problems. Text document annotation provides a solution to such type of problems. Text annotation is the process of attaching comments, notes, or explanations to text documents. From a technical point of view, annotations are usually seen as metadata, as they give additional information about an existing piece of data. Annotations facilitate the task of finding the document topic and assist the reader to quickly overview and understand document. In this paper, we study different methods of text document annotation and comparisons among different methods are shown indicating advantages and disadvantages of each method used in annotation.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"12 1","pages":"634-640"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91027048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397247
S. Rady
People express emotions in response to everyday situations and personal communication. Given the diversity of language expressions, providing an accurate estimation of emotion or sentiment is challenging. This paper proposes an intelligent technique and system for sentiment estimation and prediction in the business domain. It is useful for management sectors, where such tools can automatically analyze collected data and reveal employees' opinions about their organization or any ongoing topic. The challenge in this work is to detect sentiment classes from relatively long text, where writers merge sentences and expressions when asked to write reviews instead of being asked directly for their sentiment degree. The approach is data-driven and uses machine learning to train classifier features to recognize sentiment. A system is implemented and tested, on real data collected from employee reviews at large IT organizations, for both two-class and five-class problems. The recorded results demonstrate the efficiency of the technique.
{"title":"A business intelligent technique for sentiment estimation by management sectors","authors":"S. Rady","doi":"10.1109/INTELCIS.2015.7397247","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397247","url":null,"abstract":"People express emotions in response to everyday situation and personal communication. With diversity of language expressions, it is challenging to provide an accurate estimation of emotion or sentiment. This paper proposes intelligent technique and system for sentiment estimation and prediction in the business domain. It is useful for management sectors where tools can automatically analyze collected data and reveal employees' opinion about their organization, or any ongoing topic. The challenge in this work is to detect sentiment classes from relatively long text, where writers merge sentences and expressions when asked to write reviews, instead of being directly asked to write their sentiment degree. The approach is data-driven, which uses machine learning to train classifier features to recognize the sentiment. A system is implemented and tested (on real data collected from employee reviews at big IT organizations) towards two and five classification degrees problems. Recorded results prove efficiency of the technique.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"71 1","pages":"370-376"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89079324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397280
Vera G. Meister
The paper describes the first phases of a project to develop and implement a semantic-web-based information system. The system shall support standard decision use cases for different stakeholders of specific degree programs in the German-speaking world. The goal is to enhance the provision of information via program websites through standardized semantic annotation based on a domain-specific ontology. Starting with a description of the knowledge domain, the paper discusses different solution approaches and outlines a system design for the chosen approach. The basic technical feasibility of the system is demonstrated in a proof of concept. The paper ends with a short survey of further work, comprising research and development tasks as well as social and management tasks.
{"title":"A semantic-web-based decision support system for stakeholders of specific degree programs","authors":"Vera G. Meister","doi":"10.1109/INTELCIS.2015.7397280","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397280","url":null,"abstract":"The paper describes the first phases of a development and implementation project of a semantic-web-based information system. It shall support standard decision use cases for different stakeholders of specific degree programs in the German-speaking world. The goal is to enhance the provision of information via program websites by standardized semantic annotation based on a domain-specific ontology. Starting with a description of the knowledge domain, the paper discusses different solution approaches and outlines a system design for the chosen approach. The basic technical feasibility of the system is shown in a proof of concept. The paper ends with a short survey of further work, comprising research and development tasks, as well as social and management tasks.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"26 1","pages":"34-38"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82940754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}