Pub Date: 2012-02-09. DOI: 10.1109/ISIICT.2011.6149602
M. Bouarioua, A. Chaoui, R. Elmansouri
This paper presents an approach for transforming UML Statecharts into Generalized Stochastic Petri Nets. The Unified Modelling Language (UML) is considered the standardized language for modelling and describing system behaviour for analysis. Petri Net models, on the other hand, are tools for the performance analysis of distributed systems. Graph grammars aim to bridge the gap between the semi-formal models produced with UML and formal notations such as labeled Generalized Stochastic Petri Net (LGSPN) models by means of transformations. Since the models concerned by this transformation are both graphs, we use a Java-based graph transformation tool and Eclipse to perform the process automatically.
Title: "From UML statecharts diagrams to labeled Generalized Stochastic Petri Net models using graph transformation"
Published in: International Symposium on Innovations in Information and Communications Technology
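The abstract describes a rule-based mapping between two graph models. As a loose illustration of the idea only (not the authors' graph grammar), a toy mapping from a statechart-like graph to a Petri-net-like structure might look as follows; all names and the data layout are hypothetical.

```python
# Illustrative sketch only: map each state to a place and each labeled
# transition (src, event, dst) to a Petri transition with one input arc
# and one output arc. Hypothetical layout, not the paper's grammar rules.

def statechart_to_petri(states, transitions):
    places = {s: f"P_{s}" for s in states}
    net_transitions = []
    for src, event, dst in transitions:
        net_transitions.append({
            "name": f"T_{event}",
            "inputs": [places[src]],   # consume a token from the source place
            "outputs": [places[dst]],  # produce a token in the target place
        })
    return places, net_transitions

places, trans = statechart_to_petri(
    ["Idle", "Busy"],
    [("Idle", "start", "Busy"), ("Busy", "done", "Idle")],
)
```

In the actual approach, such rules are expressed as graph grammar productions and executed by a transformation engine rather than hand-coded.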
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149609
T. Sari, Chaouki Chemam
Web users want quick and accurate access to images. The method currently used by search engines is the analysis of the text surrounding an image, which often causes errors, since there is a large gap between the content of an image and its associated textual description. Realizing a Web search engine for images that considers their content has therefore become necessary. In this paper, we propose a method for collecting images of old Arabic documents from the Web. This work focuses mainly on content-based image retrieval by texture features, using a neural network for classification and integrating the user in the search loop. The system begins with the formulation of a text query, which is expanded and sent to a conventional search engine. The obtained results are then filtered by a neural network and finally displayed to the user for approval. Experiments with various text queries showed good performance, and hundreds of old Arabic documents were collected.
Title: "A neural re-ranking method for searching ancient Arabic documents on the Web"
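The retrieval loop described above (expand the text query, retrieve candidates with a conventional engine, filter with a trained classifier, show the survivors to the user) can be sketched as below. The synonym table and the texture classifier are stand-in stubs, not the authors' trained network.

```python
# Hedged sketch of the re-ranking pipeline; every name here is hypothetical.

def expand_query(query, synonyms):
    """Add known synonyms to the query terms before searching."""
    terms = set(query.split())
    for t in list(terms):
        terms.update(synonyms.get(t, []))
    return terms

def rerank(candidates, classify, threshold=0.5):
    """Keep candidates whose classifier score passes the threshold,
    best-scored first, for the user to confirm or reject."""
    scored = [(classify(c), c) for c in candidates]
    return [c for s, c in sorted(scored, reverse=True) if s >= threshold]

# toy usage: scores stand in for a texture-based neural classifier
scores = {"img1": 0.9, "img2": 0.2, "img3": 0.7}
kept = rerank(list(scores), scores.get)
```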
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149601
O. M. Rijal, N. Noor, Chang Yun Fah
Correlation generally shows the relationship between variables. A judicious use of this relationship may yield a measure of performance for a given algorithm. In this study, the correlation measure RP2, derived from the Unreplicated Linear Functional Relationship (ULFR) model, is shown to be a useful measure of performance for selecting a procedure or algorithm in a particular image registration method, a medical treatment, a character recognition method, and a compression method. The main result of these numerical studies strongly suggests that RP2 is potentially useful as a performance measure in a wide range of imaging problems.
Title: "Application of correlation as a measure of performance"
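As a much simpler stand-in for the ULFR-based RP2 measure (whose derivation is not given in the abstract), a squared Pearson correlation between a reference measurement and an algorithm's output already illustrates the idea of using correlation as an agreement score:

```python
# Illustration only: squared Pearson correlation, NOT the paper's RP2.
# A value near 1 means the algorithm's output tracks the reference linearly.

def squared_correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```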
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149610
Rashid Al-Zubaidy, M. Y. Shambour
The Dead Sea (DS) basin plays a major role in regional economic development (industry, tourism and agriculture) in Jordan. Several studies state that the water level of the DS is dropping by an average of 3 feet per year. Accordingly, there is a need for accurate and reliable estimates of the water level to help researchers and geologists of the DS carry out different kinds of studies. This is achieved by applying three Artificial Neural Network (ANN) algorithms to meteorological data recorded at different stations and sources inside and outside Jordan. The models are trained and tested with Backpropagation (BP), Levenberg-Marquardt (L-M), and Generalized Regression Neural Networks (GRNN), and the results are verified with untrained data. The results of the different algorithms are compared with each other, and performance evaluation criteria are calculated to evaluate and compare the models. Finally, the proposed GRNN model provides the best performance compared with the other NN models in terms of Mean Square Error (MSE).
Title: "Prediction of the Dead Sea water level using neural networks"
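The model comparison step, selecting the network with the lowest Mean Square Error on held-out data, can be sketched as follows; the prediction lists are placeholders, not outputs of the trained BP, L-M, or GRNN models.

```python
# Hedged sketch of MSE-based model selection on a held-out test set.

def mse(actual, predicted):
    """Mean squared error between observed and predicted levels."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def best_model(models, actual):
    """models: name -> list of predictions on the test set.
    Returns the name of the model with the lowest MSE."""
    return min(models, key=lambda name: mse(actual, models[name]))
```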
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149596
Fayçal Bachtarzi, Sofiane Chemaa, A. Chaoui
With the emergence of e-business technology and increasing competition between companies, Web services have gradually become widely used and popular. The composition of Web services constitutes a natural evolution of this technology: it designs and builds complex inter-enterprise business applications out of single Web-based software components. However, this task remains highly complex and requires formal techniques for its completion. In this paper, we show how basic, existing services can be composed to create a composite service offering new functionality. To this end, we propose an expressive G-Net-based algebra that successfully handles complex Web service composition. The basic and advanced constructs supported by the proposed algebra are syntactically and semantically defined.
Title: "A G-Net based approach for Web service composition"
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149603
Zhengxin Chen, Jeff Torson, Santosh Servisetti
As an active research field, database keyword search (KWS) has put much emphasis on performance issues, due to its high computational cost. However, a closer examination of KWS reveals other interesting aspects worth noting. In this paper, we examine KWS from a broader perspective, particularly its relationship with data mining. Freed from syntax-related considerations, KWS users have better opportunities to explore the data the way they wish, and such exploration may reveal useful insights about what to do in data mining. We have recently conducted our KWS research from this perspective. We propose a software environment that offers a dual-mode approach to exploring KWS: the database mode implements database KWS directly by incorporating various KWS algorithms, while the XML mode converts the database contents to an XML document on which KWS is conducted. The dual-mode approach not only has the potential of achieving integrated KWS on both structured and semistructured data, but also facilitates query relaxation by incorporating ontologies in the XML mode. The software environment still allows us to observe performance-related issues of KWS; more importantly, it offers a freehand approach for users to explore the data and thus has the potential of aiding data mining. Component design and experimental studies are described.
Title: "Reexamining database keyword search: Beyond performance"
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149597
H. Belleili-Souici, Betouil Ali Abdelatif
In Web service composition, various Web services provide the same functionality but differ in their QoS parameters. Hence, there exist several execution plans with different QoS attributes that fulfill the user's constraints. We propose in this paper an algorithm for selecting an execution plan that satisfies, as much as possible, the user's end-to-end QoS requirements and preferences. The algorithm is based on Pareto search: an execution plan is Pareto-optimal for given QoS user requirements and preferences if it is not possible to improve one QoS attribute without deteriorating at least one other. Unlike existing propositions, which bind a unique value (utility function) to a service's QoS vector, our approach assigns a utility vector to each execution plan, where each element of the vector represents the plan's utility for the corresponding QoS attribute. However, since we can be faced with incomparable utility vectors, we propose to use a lexicographic preorder for the Pareto search, corresponding to a preference preorder among the quality attributes given in the user's request. Experiments on large problem instances demonstrate the scalability and effectiveness of the approach.
Title: "Using lexicographic preorder for pareto search in QoS-aware web service composition"
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149606
J. Abdul-Jabbar, R. W. Hmad
In this paper, Bireciprocal Lattice Wave Digital Filters (BLWDFs) are utilized in an approximate linear-phase design of 9th-order IIR wavelet filter banks (FBs). Each of the two branches in the structure of the BLWDF realizes an allpass filter. The low coefficient sensitivity, good dynamic range and good stability properties of such filters allow their realization with short coefficient wordlengths. Suitable coefficient wordlength representations are estimated for the best selection of some prescribed performance measures. The quantized coefficients are then realized in a multiplierless manner and implemented on a Xilinx FPGA device. As a result, less complex infinite impulse response (IIR) wavelet filter bank structures are obtained with linear-phase processing.
Title: "Allpass-based design, multiplierless realization and implementation of IIR wavelet filter banks with approximate linear phase"
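The structure relies on the defining property of allpass sections: unit magnitude at every frequency. A quick numeric check of a first-order section H(z) = (a + z^-1) / (1 + a*z^-1) confirms this; the coefficient 0.375 is an arbitrary example (a sum of powers of two, the kind of value that admits a multiplierless realization), not a coefficient from the paper.

```python
# Numeric check that a first-order allpass section has |H(e^jw)| = 1
# at every frequency, regardless of the coefficient a (|a| < 1).

import cmath
import math

def allpass_mag(a, omega):
    """Magnitude of H(z) = (a + z^-1) / (1 + a*z^-1) at z = e^{j*omega}."""
    z_inv = cmath.exp(-1j * omega)
    return abs((a + z_inv) / (1 + a * z_inv))

# sample the magnitude response across the band
mags = [allpass_mag(0.375, w / 10 * math.pi) for w in range(1, 10)]
```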
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149600
M. Bakar., S. Ghoul
An agent can play one or more roles. A role is a specific behavior to be played by an agent, defined in terms of permissions, responsibilities, activities, and its interactions with other roles. An agent plays a role by actualizing the behavior in terms of services to be activated and deactivated depending on specific pre-conditions and post-conditions. Hence, a model that formally represents the role of an agent and its interactions with other agents is valuable. A role model expressing how an agent assumes and changes roles is essential, and AUML does not fully support one. The core parts of AUML are interaction protocol diagrams and agent class diagrams, which are extensions of UML's sequence diagrams and class diagrams, respectively. Agents are assigned to roles and belong to classes, and an interaction protocol diagram shows interactions between these agent roles along a timeline. The major problems of AUML are thus the lack of: (1) an agent role definition methodology, (2) formal semantics for AUML role diagrams, (3) an internal (responsibilities) definition of agent roles (only the external role is defined by sequence diagrams), and (4) agent role control over time. In this paper, we propose a solution to the above problems. We start by analyzing some significant current approaches to agent role modeling, then introduce an enhancement to AUML with an agent role definition methodology, and end by comparing our contribution with similar works.
Title: "A methodology for AUML role modeling"
Pub Date: 2011-11-01. DOI: 10.1109/ISIICT.2011.6149593
Samir Tartir, I. Arpinar, Bobby McKnight
As more data is semantically annotated, it is becoming more common for researchers in multiple disciplines to rely on semantic repositories that contain large amounts of data in the form of ontologies as a compact source of information. One of the main issues currently facing these researchers is the lack of easy-to-use interfaces for data retrieval, due to the need to use special query languages or applications. In addition, the knowledge in these repositories might not be comprehensive or up to date, for example because new knowledge was discovered in the field after the repositories were created. In this paper, we introduce an enhanced version of our SemanticQA system, which allows users to query semantic data repositories using natural language questions. If a user question cannot be answered solely from the ontology, SemanticQA detects the failing parts, attempts to answer them from Web documents, and plugs the answers in to answer the whole question, possibly repeating the same process if other parts fail.
Title: "SemanticQA: Exploiting semantic associations for cross-document question answering"