Classification of English language learner writing errors using a parallel corpus with SVM
Pub Date: 2014-10-01 | DOI: 10.1504/IJKWI.2014.065063
Brendan Flanagan, Chengjiu Yin, Takahiko Suzuki, S. Hirokawa
In order to overcome mistakes, learners need feedback that prompts reflection on their errors. This is a particularly important issue for educational systems, as a system's effectiveness in finding errors can have a direct impact on learning. Finding errors is essential to providing the guidance learners need to overcome their weaknesses, yet doing so manually in written work takes considerable time and effort. The authors have a long-term research goal of creating tools that make learners, especially autonomous learners, more aware of their errors and give them a way to reflect on them. As part of this research, we propose using a classifier to automatically analyse and categorise the errors in foreign language writing. For the experiment in this paper, we collected random sentences written by foreign language learners from the Lang-8 website and manually classified them into predefined error categories to create machine learning training data. An SVM classifier was then trained on this data. As manual classification of training data is time-consuming, the classifier is intended to accelerate the generation of further training data.
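As a rough illustration of the kind of pipeline the abstract describes, the following sketch trains a linear SVM on manually labelled learner sentences using a TF-IDF bag-of-words representation; the feature set, error categories and library choice are assumptions for illustration, not the authors' actual setup.

```python
# Minimal sketch of training an SVM to label learner sentences with error
# categories. The features, category names and library choice are assumptions
# for illustration, not the paper's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical manually labelled sentences (category names are placeholders).
sentences = [
    "He go to school every day.",        # verb error
    "I am agree with your opinion.",     # verb error
    "She bought a apple at the store.",  # article error
]
labels = ["verb", "verb", "article"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(sentences, labels)

# The trained classifier can then pre-label new learner sentences for a human
# annotator to review, accelerating the creation of further training data.
print(model.predict(["He have two brother."]))
```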
{"title":"Classification of English language learner writing errors using a parallel corpus with SVM","authors":"Brendan Flanagan, Chengjiu Yin, Takahiko Suzuki, S. Hirokawa","doi":"10.1504/IJKWI.2014.065063","DOIUrl":"https://doi.org/10.1504/IJKWI.2014.065063","url":null,"abstract":"In order to overcome mistakes, learners need feedback to prompt reflection on their errors. This is a particularly important issue in education systems as the system effectiveness in finding errors or mistakes could have an impact on learning. Finding errors is essential to providing appropriate guidance in order for learners to overcome their flaws. Traditionally the task of finding errors in writing takes time and effort. The authors of this paper have a long-term research goal of creating tools for learners, especially autonomous learners, to enable them to be more aware of their errors and provide a way to reflect on the errors. As a part of this research, we propose the use of a classifier to automatically analyse and determine the errors in foreign language writing. For the experiment in this paper, we collected random sentences from the Lang-8 website that had been written by foreign language learners. Using predefined error categories, we manually classified the sentences to use as machine learning training data. This was then used to train a classifier by applying SVM machine learning to the training data. As the manual classification of training data takes time, it is intended that the classifier would be used to accelerate the process used for generating further training data.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127413706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting self-control of individual training for motor-skill development with a social web environment
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056372
Kenji Matsuura, Hiroki Moriguchi, K. Kanenishi
This study proposes a system to support self-controlled motor-skill development in a web-community environment. We discuss the difficulties involved in sustaining self-controlled training without systematic support. The proposed system provides a function that suggests an appropriate range of target goals based on data from previous training sessions. A prototype system has been designed and developed, and this study reports a case study based on a trial use of the system. The results suggest that our approach contributes to each user's ability to achieve target goals.
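The abstract does not say how the target-goal range is derived from previous sessions; the following is a purely hypothetical sketch in which the range is taken from the mean and spread of a user's recent session scores.

```python
# Hypothetical sketch: suggest a target range for the next training session
# from recent performance data. The statistics used here (recent mean plus a
# modest stretch bounded by past variability) are an assumption, not the
# authors' algorithm.
from statistics import mean, stdev

def suggest_target_range(session_scores, window=5):
    recent = session_scores[-window:]
    m = mean(recent)
    spread = stdev(recent) if len(recent) > 1 else 0.0
    # Lower bound: roughly maintain the current level; upper bound: a small stretch.
    return (round(m, 1), round(m + max(spread, 0.05 * m), 1))

print(suggest_target_range([42, 45, 44, 47, 46, 48]))  # e.g. (46.0, 48.3)
```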
{"title":"Supporting self-control of individual training for motor-skill development with a social web environment","authors":"Kenji Matsuura, Hiroki Moriguchi, K. Kanenishi","doi":"10.1504/IJKWI.2013.056372","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056372","url":null,"abstract":"This study proposes a system to support self-controlled motor-skill development in a web-community environment. We discuss the difficulties involved in sustaining self-controlled training without systematic supports. The system proposed provides a function to suggest an appropriate range of target goals based on data from previous training sessions. A prototype system has been designed and developed. This study reports a case study based on a trial use of the system. The results suggest that our approach contributes to each user's ability to achieve target goals.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127875226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architecture specification of rule-based deep web crawler with indexer
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056366
S. Shaila, A. Vadivel
A suitable architecture specification of a deep web crawler, together with a surface web crawler and an indexer, is proposed for fetching a large number of documents from the deep web using rules. The functional dependencies between core and allied fields in a FORM are identified to generate rules with an SVM classifier, which classifies candidate values as most preferable, least preferable or mutually exclusive. FORMs are filled with values from the most preferable class in order to fetch a large number of documents, and the extracted documents are indexed for information retrieval applications. The architecture is also extended to a distributed crawler using web services. The proposed crawler fetches a large number of documents when using values from the most preferable class, achieves a higher coverage rate and reduces fetching time. The retrieval performance is encouraging, with retrieval precision similar to that of the Google search engine.
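A minimal sketch of the rule-generation idea, assuming an SVM over simple hand-crafted features of candidate FORM field values; the features, labels and selection helper are illustrative placeholders rather than the paper's implementation.

```python
# Illustrative sketch: an SVM classifies candidate FORM field values into
# preference classes, and the crawler fills forms using only the values
# classified as "most preferable". Features and labels are placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Hypothetical training data: simple features describing a (field, value) pair.
training_features = [
    {"field": "category", "value_freq": 0.9, "is_default": 1},
    {"field": "category", "value_freq": 0.1, "is_default": 0},
    {"field": "location", "value_freq": 0.02, "is_default": 0},
]
training_labels = ["most_preferable", "least_preferable", "mutually_exclusive"]

clf = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
clf.fit(training_features, training_labels)

def select_form_values(candidates):
    """Keep only candidate (field, value) pairs classified as most preferable."""
    return [c for c in candidates
            if clf.predict([c["features"]])[0] == "most_preferable"]
```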
{"title":"Architecture specification of rule-based deep web crawler with indexer","authors":"S. Shaila, A. Vadivel","doi":"10.1504/IJKWI.2013.056366","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056366","url":null,"abstract":"Suitable architecture specification of a deep web crawler with surface web crawler as well as indexer is proposed for fetching large number of documents from deep web using rules. The functional dependency of core and allied fields in the FORM are identified for generating rules using SVM classifier and classifies them as most preferable, least preferable and mutually exclusive. The FORMs are filled with values from most preferable class for fetching large number of documents. The extracted document is indexed for information retrieval applications. The architecture is extended to distributed crawler using web services. The proposed crawler fetches large number of documents while using the values in most preferable class. This architecture has higher coverage rate and reduces fetching time. The retrieval performance is encouraging and achieves similar precision of retrieval as Google search engine system.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114318266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generation of web recommendations using implicit user feedback and normalised mutual information
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056362
V. S. Dixit, Punam Bedi, Harita Mehta
The knowledge base of a traditional web recommender system is constructed from web logs, reflecting past user preferences that may change over time. In this paper, an algorithm based on implicit user feedback on the top-N recommendations and normalised mutual information is proposed for a collaborative personalised web recommender system. The proposed algorithm updates the knowledge base to take changing user preferences into account, in order to generate better recommendations in the future. The proposed approach is compared with collaborative personalised web recommender systems without feedback, and significant improvements are observed in precision, recall and F1 measure.
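For readers unfamiliar with the measure, the sketch below shows how normalised mutual information between two groupings of the same users can be computed; how the paper combines this score with implicit feedback to update the knowledge base is not detailed in the abstract, so the pairing shown here is only an assumption.

```python
# Sketch of computing normalised mutual information (NMI) between two
# groupings -- e.g. users grouped by past web-log behaviour versus by their
# implicit feedback on top-N recommendations. The choice of what to compare
# is an assumption for illustration.
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical cluster assignments for the same set of users.
clusters_from_web_logs = [0, 0, 1, 1, 2, 2, 2]
clusters_from_feedback = [0, 0, 1, 2, 2, 2, 2]

nmi = normalized_mutual_info_score(clusters_from_web_logs, clusters_from_feedback)
print(f"NMI = {nmi:.3f}")  # 1.0 would mean the two groupings agree perfectly
```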
{"title":"Generation of web recommendations using implicit user feedback and normalised mutual information","authors":"V. S. Dixit, Punam Bedi, Harita Mehta","doi":"10.1504/IJKWI.2013.056362","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056362","url":null,"abstract":"The knowledge base of a traditional web recommender system is constructed from web logs, reflecting past user preferences which may change over time. In this paper, an algorithm, based on implicit user feedback on top N recommendations and normalised mutual information, is proposed for collaborative personalised web recommender system. The proposed algorithm updates the knowledge base taking into account the changing user preferences, in order to generate better recommendations in future. The proposed approach and collaborative personalised web recommender systems without feedback are compared. Significant improvements are observed in precision, recall and F1 measure for proposed approach.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114522399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-cognitive skill training programme for first-year bachelor students using thinking process externalisation environment
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056373
Kazuhisa Seta, Liang Cui, M. Ikeda, Noriyuki Matsuda, M. Okamoto
We believe that acquiring the skill of co-creating knowledge cooperatively with others requires developing meta-cognitive skill, but doing so is not straightforward. In this paper, we attempt to design a thinking-skill curriculum, focused in particular on meta-cognitive skill development, for first-year bachelor-degree students, on the basis of results obtained in two preceding studies we performed for postgraduate education and for those engaged in medical services. To deal effectively with a kind of learning that is new to first-year bachelor students - 'thinking about thinking' - we designed and put into practice a curriculum that provides students with a knowledge co-creation programme and a thinking externalisation tool, so that they can devote attention to their thinking processes and gain a bodily sense of what those processes mean. This paper describes a learning model for thinking skill, which is fundamental to appropriate curriculum design, and discusses the design intention of a curriculum conforming to it, as well as the usefulness of the learning programme, through examples of using the thinking externalisation tool in practice. Results show that the learning programme developed in this study is useful for cultivating the meta-cognitive skill of bachelor students.
{"title":"Meta-cognitive skill training programme for first-year bachelor students using thinking process externalisation environment","authors":"Kazuhisa Seta, Liang Cui, M. Ikeda, Noriyuki Matsuda, M. Okamoto","doi":"10.1504/IJKWI.2013.056373","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056373","url":null,"abstract":"We think that to acquire a skill for co-creating knowledge with others cooperatively, it is important to develop meta-cognitive skill, but to do so is not straightforward. In this paper, we attempt to design a thinking skill particularly meta-cognitive skill development curriculum for first-year bachelor-degree students on the basis of the results obtained in two preceding studies that we have performed for postgraduate education and those engaged in medical services. To deal effectively with new learning for first-year bachelor students - 'thinking about thinking' - we designed a curriculum that gives students a knowledge co-creation programme and thinking externalisation tool to devote attention to the thinking process and to have the bodily sensation of its meaning, and we put it into practice. This paper describes a learning model for thinking skill, which is fundamental to appropriate curriculum design, and discusses the design intention of a curriculum conforming to it and the usefulness of the learning programme through examples of using the thinking externalisation tool in practice. Results show that the learning programme developed in this study is useful for cultivating the meta-cognitive skill of bachelor students.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117134084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards effective course-based recommendations for public tenders
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056374
F. Durão, M. Caraciolo, Bruno J. M. Melo, S. Meira
In this paper, we propose a recommendation model to assist users in finding relevant courses for public tenders. The recommendations are computed from user study activity at Atepassar.com, a web-based learning environment for public tender candidates. Unlike traditional academic-oriented recommender systems, our approach takes into account information that is crucial for public tender candidates, such as the salary offered by a tender and the location where the exams take place. Technically, our recommendations rely on content-based techniques and a location reasoning method in order to provide users with the most feasible courses. Results on a real-world dataset indicate a reasonable improvement in recommendation quality over the compared baseline models - we observed about an 11% improvement in precision and a 12.7% gain in recall over the best compared model - demonstrating the potential of our approach for recommending personalised courses.
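A hedged sketch of a scoring function in the spirit of the abstract, combining content similarity with salary and exam-location signals; the weights, features and field names are assumptions for illustration, not the model from the paper.

```python
# Sketch of content-based course scoring that also weighs tender salary and
# exam location. Feature construction, weights and location handling are
# assumptions, not the paper's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_courses(user_profile_text, courses, user_state,
                  w_content=0.7, w_salary=0.2, w_location=0.1):
    """Rank courses by text similarity plus salary and location bonuses."""
    vec = TfidfVectorizer()
    docs = [user_profile_text] + [c["description"] for c in courses]
    tfidf = vec.fit_transform(docs)
    content_sim = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

    max_salary = max(c["salary"] for c in courses) or 1.0
    scores = []
    for sim, c in zip(content_sim, courses):
        salary_score = c["salary"] / max_salary
        location_score = 1.0 if c["exam_state"] == user_state else 0.0
        scores.append(w_content * sim + w_salary * salary_score
                      + w_location * location_score)
    return sorted(zip(scores, courses), key=lambda x: x[0], reverse=True)
```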
{"title":"Towards effective course-based recommendations for public tenders","authors":"F. Durão, M. Caraciolo, Bruno J. M. Melo, S. Meira","doi":"10.1504/IJKWI.2013.056374","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056374","url":null,"abstract":"In this paper, we propose a recommendation model to assist users find relevant courses for public tenders. The recommendations are computed based on the user study activity at Atepassar.com, a web-based learning environment for public tender candidates. Unlike traditional academic-oriented recommender systems, our approach takes into account crucial information for public tender candidates such as salary offered by public tenders and location where the exams take place. Technically, our recommendations rely on content-based techniques and a location reasoning method in order to provide users with most feasible courses. Results from a real-world dataset indicate reasonable improvement in recommendation quality over compared baseline models - we observed about 11 precision improvement and 12.7% of recall gain over the best model compared - demonstrating the potential of our approach in recommending personalised courses.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115638387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalised web search using ACO with information scent
Pub Date: 2013-09-01 | DOI: 10.1504/IJKWI.2013.056370
S. Chawla
In this paper, information scent is used as the pheromone in ACO to optimise personalised web search based on clustered user search query sessions. We observe that a user searching for information on the web by following the information scent of clicked URLs is analogous to ants searching for food by following pheromone in ACO. User clicks on the personalised search results are used to dynamically increase or decrease the information scent of the corresponding clicked URLs in the stored query sessions and their associated clusters. This regular updating of information scent optimises the set of relevant clicked URLs associated with the clusters and hence personalises the user's search effectively. An experimental study was conducted on web user query sessions to test the effectiveness of the proposed approach, and the results show an effective improvement in the precision of the search results.
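The abstract suggests a pheromone-like reinforcement and decay cycle on clicked URLs; the sketch below illustrates that idea with assumed deposit and evaporation constants, not the paper's exact update rule.

```python
# Illustrative pheromone-style update of information scent: scent on clicked
# URLs is reinforced, while scent on all URLs gradually evaporates. The
# constants and the evaporation rule are assumptions for illustration.
def update_information_scent(scent, clicked_urls, deposit=1.0, evaporation=0.1):
    """scent: dict mapping URL -> current information scent within a cluster."""
    # Evaporation: every URL's scent decays a little each cycle.
    for url in scent:
        scent[url] *= (1.0 - evaporation)
    # Reinforcement: clicked URLs receive an additional scent deposit.
    for url in clicked_urls:
        scent[url] = scent.get(url, 0.0) + deposit
    return scent

scent = {"example.org/a": 2.0, "example.org/b": 1.5}
update_information_scent(scent, clicked_urls=["example.org/a"])
print(scent)  # example.org/a reinforced, example.org/b decayed
```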
{"title":"Personalised web search using ACO with information scent","authors":"S. Chawla","doi":"10.1504/IJKWI.2013.056370","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.056370","url":null,"abstract":"In this paper the information scent is used as pheromone in ACO for optimising the personalised web search based on clustered user search query sessions. It is realised that user's search for information on the web using information scent of the clicked URLs is analogous to that of ants searching for food using the pheromone in ACO. The user clicks to the personalised search results is used to update increase/decrease the information scent of the corresponding clicked URLs of the stored query sessions and the associated clusters dynamically. This regular updation of information scent optimises the relevant set of clicked URLs associated with the clusters and hence personalises the user's search effectively. An experimental study was conducted on the web user query sessions to test the effectiveness of the proposed approach and results shows the effective improvement in the precision of search results.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130265555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How radiation and its effect were explained?: Scientific communication after the Fukushima Daiichi nuclear disaster
Pub Date: 2013-04-01 | DOI: 10.1504/IJKWI.2013.060276
Kazuhisa Todayama, K. Karasawa
In this paper we examine eight popular books published immediately after the Fukushima Daiichi nuclear disaster. Our aim is to clarify the characteristics of, and problems in, the scientific communication related to the health effects of radiation. The eight books are compared on the following aspects: (1) how units such as Bq, Gy and Sv are defined; (2) how dose limits are explained; (3) how deterministic and stochastic effects of radiation are differentiated; (4) how the LNT model is explained and evaluated; and (5) how the fact that we evolved in the midst of natural background radiation is treated. The main finding of our survey is that although the authors of the examined texts start from the same 'scientific facts', in trying to make these facts easily understandable they adopt different rhetorical strategies and eventually end up delivering quite different and conflicting messages to the public.
{"title":"How radiation and its effect were explained?: Scientific communication after the Fukushima Daiichi nuclear disaster","authors":"Kazuhisa Todayama, K. Karasawa","doi":"10.1504/IJKWI.2013.060276","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.060276","url":null,"abstract":"In this paper we examine eight popular books published immediately after the Fukushima Daiichi nuclear disaster. Our aim is to clarify the characteristics, problems in the scientific communication related to the health effects of radiation. The eight books are compared from the aspects of: 1 how units such as Bq, Gy and Sv are defined; 2 how the dose limits are explained; 3 how deterministic and stochastic effects of radiation are differentiated; 4 how LNT model is explained and evaluated; 5 how the fact that we evolved in the midst of natural background radiation is treated. The main finding of our survey is that although the authors of the examined texts start from the same 'scientific facts', in trying to make these facts easily understandable, they adopt different rhetorical strategies and eventually they end up delivering quite different and conflicting messages to the people.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115489041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inferring relevant blocks on hyperlinked web page based on block-to-block similarity
Pub Date: 2013-04-01 | DOI: 10.1504/IJKWI.2013.060266
K. Tsukamoto, Y. Koizumi, H. Ohsaki, K. Hato, J. Murayama
Internet users devote considerable time and effort to collecting information from the web. To do so efficiently, after following a hyperlink a user must be able to rapidly determine whether the desired information is contained on the destination web page. In this paper, we therefore propose a method called hyperlink referring block estimation (HERB), which infers the existence and location of relevant content on destination web pages. HERB utilises the user's context in web browsing, in particular the selected hyperlink and the text around it. Through experiments simulating ordinary web browsing, we quantitatively investigate the effectiveness of HERB. Our experiments show that HERB can infer blocks relevant to a hyperlink with approximately 65% precision and 70% recall. Furthermore, we design two HERB implementations, namely a web proxy and a web browser, and we present an overview of a web proxy prototype and an example use case.
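A minimal sketch of the block-to-block similarity idea, assuming TF-IDF cosine similarity between the text around the selected hyperlink and each candidate block on the destination page; HERB's actual features, scoring and threshold may differ.

```python
# Sketch of the core idea: compare the text around the selected hyperlink with
# each candidate block on the destination page and keep the most similar
# blocks. TF-IDF/cosine and the threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_destination_blocks(anchor_context, destination_blocks, threshold=0.2):
    """Return (similarity, block) pairs above a threshold, most similar first."""
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform([anchor_context] + destination_blocks)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    ranked = sorted(zip(sims, destination_blocks), reverse=True)
    return [(s, b) for s, b in ranked if s >= threshold]
```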
{"title":"Inferring relevant blocks on hyperlinked web page based on block-to-block similarity","authors":"K. Tsukamoto, Y. Koizumi, H. Ohsaki, K. Hato, J. Murayama","doi":"10.1504/IJKWI.2013.060266","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.060266","url":null,"abstract":"Internet users devote considerable time and effort to collecting information from the web. To do so efficiently, after following a hyperlink, a user must be able to rapidly determine whether the desired information is contained on the destination web page. In this paper, therefore, we propose a method called hyperlink referring block estimation HERB, which infers the existence and location of relevant contents on destination web pages. HERB utilises user context in web browsing, in particular, the selected hyperlink and the text around it. Through experiments simulating ordinary web browsing, we quantitatively investigate the effectiveness of HERB. Our experiments show that HERB can infer blocks relevant to a hyperlink with approximately 65% precision and 70% recall. Furthermore, we design two HERB implementations, namely, a web proxy and a web browser, and we present an overview of a web proxy prototype and an example use case.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125886472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximately processing aggregate range queries on remote spatial databases
Pub Date: 2013-04-01 | DOI: 10.1504/IJKWI.2013.060275
H. Sato, Ryoichi Narita
Processing aggregate range queries on remote spatial databases suffers from having to access huge databases and/or a large number of databases that operate autonomously, through simple and/or restrictive web API interfaces. To overcome these difficulties, this paper applies a revised version of the regular polygon-based search algorithm (RPSA) to approximately compute aggregate range query results over remote spatial databases. The algorithm issues a series of k-NN queries to obtain the aggregate range query results; the query point of each subsequent k-NN query is chosen from among the vertices of a regular polygon inscribed in a previously searched circle. Experimental results show that precision is over 0.97 for sum range query results, with NOR at most 4.3, while precision is over 0.87 for maximum range query results, with NOR at most 4.9.
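The geometric step described in the abstract, choosing the next k-NN query point from the vertices of a regular polygon inscribed in a previously searched circle, can be sketched as follows; the polygon size and example coordinates are illustrative assumptions, not RPSA's actual selection rule.

```python
# Geometric sketch: candidate query points for the next k-NN request are the
# vertices of a regular polygon inscribed in a circle already covered by a
# previous k-NN query. Polygon size and coordinates are assumptions.
import math

def inscribed_polygon_vertices(cx, cy, radius, n_vertices=6, phase=0.0):
    """Vertices of a regular n-gon inscribed in the circle centred at (cx, cy)."""
    return [(cx + radius * math.cos(phase + 2 * math.pi * i / n_vertices),
             cy + radius * math.sin(phase + 2 * math.pi * i / n_vertices))
            for i in range(n_vertices)]

# Example: candidate next query points around a previously searched circle.
print(inscribed_polygon_vertices(35.17, 136.88, 0.01))
```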
{"title":"Approximately processing aggregate range queries on remote spatial databases","authors":"H. Sato, Ryoichi Narita","doi":"10.1504/IJKWI.2013.060275","DOIUrl":"https://doi.org/10.1504/IJKWI.2013.060275","url":null,"abstract":"Processing aggregate range queries on remote spatial databases suffers from accessing huge and/or large number of databases that operate autonomously and simple and/or restrictive web API interfaces. To overcome these difficulties, this paper applies a revised version of regular polygon-based search algorithm RPSA to approximately search aggregate range query results over remote spatial databases. The algorithm requests a series of k-NN queries to obtain aggregate range query results. The query point of a subsequent k-NN query is chosen from among the vertices of a regular polygon inscribed in a previously searched circle. Experimental results show that precision is over 0.97 with regard to sum range query results and NOR is at most 4.3. On the other hand, precision is over 0.87 with regard to maximum range query results and NOR is at most 4.9.","PeriodicalId":113936,"journal":{"name":"Int. J. Knowl. Web Intell.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115988836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}