Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.54
Shunsuke Aoki, M. Iwai, K. Sezaki
Community sensing is an emerging paradigm that allows the growing number of mobile phone users to effectively share fine-grained statistical information collected by their devices. Such systems rely on participants' active contribution, including data entered intentionally through mobile applications such as Facebook, Twitter, and LinkedIn. However, privacy concerns may hinder the spread of community sensing applications, and resource-constrained mobile phones cannot rely on computationally expensive encryption schemes; a privacy-preserving community sensing scheme with low computational complexity is therefore needed. Moreover, an environment that reassures participants is strongly required, because the quality of the aggregated statistics depends on general users' active contribution. In this article, we propose a privacy-preserving community sensing scheme for human-centric data, such as profile information, that combines negative surveys with randomized response techniques. Using our method, the server can reconstruct the probability distributions of the original sensed values without violating users' privacy. In particular, sensitive information is protected against malicious tracking attacks. We evaluated how well the scheme preserves privacy while maintaining the integrity of the aggregated information.
Title: Privacy-Aware Community Sensing Using Randomized Response
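The randomized response half of the scheme above can be illustrated with a minimal sketch. This is not the authors' implementation: the truth probability p, the uniform-lie model, and all names are assumptions for illustration. Each participant reports the true category with probability p and otherwise a uniformly chosen different category; since the perturbation is known, the server can invert it to estimate the original distribution without learning any individual's true value.

```python
import random
from collections import Counter

def randomize(true_value, categories, p):
    """Report the true category with probability p; otherwise report a
    uniformly chosen *different* category (classic randomized response)."""
    if random.random() < p:
        return true_value
    return random.choice([c for c in categories if c != true_value])

def reconstruct(reports, categories, p):
    """Invert the perturbation: with k categories, the observed frequency is
    o_i = p*pi_i + (1 - p)*(1 - pi_i)/(k - 1); solve for pi_i."""
    k, n = len(categories), len(reports)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {c: (counts[c] / n - q) / (p - q) for c in categories}
```

With p = 0.7 and a few thousand reports, the reconstructed distribution is close to the true one, while no single report reveals its sender's value with certainty.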
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.63
P. Wrzeciono, W. Karwowski
This paper presents an automatic indexing system based on text analysis, which involves grouping words and reducing them to their dictionary form. The system, developed with the help of an inflection dictionary of the Polish language, is designed to store and retrieve scientific papers on agriculture. During the analysis, auxiliary words such as pronouns and conjunctions were omitted. Words not present in the inflection dictionary were used to create a dictionary of new terms, from which agricultural terms were extracted and located in the AGROVOC thesaurus. For each analyzed paper, a set of concepts with assigned weights was created, and an "artificial sentence" was generated from the frequency of the dictionary forms of words appearing in the text and their grammatical categories. This "artificial sentence", together with the sets of terms, was used to find relationships between the papers stored in the system; these dependencies drive the algorithm that searches for articles matching a query. The number of correct results was observed to depend on the length of the paper: for works of at least a thousand words, the probability of misclassifying the content was no higher than 5%, whereas for short texts such as abstracts it was much higher, approximately 23%. Results obtained with the presented system are more accurate than those obtained by standard search engines. The method can also be applied to other natural languages with rich inflection systems. The presented solution continues work carried out under grant [N N310 038538].
Title: Automatic Indexing and Creating Semantic Networks for Agricultural Science Papers in the Polish Language
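The pipeline described above can be sketched in miniature. The dictionary entries, stop words, and weighting below are invented for illustration; a real Polish inflection dictionary is vastly larger, and the paper's weighting also uses grammatical categories, which this toy omits.

```python
# Toy inflection dictionary (inflected form -> lemma) and stop-word list;
# both are illustrative stand-ins for the real Polish-language resources.
INFLECTION = {"gleby": "gleba", "glebie": "gleba",
              "uprawy": "uprawa", "uprawach": "uprawa"}
STOP_WORDS = {"i", "w", "na", "z", "oraz"}

def index_paper(text):
    """Reduce words to dictionary form, skip auxiliary words, and split the
    rest into weighted known concepts and a dictionary of new terms."""
    concepts, new_terms = {}, {}
    for word in text.lower().split():
        if word in STOP_WORDS:
            continue
        if word in INFLECTION:
            lemma = INFLECTION[word]
            concepts[lemma] = concepts.get(lemma, 0) + 1
        else:
            new_terms[word] = new_terms.get(word, 0) + 1
    total = sum(concepts.values()) or 1
    weights = {c: n / total for c, n in concepts.items()}
    return weights, new_terms
```

Words outside the inflection dictionary land in the new-terms dictionary, mirroring how the system feeds candidate agricultural terms to the AGROVOC lookup.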
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.119
Nao Maeda, H. Miwa
Large-volume content distributed by content delivery networks (CDNs) on the Internet increases the load on delivery servers and networks, which may degrade quality of service. To maintain high service quality, a CDN places mirror servers providing the same content across the network and directs each request to one of them. The network must preserve connectivity between a user and the servers, with small distance, even during link failures. In this paper, we address a network design method that protects critical links whose failures would significantly degrade performance. The objective is to find the smallest set of links to protect so that a user can still reach a server with only a small increase in distance even if non-protected links fail. First, we formulate this problem and prove that it is NP-hard. Second, we present a polynomial-time algorithm for the case where at most one link fails at a time. Furthermore, we apply the algorithm to actual ISP network topologies and show the relationship between the number of protected links and the parameter values.
Title: Method for Keeping Small Distance from Users to Servers during Failures by Link Protection
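For the single-failure case the paper treats as tractable, the links needing protection can be found directly: a link is critical if its removal pushes some user's distance to the nearest server beyond the allowed slack. The brute-force sketch below is illustrative only (unweighted links, BFS distances); it is not the paper's algorithm, and the graph and slack parameter are assumptions.

```python
from collections import deque

def dists(adj, sources, removed=frozenset()):
    """Multi-source BFS distance to the nearest server, skipping removed links."""
    d = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if (u, v) in removed or (v, u) in removed or v in d:
                continue
            d[v] = d[u] + 1
            q.append(v)
    return d

def critical_links(adj, servers, users, slack):
    """Links whose single failure raises some user's server distance by more than slack."""
    base = dists(adj, servers)
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    crit = set()
    for e in edges:
        u, v = tuple(e)
        d = dists(adj, servers, removed={(u, v)})
        if any(d.get(x, float("inf")) > base[x] + slack for x in users):
            crit.add(e)
    return crit
```

On a triangle with one server and two users, both server-adjacent links are critical at slack 0, and none are at slack 1, matching the trade-off the paper evaluates between protection cost and allowed distance increase.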
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.34
Akihito Nakamura
Continuous and comprehensive vulnerability management is a difficult task for administrators. The difficulty stems not from a lack of tools, but from tools designed without a service-oriented architecture viewpoint and from insufficient trustworthy machine-readable input data. This paper presents a service-oriented architecture for vulnerability assessment systems based on open security standards and their associated content. When these functions are provided as services, diverse security applications can interoperate and be integrated in a loosely coupled way. We also studied the effectiveness of the available public data for automated vulnerability assessment. Despite the large amount of effort that goes into describing machine-readable assessment tests conforming to the OVAL standard, our evaluation shows the data are inadequate for comprehensive vulnerability assessment: only about 12% of all known vulnerabilities are covered by existing OVAL tests, although some popular client applications among the 30 with the most unique vulnerabilities have coverage above 90%.
Title: Towards Unified Vulnerability Assessment with Open Data
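The coverage figures above amount to set intersections between known CVE identifiers and those with OVAL tests. A minimal sketch of that measurement (the CVE identifiers and product names in the test are fabricated examples, and this is not the paper's tooling):

```python
def coverage_by_product(vulns, oval_tested):
    """vulns: product -> set of CVE ids affecting it; oval_tested: set of CVE
    ids that at least one OVAL definition tests for. Returns per-product coverage."""
    return {p: (len(cves & oval_tested) / len(cves) if cves else 0.0)
            for p, cves in vulns.items()}

def overall_coverage(vulns, oval_tested):
    """Fraction of all known CVEs covered by an OVAL test (the paper's ~12% figure)."""
    all_cves = set().union(*vulns.values()) if vulns else set()
    return len(all_cves & oval_tested) / len(all_cves) if all_cves else 0.0
```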
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.49
Behnam Rahnama, Makbule Canan Ozdemir, Y. Kiran, Atilla Elçi
This research presents the design and implementation of a shortest path algorithm for labyrinth discovery in a multi-agent environment. The robot agents are unaware of the maze at the beginning; they learn it as they explore. Each agent solves part of the maze and updates a shared memory, so the robots benefit from each other's discoveries. Once an agent finds the destination cell, the others can connect their discovered paths to the one ending at that cell. The proposed shortest path algorithm weighs not only coordinate distance but also the number of turns and moves required to traverse a path. The algorithm is compared against several existing maze-solving algorithms, including Flood-Fill, Modified Flood-Fill, and ALCKEF. It can also be layered on top of these methods to speed up second and subsequent runs.
Title: Design and Implementation of a Novel Weighted Shortest Path Algorithm for Maze Solving Robots
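A cost that counts turns as well as moves can be captured by running Dijkstra over (cell, heading) states rather than cells alone, so a change of heading can be charged separately. The sketch below is our illustration of that idea, not the paper's algorithm; the move and turn weights are arbitrary assumptions.

```python
import heapq

DIRS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def path_cost(cells, start, goal, move_cost=1, turn_cost=2):
    """Dijkstra over (cell, heading) states: moving straight costs move_cost,
    changing heading adds turn_cost. All initial headings are free."""
    pq = [(0, start, h) for h in DIRS]
    heapq.heapify(pq)
    seen = set()
    while pq:
        cost, pos, heading = heapq.heappop(pq)
        if (pos, heading) in seen:
            continue
        seen.add((pos, heading))
        if pos == goal:
            return cost
        for h, (dr, dc) in DIRS.items():
            nxt = (pos[0] + dr, pos[1] + dc)
            if nxt in cells and (nxt, h) not in seen:
                step = move_cost + (0 if h == heading else turn_cost)
                heapq.heappush(pq, (cost + step, nxt, h))
    return None
```

With turn_cost > 0, a straighter path can beat a geometrically shorter one with many turns, which is the behavior the weighted algorithm above exploits on a physical robot.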
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.92
S. Zheng, Hongji Yang
Computing paradigms are becoming ubiquitous and computing tasks increasingly complex: the computing environment has expanded from one dimension to three as tasks have shifted from scientific computing to ubiquitous computing. By the same principle, software evolution should also be approached three-dimensionally. We therefore propose a three-dimensional evolution approach that addresses the relationships among software functions, software qualities, and software models. Experiments were carried out as a proof of concept, yielding a number of concluding remarks for this study.
Title: A Three-Dimensional Approach to Evolving Software
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.124
Amina Souag, C. Salinesi, I. Comyn-Wattiau, H. Mouratidis
Recent research has argued for the importance of considering security during the Requirements Engineering (RE) stage, and the literature emphasizes the value of ontologies in facilitating requirements elicitation. Ontologies are rich sources of knowledge and, being structured and equipped with reasoning features, form a powerful tool for handling requirements. Because security is a multi-faceted problem, we believe a single security ontology is not enough to guide Security Requirements Engineering (SRE) efficiently: security ontologies focus only on technical, domain-independent aspects of security, so domain knowledge is needed as well. Our question is: how can security ontologies and domain ontologies be combined to guide requirements elicitation efficiently and effectively? We propose a method that exploits both types of ontologies dynamically through a collection of heuristic production rules, and we show that combining security ontologies with domain ontologies guides elicitation more effectively than relying on security ontologies alone. This paper presents the method and reports a preliminary evaluation conducted through critical analysis by experts. The evaluation shows that the method strikes a good balance between genericity with respect to the ontologies (which need not be selected in advance) and specificity of the elicited requirements with respect to the domain at hand.
Title: Using Security and Domain Ontologies for Security Requirements Analysis
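The flavor of a heuristic production rule joining the two ontologies can be sketched as follows. The ontology fragments, concept names, and rule shape are entirely invented for illustration; the paper's rules operate on richer ontology structures than these flat dictionaries.

```python
# Toy ontology fragments: the security ontology maps asset *types* to threats
# and countermeasures; the domain ontology types concrete assets. Both are
# fabricated examples, not the ontologies used in the paper.
SECURITY_ONTOLOGY = {
    "personal_data": {"threat": "disclosure", "countermeasure": "encryption"},
    "service": {"threat": "denial_of_service", "countermeasure": "redundancy"},
}
DOMAIN_ONTOLOGY = {
    "patient_record": "personal_data",
    "appointment_portal": "service",
}

def elicit_requirements():
    """Fire one production rule: for each domain asset whose type appears in
    the security ontology, emit a requirement naming the countermeasure."""
    reqs = []
    for asset, kind in DOMAIN_ONTOLOGY.items():
        entry = SECURITY_ONTOLOGY.get(kind)
        if entry:
            reqs.append(f"Protect {asset} against {entry['threat']} "
                        f"using {entry['countermeasure']}.")
    return reqs
```

The domain ontology supplies the concrete assets, the security ontology the threats and countermeasures, so the elicited requirements are both domain-specific and security-grounded, as the method intends.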
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.87
Chun-Hui Tsai, Hung-Mao Chu, Pi-Chung Wang
Packet classification is an enabling technique for the future Internet: it sorts incoming packets into forwarding classes to fulfill different service requirements, and IP routers need it to provide network security and differentiated services. Recursive Flow Classification (RFC) is a notable high-speed packet classification scheme, but it may consume large amounts of memory for its pre-computed cross-product tables. In this paper, we propose a new scheme that reduces memory consumption by partitioning a rule database into several subsets and storing the rules of each subset in an independent RFC data structure, significantly lowering overall memory consumption. We also present several refinements of these RFC data structures that significantly improve search speed. The experimental results show that our scheme dramatically improves the storage performance of RFC.
Title: Packet Classification Using Multi-iteration RFC
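The core of RFC, replacing a linear scan over rules with lookups in pre-computed equivalence-class and cross-product tables, can be sketched for two toy 4-bit fields with first-match-wins semantics. This shows only the baseline RFC idea whose cross-product tables grow large; the paper's multi-iteration rule partitioning is not shown, and the rule format is an assumption.

```python
def build_rfc(rules, domain=16):
    """rules: list of ((lo0, hi0), (lo1, hi1)) ranges, priority by list order.
    Phase 0: map each field value to an equivalence class (the set of rules it
    matches). Phase 1: cross-product table of class-id pairs -> best rule."""
    eq, class_sets = [], []
    for f in range(2):
        table, cls = [], {}
        for v in range(domain):
            key = frozenset(i for i, r in enumerate(rules)
                            if r[f][0] <= v <= r[f][1])
            table.append(cls.setdefault(key, len(cls)))
        eq.append(table)
        class_sets.append({cid: key for key, cid in cls.items()})
    cross = {}
    for c0, s0 in class_sets[0].items():
        for c1, s1 in class_sets[1].items():
            match = s0 & s1
            cross[(c0, c1)] = min(match) if match else None
    return eq, cross

def classify(pkt, eq, cross):
    """Two table lookups replace a linear scan over the rule list."""
    return cross[(eq[0][pkt[0]], eq[1][pkt[1]])]
```

The cross table has one entry per pair of equivalence classes; partitioning the rules into subsets, as the paper proposes, keeps the class counts (and hence this product) small at the cost of one lookup per subset.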
We propose an indexing method called UBI-Tree to improve the efficiency of a new type of data search called schema-less search: a multi-dimensional range search over a wide variety of data, such as sensor data collected through participatory sensing. Because participants use various devices, such data differ in the types and number of their dimensions, so applications must search for their target data across schemas. UBI-Tree is a tree-structured index based on the R-Tree. Its insert algorithm classifies heterogeneous data into nodes according to newly introduced scores that estimate the inefficiency of a classification; the score uniformly captures both differences in the dimension sets of data and differences in dimension values. By grouping data with similar dimension sets into the same node, UBI-Tree suppresses the curse of dimensionality and makes schema-less searches efficient. The validity of UBI-Tree was evaluated through experiments.
Title: UBI-Tree: Indexing Method for Schema-Less Search
Yutaka Arakawa, Takayuki Nakamura, Motonori Nakamura, Hajime Matsumura
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.58
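The insertion score can be illustrated with a toy penalty that combines the two differences the abstract names: the mismatch between an entry's dimension set and a node's, and the bounding-range growth the entry would cause. The exact formula below is our assumption, not the paper's score.

```python
def insertion_score(node_dims, node_ranges, entry):
    """node_dims: set of dimension names summarized by a node.
    node_ranges: dim -> (lo, hi) bounding range. entry: dim -> value.
    Lower is better: schema mismatch and range enlargement both penalize."""
    mismatch = len(set(entry) ^ node_dims)      # dimensions not shared
    growth = 0.0
    for d, v in entry.items():
        if d in node_ranges:
            lo, hi = node_ranges[d]
            growth += max(0, lo - v) + max(0, v - hi)
    return mismatch + growth
```

Choosing the child with the lowest score steers entries with similar dimension sets into the same node, which is how UBI-Tree keeps schema-less range queries from touching every subtree.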
Pub Date : 2013-07-22 DOI: 10.1109/COMPSACW.2013.132
Tingda Lu, Mi Lin, Chih-Ming Chen, Jhih-Hao Wu
To effectively reduce learners' anxiety while reading English articles, a C4.5 decision tree, a widely used data mining technique, was used to develop a personalized reading anxiety prediction model (PRAPM) based on individual learners' annotation behavior in a collaborative digital reading annotation system. Besides immediately forecasting a learner's reading anxiety level, the proposed PRAPM can identify the key factors causing that anxiety from the prediction rules fired in the decision tree. Understanding these factors lets instructors apply reading strategies that reduce anxiety and thereby promote English reading performance. To assess whether the PRAPM can help instructors reduce reading anxiety, this study used a quasi-experimental design comparing the learning performance of three groups, each supported by the collaborative digital reading annotation system with a different anxiety-reduction mechanism: while all groups performed the same English reading activity, the control group used individual annotations, experimental group A used cooperative annotations, and experimental group B used cooperative annotations plus instructor support informed by the PRAPM. Experimental results indicate that the PRAPM identified learners' reading anxiety levels with an average correct prediction rate as high as 70%. The online instructor, applying reading-assistance strategies based on the anxiety factors mined by the PRAPM, significantly reduced the reading anxiety of male learners in experimental group B, showing both that a gender difference existed and that the instructor's interaction with these male learners indeed helped reduce their anxiety.
Title: Forecasting Reading Anxiety to Promote Reading Performance Based on Annotation Behavior
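C4.5's split criterion, which would drive the tree construction described above, is the gain ratio: information gain on an attribute normalized by the split's own entropy. A self-contained sketch (the annotation-behavior attributes and labels in the example are fabricated; this is the textbook criterion, not the authors' feature set):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5 split criterion: information gain on attr divided by split info."""
    n = len(rows)
    partitions = {}
    for row, y in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(y)
    cond = sum(len(s) / n * entropy(s) for s in partitions.values())
    split_info = -sum(len(s) / n * math.log2(len(s) / n)
                      for s in partitions.values())
    gain = entropy(labels) - cond
    return gain / split_info if split_info else 0.0
```

An attribute that cleanly separates high- from low-anxiety learners gets a high gain ratio and is chosen as a split, and the path of such splits fired for a learner is exactly the "fired prediction rule" the PRAPM uses to explain that learner's anxiety.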