Elliptic curve cryptography (ECC) provides solid potential for wireless sensor network security due to its small key size and its high security strength. However, there is a need to reduce key calculation time to satisfy the full range of potential applications, in particular those involving wireless sensor networks (WSNs). The scalar multiplication operation in elliptic curve cryptography accounts for 80% of key calculation time on wireless sensor network motes. In this paper, two major contributions are made: (a) we propose an algorithm based on 1’s complement subtraction to represent the scalar in scalar multiplication, which yields a lower Hamming weight and significantly improves the computational efficiency of scalar multiplication; and (b) we present a fuzzy controller for dynamic window sizing that allows the program to run under optimum conditions by allocating the available RAM and ROM at the sensor node within a wireless sensor network. The simulation results showed that the average calculation time decreased by approximately 15% in comparison to traditional algorithms in an ECC wireless sensor network.
{"title":"Scalar Multiplication of a Dynamic Window with Fuzzy Controller for Elliptic Curve Cryptography","authors":"Xu Huang, John Campbell, Frank Gao","doi":"10.1109/NSS.2010.16","DOIUrl":"https://doi.org/10.1109/NSS.2010.16","url":null,"abstract":"Elliptic curve cryptography (ECC) provides solid potential for wireless sensor network security due to its small key size and its high security strength. However, there is a need to reduce key calculation time to satisfy the full range of potential applications, in particular those involving wireless sensor networks (WSNs). The scalar multiplication operation in elliptic curve cryptography accounts for 80% of key calculation time on wireless sensor network motes. In this paper, two major contributions are made: (a) we propose an algorithm based on 1’s complement subtraction to represent the scalar in scalar multiplication, which yields a lower Hamming weight and significantly improves the computational efficiency of scalar multiplication; and (b) we present a fuzzy controller for dynamic window sizing that allows the program to run under optimum conditions by allocating the available RAM and ROM at the sensor node within a wireless sensor network. The simulation results showed that the average calculation time decreased by approximately 15% in comparison to traditional algorithms in an ECC wireless sensor network.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127220850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
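The recoding idea in contribution (a) can be illustrated with a short sketch. This is not the authors' full algorithm, only the core identity behind 1's complement subtraction: an n-bit scalar k equals (2^n - 1) minus its bitwise complement, so kP can be computed as (2^n - 1)P - k̄P, and the complement form is worth using whenever it has fewer 1-bits (i.e. fewer point additions in double-and-add) than k itself. All names here are illustrative:

```python
def hamming_weight(k):
    # number of 1-bits, i.e. point additions needed by double-and-add
    return bin(k).count("1")

def ones_complement_recode(k, n):
    """Recode an n-bit scalar via one's-complement subtraction:
    k = (2**n - 1) - k_bar, hence k*P = (2**n - 1)*P - k_bar*P.
    Worthwhile when k has more 1-bits than 0-bits."""
    k_bar = (1 << n) - 1 - k                  # bitwise complement of k in n bits
    if hamming_weight(k_bar) < hamming_weight(k):
        return ("subtract", k_bar)            # compute (2^n - 1)P - k_bar*P
    return ("direct", k)                      # plain double-and-add is already cheap

form, val = ones_complement_recode(0b11101101, 8)
# 0b11101101 has weight 6, but its complement 0b00010010 has weight 2,
# so the recoded form needs far fewer point additions.
```

The fuzzy window-sizing controller of contribution (b) would sit on top of this, choosing how many recoded digits to process per window based on available RAM/ROM.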
This paper presents a novel method for solving the face gender recognition problem. The method employs 2D Principal Component Analysis, one of the prominent methods for extracting feature vectors, and Support Vector Machine, a powerful discriminative method for classification. Experiments for the proposed approach have been conducted on the FERET data set, and the results show that the proposed method improves classification rates.
{"title":"Face Gender Recognition Based on 2D Principal Component Analysis and Support Vector Machine","authors":"L. Bui, D. Tran, Xu Huang, G. Chetty","doi":"10.1109/NSS.2010.19","DOIUrl":"https://doi.org/10.1109/NSS.2010.19","url":null,"abstract":"This paper presents a novel method for solving face gender recognition problem. This method employs 2D Principal Component Analysis, one of the prominent methods for extracting feature vectors, and Support Vector Machine, the most powerful discriminative method for classification. Experiments for the proposed approach have been conducted on FERET data set and the results show that the proposed method could improve the classification rates.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124947606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
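As a rough illustration of the feature-extraction half of such a pipeline (assuming the standard 2DPCA formulation, not code from this paper), the image scatter matrix is built directly from the 2D images and each image is projected onto its top eigenvectors; the resulting feature matrices would then be fed to an SVM classifier:

```python
import numpy as np

def twod_pca_projection(images, d):
    """2DPCA sketch: images has shape (M, m, n).
    Returns the top-d projection axes (n, d) and the projected
    feature matrices of shape (M, m, d)."""
    A_bar = images.mean(axis=0)                      # mean image, (m, n)
    centered = images - A_bar
    # image scatter matrix G = (1/M) * sum_i (A_i - A_bar)^T (A_i - A_bar)
    G = np.einsum("imn,imk->nk", centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)             # ascending eigenvalue order
    X = eigvecs[:, ::-1][:, :d]                      # top-d axes, (n, d)
    return X, images @ X                             # features Y_i = A_i * X

rng = np.random.default_rng(0)
faces = rng.random((20, 32, 32))                     # 20 toy "images"
X, Y = twod_pca_projection(faces, d=5)
# Y has shape (20, 32, 5): one m x d feature matrix per image.
```

Unlike classical PCA, no image is flattened into a vector, which keeps the scatter matrix small (n × n rather than mn × mn).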
A special membership proof technique is applied to the efficiency bottleneck of homomorphic e-voting: vote validity checking. Although the technique has limitations, such that few appropriate applications have been found for it so far, it is well suited to homomorphic e-voting. As no efficient and secure solution for vote validity checking in homomorphic e-voting has been found so far, this new method is very useful: it greatly improves the efficiency of homomorphic e-voting.
{"title":"Efficient Proof of Validity of Votes in Homomorphic E-Voting","authors":"Kun Peng, F. Bao","doi":"10.1109/NSS.2010.25","DOIUrl":"https://doi.org/10.1109/NSS.2010.25","url":null,"abstract":"A special membership proof technique is applied to the efficiency bottleneck of homomorphic e-voting, vote validity check. Although the special membership proof technique has some limitations such that so far few appropriate applications have been found for it, it is suitable for homomorphic e-voting. As so far no efficient and secure solution has been found for vote validity check in homomorphic e-voting, this new method is very useful. It greatly improves efficiency of homomorphic e-voting.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116537666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sheila Becker, H. Abdelnur, Jorge Lucángeli Obes, R. State, O. Festor
We propose a game-theoretical model for fuzz testing, which consists of generating unexpected input to search for software vulnerabilities. As of today, no performance guarantees or assessment frameworks for fuzzing exist. Our paper addresses these issues and describes a simple model that can be used to assess and identify optimal fuzzing strategies by leveraging game theory. In this context, payoff functions are obtained using tainted data analysis and instrumentation of a target application to assess the impact of different fuzzing strategies.
{"title":"Improving Fuzz Testing Using Game Theory","authors":"Sheila Becker, H. Abdelnur, Jorge Lucángeli Obes, R. State, O. Festor","doi":"10.1109/NSS.2010.81","DOIUrl":"https://doi.org/10.1109/NSS.2010.81","url":null,"abstract":"We propose a game-theoretical model for fuzz testing, which consists of generating unexpected input to search for software vulnerabilities. As of today, no performance guarantees or assessment frameworks for fuzzing exist. Our paper addresses these issues and describes a simple model that can be used to assess and identify optimal fuzzing strategies by leveraging game theory. In this context, payoff functions are obtained using tainted data analysis and instrumentation of a target application to assess the impact of different fuzzing strategies.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122709045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
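A toy example of the game-theoretic selection step (with hypothetical strategy and payoff names; the paper derives payoffs from taint analysis and instrumentation, and also covers mixed strategies, omitted here):

```python
def best_fuzzing_strategy(payoff):
    """Pick the maximin pure strategy: against an unknown target state,
    choose the fuzzing strategy whose worst-case payoff is largest.
    payoff[strategy][target_state] is a score derived from
    instrumentation (e.g. tainted sinks reached)."""
    return max(payoff, key=lambda s: min(payoff[s].values()))

# hypothetical payoff matrix for two strategies against two target states
payoffs = {
    "mutate_length": {"parser": 3, "state_machine": 1},
    "mutate_syntax": {"parser": 2, "state_machine": 2},
}
choice = best_fuzzing_strategy(payoffs)   # "mutate_syntax": worst case 2 beats 1
```

The maximin rule is the standard pure-strategy solution concept for such a matrix game; a mixed-strategy solution would randomize over strategies instead.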
Discovery of interesting rules describing the behavioural patterns of smokers’ quitting intentions is an important task in the determination of an effective tobacco control strategy. In this paper, we investigate a compact and simplified rule discovery process for predicting smokers’ quitting behaviour that can provide feedback to build a scientific, evidence-based adaptive tobacco control policy. Standard decision tree (SDT) based rule discovery depends on decision boundaries in the feature space that are orthogonal to the axis of the feature of a particular decision node. This may limit the ability of SDTs to learn intermediate concepts for high-dimensional large datasets such as tobacco control. In this paper, we propose a cluster-based rule discovery model (CRDM) for the generation of more compact and simplified rules for the enhancement of tobacco control policy. The cluster-based approach builds conceptual groups from which a set of decision trees (a decision forest) is constructed. Experimental results on the tobacco control data set show that decision rules from the decision forest constructed by CRDM are simpler and can predict smokers’ quitting intention more accurately than a single decision tree.
{"title":"Cluster Based Rule Discovery Model for Enhancement of Government's Tobacco Control Strategy","authors":"Md. Shamsul Huda, J. Yearwood, R. Borland","doi":"10.1109/NSS.2010.14","DOIUrl":"https://doi.org/10.1109/NSS.2010.14","url":null,"abstract":"Discovery of interesting rules describing the behavioural patterns of smokers’ quitting intentions is an important task in the determination of an effective tobacco control strategy. In this paper, we investigate a compact and simplified rule discovery process for predicting smokers’ quitting behaviour that can provide feedback to build a scientific, evidence-based adaptive tobacco control policy. Standard decision tree (SDT) based rule discovery depends on decision boundaries in the feature space that are orthogonal to the axis of the feature of a particular decision node. This may limit the ability of SDTs to learn intermediate concepts for high-dimensional large datasets such as tobacco control. In this paper, we propose a cluster-based rule discovery model (CRDM) for the generation of more compact and simplified rules for the enhancement of tobacco control policy. The cluster-based approach builds conceptual groups from which a set of decision trees (a decision forest) is constructed. Experimental results on the tobacco control data set show that decision rules from the decision forest constructed by CRDM are simpler and can predict smokers’ quitting intention more accurately than a single decision tree.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130368617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we develop a robust data cleaning technique called PC-Filter+ (PC stands for partition comparison), based on its predecessor, for effective and efficient duplicate record detection in large databases. PC-Filter+ provides more flexible algorithmic options for constructing the Partition Comparison Graph (PCG). In addition, PC-Filter+ is able to deal with duplicate detection under different memory constraints.
{"title":"An Efficient and Effective Duplication Detection Method in Large Database Applications","authors":"Ji Zhang","doi":"10.1109/NSS.2010.78","DOIUrl":"https://doi.org/10.1109/NSS.2010.78","url":null,"abstract":"In this paper, we developed a robust data cleaning technique, called PC-Filter+ (PC stands for partition comparison) based on its predecessor, for effective and efficient duplicate record detection in large databases. PC-Filter+ provides more flexible algorithmic options for constructing the Partition Comparison Graph (PCG). In addition, PC-Filter+ is able to deal with duplicate detection under different memory constraints.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"16 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132797520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
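The partition idea can be sketched as follows. This is a hypothetical, much-simplified stand-in for PC-Filter+ (no Partition Comparison Graph), showing only the sort-then-partition trick that avoids an all-pairs comparison, with `difflib` as an arbitrary choice of similarity measure:

```python
from difflib import SequenceMatcher

def detect_duplicates(records, partition_size=3, threshold=0.85):
    """Toy partition-based duplicate detection: sort the records so
    near-duplicates land close together, then compare each record
    only against the next `partition_size` records instead of
    against the whole table."""
    ordered = sorted(records)
    dup_pairs = []
    for i, a in enumerate(ordered):
        # compare only within a sliding window of one partition ahead
        for b in ordered[i + 1 : i + 1 + partition_size]:
            if SequenceMatcher(None, a, b).ratio() >= threshold:
                dup_pairs.append((a, b))
    return dup_pairs

rows = ["john smith, sydney", "jon smith, sydney", "mary jones, perth"]
dups = detect_duplicates(rows)   # catches the john/jon near-duplicate
```

The real method additionally compares selected records across distant partitions (via the PCG) so that duplicates separated by the sort order are not missed, and adapts the partition size to the available memory.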
Typical protocols for password-based authentication assume a single server that stores all the passwords necessary to authenticate users. If the server is compromised, user passwords are disclosed. To address this issue, Yang et al. proposed a practical password-based two-server authentication and key exchange protocol, in which a front-end server, keeping one share of a password, and a back-end server, holding another share of the password, cooperate in authenticating a user while establishing a secret key with the user. In this paper, we present two ``half-online and half-offline'' attacks on Yang et al.'s protocol. Under these attacks, user passwords can be determined once the back-end server is compromised. Therefore, Yang et al.'s protocol has no essential difference from a password-based single-server authentication protocol.
{"title":"Security Analysis of Yang et al.'s Practical Password-Based Two-Server Authentication and Key Exchange System","authors":"X. Yi","doi":"10.1109/NSS.2010.97","DOIUrl":"https://doi.org/10.1109/NSS.2010.97","url":null,"abstract":"Typical protocols for password-based authentication assumes a single server which stores all the passwords necessary to authenticate users. If the server is compromised, user passwords are disclosed. To address this issue, Yang et al. proposed a practical password-based two-server authentication and key exchange protocol, where a front-end server, keeping one share of a password, and a back-end server, holding another share of the password, cooperate in authenticating a user and, meanwhile, establishing a secret key with the user. In this paper, we present two ``half-online and half-offline'' attacks to Yang et al.'s protocol. By these attacks, user passwords can be determined once the back-end server is compromised. Therefore, Yang et al.'s protocol has no essential difference from a password-based single-server authentication protocol.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114850788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Semantic Web is gaining popularity as a candidate for the next-generation World Wide Web. The distribution of data across a number of physical information stores and the proliferation of Semantic Web data bring a variety of non-trivial challenges. One such challenge is identifying the information stores relevant to a given query. This paper presents a framework that addresses this problem probabilistically and by exchanging summaries of actual contents. Experimental evaluation shows promising results, with high recall for the probabilistic approach and lower response time for pre-processed summary exchanges.
{"title":"Resource Selection from Distributed Semantic Web Stores","authors":"A. A. Iqbal, M. Ott, A. Seneviratne","doi":"10.1109/NSS.2010.71","DOIUrl":"https://doi.org/10.1109/NSS.2010.71","url":null,"abstract":"Semantic web is gaining popularity as the candidate for next generation World Wide Web. Distribution of the data across number of physical information stores and proliferation of semantic web data brings variety of non-trivial challenges. One of such challenge is to identify information stores for a given query. This paper presents a framework to address this problem probabilistically and by exchanging the summaries of actual contents. Experimental evaluation shows promising results with high recall for probabilistic approach and lower response time for pre-processed summary exchanges.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114548479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper demonstrates how to mitigate insider threat in relational databases. Basically, it shows how the execution of the same operations in different orders poses different levels of threat. The model presented in this paper organizes accesses to data items in a sequence such that the expected threat is minimized to the lowest level. In addition, it increases the availability of data items. That is, instead of preventing insiders from accessing some data items because of possible threat, the proposed approach reorganizes insiders’ independent requests so that they can access those data items when it is determined that there is little or no threat.
{"title":"Organizing Access Privileges: Maximizing the Availability and Mitigating the Threat of Insiders' Knowledgebase","authors":"Qussai M. Yaseen, B. Panda","doi":"10.1109/NSS.2010.74","DOIUrl":"https://doi.org/10.1109/NSS.2010.74","url":null,"abstract":"This paper demonstrates how to mitigate insider threat in relational databases. Basically, it shows how the execution of the same operations in different orders poses different levels of threat. The model presented in this paper organizes accesses to data items in some sequence so that the expected threat is minimized to the lowest level. In addition, it increases the availability of data items. That is, instead of preventing insiders from getting access to some data items because of possible threat, the proposed approach reorganizes insiders’ independent requests so that they can access those data when it is determined that there is little or no threat .","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116367670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
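One way to picture the reordering idea is a greedy toy scheduler. This is an invented heuristic for illustration only, not the paper's model: a grant's threat grows when it completes a dangerous combination with items the insider already holds, so the same set of requests costs more or less depending on the order in which it is granted:

```python
def schedule_requests(requests, combos):
    """Greedily grant the pending request with the lowest marginal
    threat.  `requests` maps item -> base risk; `combos` is a set of
    (item, item) pairs that become dangerous once the insider holds
    both.  Returns the grant order and the cumulative threat."""
    granted, order, total = set(), [], 0
    pending = dict(requests)
    while pending:
        def marginal(item):
            # base risk, amplified for each dangerous combination the
            # grant would complete with already-granted items
            penalty = sum(1 for g in granted
                          if (item, g) in combos or (g, item) in combos)
            return pending[item] * (1 + penalty)
        item = min(pending, key=marginal)
        total += marginal(item)
        granted.add(item)
        order.append(item)
        del pending[item]
    return order, total

requests = {"salary": 2, "name": 1, "ssn": 3}   # item -> base risk (made up)
risky_pairs = {("name", "ssn")}                 # identity-theft combination
order, threat = schedule_requests(requests, risky_pairs)
```

A greedy schedule like this is not necessarily optimal; the paper's model instead minimizes the expected threat over the whole access sequence.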
Several authors have asserted that conceptual query languages (CQLs) perform better for querying purposes than logical query languages such as SQL. This paper proposes a query mapping algorithm for the FConQuer system. FConQuer is a framework based on object-role modeling (ORM) schemas that allows end-users to formulate conceptual queries through the FConQuer language. Our mapping algorithm allows the FConQuer system to process conceptual queries based on ORM schemas. More precisely, our algorithm maps FConQuer queries to OQL.
{"title":"A Query Processing Strategy for Conceptual Queries Based on Object-Role Modeling","authors":"António Rosado, João MP Cardoso","doi":"10.1109/NSS.2010.85","DOIUrl":"https://doi.org/10.1109/NSS.2010.85","url":null,"abstract":"There have been several authors asserting that conceptual query languages (CQLs) perform better for querying purposes than logical query languages such as SQL. This paper proposes a query mapping algorithm for the FConQuer system. FConQuer is a framework based on object-role modeling (ORM) schemas, which allow the end-user to formulate conceptual queries through the FConQuer language. Our mapping algorithm allows the FConQuer system to process conceptual queries based on ORM schemas. More precisely, our algorithm maps FConQuer queries to OQL.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123460186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}