Passwords are the first line of defense for many computerized systems. The quality of these passwords determines the security strength of these systems. Many studies advocate using password entropy as an indicator of password quality, where lower entropy suggests a weaker or less secure password. However, a closer examination of this literature shows that password entropy is very loosely defined. In this paper, we first discuss the calculation of password entropy and explain why it is an inadequate indicator of password quality. We then establish a password quality assessment scheme: the password quality indicator (PQI). The PQI of a password is a pair (D, L), where D is the Levenshtein edit distance of the password relative to a dictionary of words and common mnemonics, and L is the effective password length. Finally, we propose to use the PQI to prescribe the characteristics of good quality passwords.
Wanli Ma, John Campbell, D. Tran, Dale Kleeman, "Password Entropy and Password Quality," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.18
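The (D, L) pair described in the abstract can be sketched in code. The tiny dictionary, the character-class weighting used as an effective-length proxy, and all function names below are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of a PQI-like (D, L) quality pair for a password.
import math

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pqi(password: str, dictionary: list[str]) -> tuple[int, float]:
    # D: distance to the closest dictionary word (larger = harder to guess
    # by dictionary attack).
    d = min(levenshtein(password.lower(), w) for w in dictionary)
    # L: a simple effective-length proxy -- the length weighted by the size
    # of the character classes actually used (an assumption, not the paper's
    # formula; 94 is the printable-ASCII pool used for normalization).
    pool = 0
    if any(c.islower() for c in password): pool += 26
    if any(c.isupper() for c in password): pool += 26
    if any(c.isdigit() for c in password): pool += 10
    if any(not c.isalnum() for c in password): pool += 32
    l_eff = len(password) * math.log2(pool) / math.log2(94) if pool else 0.0
    return d, l_eff

words = ["password", "letmein", "dragon"]
print(pqi("password1", words))   # near-dictionary password: D is small (1)
print(pqi("tR!9#kQz", words))    # far from the dictionary: D is larger
```

The point of the pair is visible in the two calls: "password1" has a respectable length but a distance of only 1 from a dictionary word, which is exactly the weakness a pure entropy estimate misses.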
The rapid growth in the number of online services leads to an increasing number of different digital identities that each user needs to manage. As a result, many people feel overloaded with credentials, which in turn negatively impacts their ability to manage them securely. Passwords are perhaps the most common type of credential used today. To avoid the tedious task of remembering difficult passwords, users often behave less securely by using weak, low-entropy passwords. Weak passwords and bad password habits represent security threats to online services. Some solutions have been developed to eliminate the need for users to create and manage passwords. A typical solution is based on giving the user a hardware token that generates one-time passwords (OTPs), i.e., passwords for single-session or single-transaction use. Unfortunately, most of these solutions do not satisfy scalability and/or usability requirements, or they are simply insecure. In this paper, we propose a scalable OTP solution that uses mobile phones, is based on trusted computing technology, and combines enhanced usability with strong security.
Mohammed Al Zomai, A. Jøsang, "The Mobile Phone as a Multi OTP Device Using Trusted Computing," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.39
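For context, the kind of one-time password such a token or phone generates can be illustrated with the standard HOTP construction from RFC 4226. This sketch shows only OTP generation; it says nothing about the paper's trusted-computing design.

```python
# Minimal HOTP generator (RFC 4226): HMAC-SHA1 over a counter, followed by
# dynamic truncation to a short decimal code.
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each counter value yields a fresh password; the verifier advances its own
# counter, so a replayed code is rejected.
print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224
```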
This paper investigates distributed denial of service (DDoS) attacks using non-address-spoofing flood (NASF) over mobile ad hoc networks (MANETs). Detection features based on statistical analysis of IDS log files and flow rate information are proposed. Detection of NASF attacks is evaluated using three metrics: detection ratio, detection time and false detection rate. The proposed framework thus addresses important issues in forensic science by identifying what attack occurred and when. Different NASF attack patterns with different levels of network throughput degradation are simulated and examined in this paper.
Yinghua Guo, Ivan Lee, "Forensic Analysis of DoS Attack Traffic in MANET," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.48
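The three evaluation metrics named in the abstract can be computed from a labelled alarm stream; the event representation and function name below are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch: detection ratio, detection time and false detection rate,
# computed over hypothetical IDS alarms labelled as true or false detections.

def evaluate(events, attack_start, num_attacks):
    # events: list of (timestamp, is_true_detection) alarms raised by the IDS.
    true_hits = [t for t, ok in events if ok]
    # Detection ratio: fraction of the known attacks that were detected.
    detection_ratio = min(len(true_hits), num_attacks) / num_attacks
    # Detection time: delay from attack onset to the first true alarm.
    detection_time = (min(true_hits) - attack_start) if true_hits else None
    # False detection rate: share of alarms that were false positives.
    false_rate = sum(1 for _, ok in events if not ok) / len(events)
    return detection_ratio, detection_time, false_rate

alarms = [(12.0, False), (15.5, True), (16.1, True)]
ratio, delay, fdr = evaluate(alarms, attack_start=14.0, num_attacks=2)
print(ratio, delay, fdr)   # → 1.0 1.5 0.3333333333333333
```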
Users' anonymity and privacy are among the major concerns of today's Internet. Anonymizing networks are thus poised to become an important service supporting anonymous Internet communications and consequently enhancing users' privacy protection. Indeed, Tor, an anonymizing network based on the onion-routing concept, attracts more and more volunteers and is now popular among tens of thousands of Internet users. Surprisingly, very little research has shed light on this anonymizing network. Beyond providing global statistics on the typical usage of Tor in the wild, we show that Tor is actually being mis-used, as most of the observed traffic belongs to P2P applications. In particular, we quantify the BitTorrent traffic and show that its load on the Tor network is underestimated because of encrypted BitTorrent traffic (which can go unnoticed). Furthermore, this paper provides a deep analysis of both the HTTP and BitTorrent protocols, giving a complete overview of their usage. We not only report such usage in terms of traffic size and number of connections but also depict how users behave on top of Tor. We also show that Tor usage is now diverted from the onion-routing concept and that Tor exit nodes are frequently used as 1-hop SOCKS proxies, through a so-called tunneling technique. We provide an efficient method allowing an exit node to detect such abnormal usage. Finally, we report our experience in effectively crawling bridge nodes, which are supposed to be revealed only sparingly in Tor.
Chaabane Abdelberi, Pere Manils, M. Kâafar, "Digging into Anonymous Traffic: A Deep Analysis of the Tor Anonymizing Network," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.47
In this paper, we propose novel algorithmic models, based on feature transformation in a cross-modal subspace and multimodal fusion, for different types of residue features extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences, for detecting digital video tampering or forgery. An evaluation of the proposed residue features (the noise residue features and the quantization features), their transformation in the cross-modal subspace, and their multimodal fusion on an emulated copy-move tamper scenario shows a significant improvement in tamper detection accuracy compared to single-mode features without transformation in the cross-modal subspace.
G. Chetty, M. Biswas, Rashmi Singh, "Digital Video Tamper Detection Based on Multimodal Fusion of Residue Features," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.8
Flooding-based Distributed Denial of Service (DDoS) attacks present a serious threat to the stability of the Internet. Identifying the attacks rapidly and accurately is significant for the efficient operation of Internet applications and services. Recent observations indicate a significant increase in cyber attacks on U.S. military information systems in 2009. Current technologies are still unable to withstand large-scale DDoS attacks. Single-point detection and response is a first step towards defeating such distributed attacks; distributed global defense systems, using a coordinated effort, go much further towards thwarting them. In this paper, we propose a distributed defense infrastructure to detect DDoS attacks globally using a cooperative overlay network and a gossip-based information exchange protocol. Our NS2-based simulation results show that the proposed solution can detect attacks with a detection rate as high as 0.99 with a false alarm rate below 0.01. This compares favorably against other widely known methods, including change-point detection, TTL analysis and wavelet analysis.
Thaneswaran Velauthapillai, A. Harwood, S. Karunasekera, "Global Detection of Flooding-Based DDoS Attacks Using a Cooperative Overlay Network," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.68
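The gossip-based information exchange can be illustrated with a generic pairwise-averaging round (not the paper's actual protocol): each node repeatedly averages its local anomaly score with a random peer, so all nodes converge towards the global mean and can apply a common detection threshold.

```python
# Illustrative gossip sketch: nodes hold local traffic-anomaly scores and
# converge to the global average by repeated pairwise averaging.
import random

def gossip_round(scores):
    # One push-pull round: random disjoint node pairs average their values.
    # Pairwise averaging preserves the global mean while shrinking variance.
    ids = list(scores)
    random.shuffle(ids)
    for a, b in zip(ids[::2], ids[1::2]):
        avg = (scores[a] + scores[b]) / 2
        scores[a] = scores[b] = avg

random.seed(1)
# Hypothetical per-node anomaly scores: four nodes see flood traffic, two don't.
scores = dict(enumerate([0.9, 0.8, 0.1, 0.95, 0.85, 0.2]))
for _ in range(20):
    gossip_round(scores)

# Every node now holds roughly the global mean score (3.8 / 6 ≈ 0.633) and
# can compare it against a shared detection threshold.
print(sorted(round(v, 3) for v in scores.values()))
```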
This paper proposes a partition-based uncertain high-dimensional indexing algorithm, called the PU-Tree. In the PU-Tree, all n data objects are first grouped into clusters by a k-means clustering algorithm. Each object's corresponding uncertain sphere is then partitioned into several slices in terms of the zero-distance. Finally, a unified key for each data object is computed by a multi-attribute encoding scheme, and the keys are inserted into a B+-tree. Thus, given a query object, its probabilistic range search in the high-dimensional space is transformed into a search in a single-dimensional space with the aid of the PU-Tree. Extensive performance studies are conducted to evaluate the effectiveness and efficiency of the proposed scheme.
Yi Zhuang, "The PU-Tree: A Partition-Based Uncertain High-Dimensional Indexing Algorithm," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.60
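The core reduction (cluster membership plus radial slice collapsed into one scalar key that a B+-tree can index) can be sketched as follows. The encoding below is an illustrative assumption, not the paper's multi-attribute scheme, and all names are hypothetical.

```python
# Hedged sketch of the "unified key" idea: map each high-dimensional object
# to one scalar built from its nearest-cluster id and its radial slice, so a
# 1-D index (e.g. a B+-tree) can serve range probes.
import math

def unified_key(point, centers, num_slices=8, radius=1.0):
    # Assign to the nearest k-means centre.
    dists = [math.dist(point, c) for c in centers]
    cid = dists.index(min(dists))
    # Partition the sphere around the centre into radial slices.
    slice_idx = min(int(dists[cid] / radius * num_slices), num_slices - 1)
    # Encode: cluster id in the high part, slice next, and the residual
    # distance as a fractional tie-breaker (< one slice width).
    return cid * num_slices + slice_idx + dists[cid] % (radius / num_slices)

centers = [(0.0, 0.0), (10.0, 10.0)]
print(unified_key((0.5, 0.5), centers))   # key in cluster 0's range [0, 8)
print(unified_key((9.0, 9.5), centers))   # key in cluster 1's range [8, 16)
```

Because keys from different clusters occupy disjoint numeric ranges, a probabilistic range probe only has to scan a contiguous key interval per candidate cluster.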
With the growing importance of privacy in data access, much research has been done on privacy-protecting technology in recent years. Developing an access control model, and related mechanisms, to support selective access to data has become important. The extensible markup language (XML) is rapidly emerging as the new standard for semi-structured data representation and exchange on the Internet, and more and more information is now distributed in XML format. In this article, we present a comprehensive approach to privacy-preserving access control based on the notion of purpose. In our model, purpose information associated with a given data element in an XML document specifies the intended use of that element. An important issue addressed in this article is the granularity of data labeling for data elements in XML documents and tree databases with which purposes can be associated. We address this issue for XML databases and propose different labeling schemes for XML documents. We also propose an approach to representing purpose information to support access control based on it. Our proposed solution relies on usage access control (UAC) models as well as components based on the notions of purpose information used in subjects and objects. Finally, we compare our approach with related work.
Lili Sun, Hua Wang, Raj Jururajin, S. Sriprakash, "A Purpose Based Access Control in XML Databases System," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.28
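A purpose check of the kind described can be sketched with a small purpose hierarchy: access is granted only when the access purpose equals, or specialises, one of the intended purposes attached to the data element. The purpose names and functions below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of purpose-based access control over a purpose tree.

PURPOSE_TREE = {                      # child purpose -> parent purpose
    "marketing": "admin", "analysis": "admin",
    "direct-email": "marketing", "third-party": "marketing",
}

def specialises(purpose, ancestor):
    # Walk up the tree: a purpose specialises itself and every ancestor.
    while purpose is not None:
        if purpose == ancestor:
            return True
        purpose = PURPOSE_TREE.get(purpose)
    return False

def allowed(access_purpose, intended_purposes):
    # Grant access iff the stated access purpose falls under any purpose
    # labelled on the XML element.
    return any(specialises(access_purpose, p) for p in intended_purposes)

print(allowed("direct-email", {"marketing"}))   # True: child of marketing
print(allowed("analysis", {"marketing"}))       # False: different branch
```

In a full system the intended-purpose sets would come from the element labels produced by whichever labeling scheme is in use, which is where the granularity question discussed in the abstract matters.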
In a participatory sensing system, members of the community contribute information to be shared by everybody. However, few would be willing to contribute voluntarily if their privacy were not protected, which has motivated research on preserving privacy in participatory sensing systems. On the other hand, data integrity is imperative to make the service trustworthy and user-friendly. In this paper, we investigate the performance of a greedy algorithm and its randomized variant in achieving an acceptable tradeoff between these two orthogonal key parameters. We also analyze the ability of a third-party adversary to decode privacy-sensitive data by eavesdropping. Our experimental results show that the proposed method performs satisfactorily as an approach to balancing user privacy and data integrity.
M. Murshed, Tishna Sabrina, Anindya Iqbal, Kh Mahmudul Alam, "A Novel Anonymization Technique to Trade Off Location Privacy and Data Integrity in Participatory Sensing Systems," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.73
Limited information security budgets in organizations make it necessary to prioritize effectively among security requirements. The goal is to make the most of the available budget and to achieve a balanced overall security level, thereby maximizing the investment outcome. Many existing information security risk assessment approaches identify and assess risks to critical assets; such asset-driven approaches are limited in that it is hard to keep track of dependencies between assets and to produce realistic estimates of their value to an organization. We present a new security risk assessment approach focusing on business goals rather than assets, and on the processes supporting or contributing to these goals. Risks are identified and evaluated at the business process level and aggregated over all such processes depending on their criticality, role and importance for the organization as a whole. We illustrate our approach using examples from the banking industry, and discuss how it deals with some of the ambiguities involved in expert-intensive, asset-driven information security risk assessment.
Kobra Khanmohammadi, S. Houmb, "Business Process-Based Information Security Risk Assessment," 2010 Fourth International Conference on Network and System Security, Sept. 2010. doi:10.1109/NSS.2010.37
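The aggregation step (rolling per-process risks up into an organisation-wide figure weighted by process criticality) can be sketched as below. The weighting scheme, the example figures and the banking process names are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: aggregate business-process risks, weighted by criticality.

def aggregate_risk(processes):
    # processes: list of (risk_level, criticality), criticality in (0, 1].
    # Criticality acts as the weight, so risk in a vital process (e.g.
    # payment clearing) dominates risk in a peripheral one.
    total_weight = sum(c for _, c in processes)
    return sum(r * c for r, c in processes) / total_weight

banking = [
    (0.7, 1.0),   # payment clearing: high risk, maximal criticality
    (0.4, 0.6),   # customer onboarding
    (0.2, 0.3),   # internal reporting
]
print(round(aggregate_risk(banking), 3))   # → 0.526
```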