There are two common spectrum sharing modes for cognitive radio (CR) networks: underlay and overlay. By combining them, we propose the hybrid spectrum sharing (HSS) model, in which transmission power is adapted to the occupancy state of each subcarrier and to the interference constraints; cognitive users can thus dynamically hand off between the overlay and underlay modes. We derive the optimal power allocation algorithm for the HSS system under the objective of maximizing system capacity. Simulation results show that the proposed HSS system significantly increases capacity.
{"title":"Capacity of Hybrid Spectrum Sharing System and Analysis Model in Cognitive Radio","authors":"Xiaorong Zhu, Yong Wang","doi":"10.1109/CyberC.2012.87","DOIUrl":"https://doi.org/10.1109/CyberC.2012.87","url":null,"abstract":"There are two common spectrum sharing modes for cognitive radio (CR) networks: underlay and overlay. By a combination them, we propose here the hybrid spectrum sharing (HSS) model, where the transmission power is changed according to the states of the occupied sub carrier and interference constraints and thus, cognitive users can dynamically handoff between the overlay mode and the underlay mode. The optimal power allocation algorithm for this HSS system is derived under the condition of maximizing system capacity. Simulation results show that capacity can be significantly increased by the proposed HSS system.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123953243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
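The paper derives its optimal allocation under its own interference model; as a rough, hypothetical sketch of the general idea, the routine below water-fills power across subcarriers, capping underlay subcarriers (occupied by a primary user) at an interference limit while leaving overlay subcarriers (idle) unconstrained. All names and parameters are illustrative, not taken from the paper.

```python
import math

def allocate_power(gains, occupied, p_total, p_interference, noise=1.0):
    """Toy hybrid overlay/underlay power allocation (illustrative only).

    gains          -- channel power gain of each subcarrier
    occupied       -- True if a primary user occupies the subcarrier
                      (underlay: per-subcarrier power capped), False if
                      the subcarrier is idle (overlay: no extra cap)
    p_total        -- total power budget of the cognitive user
    p_interference -- per-subcarrier cap enforced in underlay mode
    Returns (powers, capacity) from capped water-filling.
    """
    caps = [p_interference if occ else p_total for occ in occupied]
    # Bisect on the water level mu until the capped allocation
    # exhausts the total power budget.
    lo, hi = 0.0, p_total + max(noise / g for g in gains)
    for _ in range(100):
        mu = (lo + hi) / 2
        powers = [min(max(mu - noise / g, 0.0), c) for g, c in zip(gains, caps)]
        if sum(powers) > p_total:
            hi = mu
        else:
            lo = mu
    capacity = sum(math.log2(1 + g * p / noise) for g, p in zip(gains, powers))
    return powers, capacity
```

With two subcarriers, the second occupied and capped at 0.5, the surplus power flows to the unconstrained overlay subcarrier.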
Babar Iqbal, Asif Iqbal, Mário A. M. Guimarães, Kashif Khan, Hanan Al Obaidli
With mobile computing the defining trend of this technology era, it is clear that our way of life, and how we deal with the objects in it, is changing. The swift shift from large desktop computers to inexpensive, low-power devices that are easily carried in our pockets or placed next to a cup of coffee on the living room table has changed the way we interact with media and contact friends, colleagues and family members. It has also driven advances in digital forensics: for every device that comes to market, studies investigate the possible evidence that can be found on it. The very mobility that makes these devices convenient also means they accumulate a wealth of information about their users, which makes them a valuable source of evidence in an investigation. In this paper we discuss one such device, the Amazon Kindle Fire. As a new player in the mobile computing sector, it has not yet received sufficient study in the field of digital forensics. We describe an imaging process for acquiring data from the device, and then provide an analysis of those data and their possible sources of evidence.
{"title":"Amazon Kindle Fire from a Digital Forensics Perspective","authors":"Babar Iqbal, Asif Iqbal, Mário A. M. Guimarães, Kashif Khan, Hanan Al Obaidli","doi":"10.1109/CyberC.2012.61","DOIUrl":"https://doi.org/10.1109/CyberC.2012.61","url":null,"abstract":"With the move toward mobile computing being the trend of this technology era it is clear that our way of life and how we deal with objects in it is changing. This swift shift from large desktop computers to inexpensive, low power applications that are easily carried in our pockets or placed next to a cup of coffee on the living room table clearly changed the way we interact with media and contact friends, colleagues and family members. This also created advancement in the field of digital forensics as with every device coming to the market, studies have been conducted to investigate the possible evidence that can be found on them. As we realize that with the comfort these devices do provide as a result of their mobility they are also providing a wealth of information about the users themselves for the same reason, hence they are really valuable source of evidence in an investigation. In this paper we will discuss one of these mobile devices which is Amazon kindle Fire. Being a new player in the mobile computing sector there haven't been enough studies of it in the field of digital forensics regarding it. 
In this paper we will discuss an imaging process to acquire the data from the device then we will provide an analysis of these data and their possible sources of evidence.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129869386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
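The device-specific acquisition procedure is the paper's subject; as a generic, hedged illustration of one supporting step any such workflow relies on, the snippet below streams an acquired disk image and computes its SHA-256 digest, which an examiner records before and after analysis to demonstrate that the evidence was not altered. The function name is ours, not the paper's.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a (possibly large) disk image from `path` and return its
    SHA-256 hex digest without loading the whole image into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```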
The widespread use of the Internet provides a good environment for e-commerce. Studies of e-commerce network characteristics often focus on Taobao, covering its credit rating system, marketing strategies, seller characteristics, and so on; the common purpose of these studies is to analyze online marketing transactions. In this paper, we analyze an e-commerce network from the perspective of graph theory. Our contributions are twofold: (1) We crawl the Taobao share-platform using the Scrapy crawling framework. After analyzing the format of Taobao's web pages in depth, we combined two sampling methods, BFS and MHRW, and ran the crawler on five PCs for 30 days. We also describe the major problems encountered during crawling and our final solutions, and we additionally crawled one category of sellers' data to analyze relationships between sellers and buyers. (2) We analyze the behavioral characteristics of share-platform users on the resulting dataset, aiming to uncover the relationships between sellers and buyers connected by shared items. Surprisingly, we find that the share-platform serves as an advertising tool: some buyers promote items for sellers with high credit scores, while other buyers merely help them sustain the platform.
{"title":"Scrapy-Based Crawling and User-Behavior Characteristics Analysis on Taobao","authors":"Jing Wang, Yuchun Guo","doi":"10.1109/CyberC.2012.17","DOIUrl":"https://doi.org/10.1109/CyberC.2012.17","url":null,"abstract":"The widespread use of Internet provides a good environment for e-commerce. Study on e-commerce network characteristics always focuses on the Taobao. So far, researches based on Taobao are related to credit rating system, marketing strategy, analysis of characteristics of the seller and so on. The purpose of all these studies is to analyze online marketing transactions in e-commerce. In this paper, we analyze e-commerce network from the perspective of graph theory. Our contributions lie in two aspects as following: (1) crawl Taobao share-platform using Scrapy crawl architecture. After analyzing format of web pages in Taobao deeply, combined with the BFS and MHRW two kinds of sampling methods, we ran crawler on five PCs for 30 days. Besides, we list some big problems encountered in the crawling process, then give the final solution. In addition, we crawled one type of sellers' data in order to analyze relationships between sellers and buyers. (2) Analyze characteristics of users' behavior in Taobao share-platform based on obtained dataset. We intend to find the relationships between sellers and buyers connected by items in share-platform. 
Surprisingly, we find that share-platform is a tool for some buyers to advertise items for sellers who have high credit score, and other buyers only to help them to support the platform.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124888649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
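BFS and MHRW (Metropolis-Hastings random walk) are standard graph-sampling methods; a minimal sketch of both, over a plain adjacency-dict graph rather than live Taobao pages, might look as follows. The function names and the toy graph interface are illustrative, not the paper's crawler.

```python
import random
from collections import deque

def bfs_sample(adj, start, budget):
    """Breadth-first sample of up to `budget` nodes from an
    adjacency dict {node: [neighbors]}."""
    seen, queue = {start}, deque([start])
    while queue and len(seen) < budget:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen and len(seen) < budget:
                seen.add(v)
                queue.append(v)
    return seen

def mhrw_sample(adj, start, steps, rng=None):
    """Metropolis-Hastings random walk: move from u to a random
    neighbor v, accepting with probability min(1, deg(u)/deg(v)).
    This corrects the degree bias of a plain random walk, yielding
    an approximately uniform node sample."""
    rng = rng or random.Random(0)
    u, visited = start, {start}
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() <= len(adj[u]) / len(adj[v]):
            u = v
        visited.add(u)
    return visited
```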
Kai Chen, Hongyun Zheng, Yongxiang Zhao, Yuchun Guo
TCP incast refers to the throughput collapse that occurs when too many senders transmit simultaneously to the same receiver. To improve throughput, our idea is to avoid packet losses before incast happens, by limiting the number of concurrent senders so that the link can be filled as fully as possible without packet loss. In this paper we re-examine and modify the condition, first presented in our previous work, under which the link can be saturated without packet loss. Based on the modified condition, we then propose an improved approach to determining the number of concurrent senders. Analysis and simulation results show that this improved approach avoids TCP incast and yields a larger throughput improvement than the previous approach.
{"title":"Improved Solution to TCP Incast Problem in Data Center Networks","authors":"Kai Chen, Hongyun Zheng, Yongxiang Zhao, Yuchun Guo","doi":"10.1109/CyberC.2012.78","DOIUrl":"https://doi.org/10.1109/CyberC.2012.78","url":null,"abstract":"TCP in cast refers to a throughput collapse when too many senders simultaneously transmit to the same receiver. In order to improve throughput, our idea is avoiding packet losses before TCP in cast happens. The scheme is limiting the number of concurrent senders such that the link can be filled as fully as possible but no packet losses. In this paper we re-examine and modify the condition that the link can be saturated but no packet losses, which initially presented in our previous work. Then based on the modified condition we propose an improved approach to determining the number of concurrent senders. Analysis and simulation results show this improved approach avoids TCP in cast and obtains more throughput improvement than the previous approach.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"391 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122773536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
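The paper's modified saturation condition is its contribution and is not reproduced here; as a loose, back-of-the-envelope illustration of the underlying idea (keep the senders' aggregate in-flight data within the bandwidth-delay product plus the switch buffer), one might bound the sender count as below. This is a textbook approximation, not the paper's condition.

```python
def max_concurrent_senders(link_capacity_bps, rtt_s, buffer_bytes, window_bytes):
    """Largest number of senders whose combined in-flight windows still
    fit in the pipe (bandwidth-delay product) plus the bottleneck switch
    buffer, so the link stays full without the buffer overflow and
    packet losses that trigger incast collapse."""
    bdp_bytes = link_capacity_bps / 8 * rtt_s      # data the link itself holds
    return int((bdp_bytes + buffer_bytes) // window_bytes)
```

For a 1 Gbps link, 100 us RTT, 64 KB of buffering and 16 KB sender windows, the bound works out to four concurrent senders.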
Binary translation technology converts binary code from one Instruction Set Architecture (ISA) into another, addressing the software-inheritance problem and ISA compatibility between different computer architectures. In this paper, we describe BTMD (Binary Translation based Malcode Detector), a novel framework that exploits static and dynamic binary translation features to detect a broad spectrum of malware and prevent its execution. Operating directly on binary code with MD Rules, independently of source-code availability, BTMD translates low-level binary code into a suitable high-level representation, from which the MD Parser obtains the Control Flow Graph (CFG) and other high-level control structures. A critical API graph is then generated from the CFG and matched, by subgraph matching, against the defined malware behavior templates. The MD Engine in BTMD carries out the remaining code analysis. Compared with other detection approaches, BTMD is found to be very efficient in terms of detection capability and false alarm rate.
{"title":"BTMD: A Framework of Binary Translation Based Malcode Detector","authors":"Zheng Shan, Haoran Guo, J. Pang","doi":"10.1109/CyberC.2012.16","DOIUrl":"https://doi.org/10.1109/CyberC.2012.16","url":null,"abstract":"Binary Translation technology is used to convert binary code of one Instruction Set Architecture (ISA) into another. This technology can solve the software-inheritance problem and ISA-compatibility between different computers architecture. In this paper, we describe BTMD (Binary Translation based Malcode Detector), a novel framework that exploits static and dynamic binary translation features to detect broad spectrum malware and prevent its execution. By operating directly on binary code with MD Rules on the availability of source code, BTMD is appropriate for translating low-level binary code to high-level proper representation, obtaining CFG (Control Flow Graph) and other high-level Control Structure by MD Parser. Then Critical API Graph based on CFG is generated to do sub graph matching with the defined Malware Behavior Template. MD Engine in BTMD is called to undertake the process to take on the remaining code analysis. Compared with other detection approaches, BTMD is found to be very efficient in terms of detection capability and false alarm rate.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128121092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
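Matching a critical API graph against a behavior template is, at its simplest, a containment check; the toy function below tests whether a template occurs as an ordered subsequence of a program's critical-API trace. This is a deliberate simplification of the paper's subgraph matching, and the API names are merely illustrative.

```python
def matches_template(api_trace, template):
    """Return True if `template` appears as an ordered subsequence of
    `api_trace`.  Uses the iterator-consuming `in` idiom: each `call in it`
    advances the iterator past the first occurrence, enforcing order."""
    it = iter(api_trace)
    return all(call in it for call in template)
```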
Riyanat Shittu, A. Healing, R. Bloomfield, M. Rajarajan
A large amount of research effort is focused on developing methods for correlating network intrusion alerts so as to better understand a network's current security state. The accuracy of traditional static correlation methods is, however, limited in large-scale complex systems, where more human insight and validation are required and dynamic attack behaviours are likely. Many recent efforts have centred on visualising security data in ways that better involve and support a human analyst in the network security triage process, but this potentially gives rise to another complex system of analytical and visual components that must be configured, trained and understood. This paper describes an agent-based framework designed to manage a set of visual analytic components in order to improve a security analyst's understanding of, and ability to classify, the threats to the network they govern. In the proof-of-concept system, an agent selects the most effective method for aggregating a given set of events generated by an Intrusion Detection System (IDS). We present a novel application of a dynamic response model for configuring the aggregation component so that the data is best simplified for more effective further analysis.
{"title":"Visual Analytic Agent-Based Framework for Intrusion Alert Analysis","authors":"Riyanat Shittu, A. Healing, R. Bloomfield, M. Rajarajan","doi":"10.1109/CyberC.2012.41","DOIUrl":"https://doi.org/10.1109/CyberC.2012.41","url":null,"abstract":"A large amount of research effort is focused on developing methods for correlating network intrusion alerts, so as to better understand a network's current security state. The accuracy of traditional static methods of correlation is however limited in large-scale complex systems, where the degree of human insight and validation necessary is higher, and dynamic attack behaviours are likely. Many recent efforts have centred around visualising security data in a way that can better involve and support a human analyst in the network security triage process but this potentially gives rise to another complex system of analytical and visual components which need to be configured, trained and understood. This paper describes an agent-based framework designed to manage a set of visual analytic components in order to improve a security analyst's understanding and ability to classify the threats to the network that they govern. In the proof-of-concept system an agent selects the most effective method for event aggregation, given a particular set of events which have been generated by an Intrusion Detection System (IDS). 
We present a novel application of a dynamic response model in order to configure the aggregation component such that the data is best simplified for more effective further analysis.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133334320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xingyun Geng, Li Wang, Dong Zhou, Huiqun Wu, Wen Cao, Yuanpeng Zhang, Yalan Chen, Weijia Lu, Kui Jiang, Jiancheng Dong
Since 2009, the Ministry of Health of China has been developing and promoting the construction of residents' electronic health records, and has published the corresponding national standards. Under these standards, we developed a residents' electronic health record system. To improve the effectiveness of medical staff's daily management, the system adds a visualization of residents' addresses. It comprises query and statistical database models for a variety of chronic diseases; all routes among the addresses are marked on a map, and the system selects the best route for visiting each address. We compare the algorithm used in the visualization against three other algorithms, and our system achieves the highest efficiency.
{"title":"Online and Intelligent Route Decision-Making from the Public Health DataSet","authors":"Xingyun Geng, Li Wang, Dong Zhou, Huiqun Wu, Wen Cao, Yuanpeng Zhang, Yalan Chen, Weijia Lu, Kui Jiang, Jiancheng Dong","doi":"10.1109/CyberC.2012.82","DOIUrl":"https://doi.org/10.1109/CyberC.2012.82","url":null,"abstract":"From 2009, Ministry of Health in China continues to develop and promote the construction of the residents' electronic health records. The corresponding standards were pronounced by the Ministry of Health. Under the national standards, we developed a resident's electronic health records system. In order to improve effective of medical staff's daily management, the visualization of the residents address is added to our system. The system is composed by a variety of chronic diseases query and statistical database model. All routes among the addresses are marked on the map. The best route to visit each address is selected by our system. The algorithm used in the visualization is compared to three other algorithms. The efficiency of our system is the highest.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123165796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
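The paper's routing algorithm and its three comparison algorithms are not detailed in the abstract; as a generic baseline for the "best route to visit each address" task, the usual starting point is a nearest-neighbor heuristic, sketched below under our own naming (not the system's algorithm).

```python
import math

def nearest_neighbor_route(points, start=0):
    """Greedy nearest-neighbor tour: from the current address, always
    visit the closest unvisited one.  Not optimal in general, but a
    cheap baseline for ordering home visits on a map."""
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        cur = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route
```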
In this paper, we propose a coarse-to-fine approach to discovering motion patterns. The approach has two phases. In the first phase, the proposed median-based GMM performs coarse clustering, and the number of clusters can be found heuristically by the proposed algorithm. In the second phase, to refine the coarse clustering of the first phase, a Fisher optimal division method is proposed to examine the boundary data points and detect the change points between motion patterns. Experimental results show that the proposed approach outperforms existing algorithms.
{"title":"A Coarse-to-Fine Approach for Motion Pattern Discovery","authors":"Bolun Cai, Zhifeng Luo, Kerui Li","doi":"10.1109/CyberC.2012.95","DOIUrl":"https://doi.org/10.1109/CyberC.2012.95","url":null,"abstract":"In this paper, we propose a coarse-to-fine approach to discovery motion patterns. There are two phases in the proposed approach. In the first phase, the proposed median-based GMM achieves coarse clustering. Moreover, the number of clusters can be heuristically found by the proposed algorithm. In the second phase, to refine coarse clustering in the first phase, a Fisher optimal division method is proposed to examine the boundary data points and to detect the change point between motion patterns. The experimental results show that the proposed approach outperforms the existing algorithms.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124962851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
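The median-based GMM and the full refinement procedure are the paper's own; the two-segment special case of Fisher's optimal division, however, reduces to finding the split of an ordered sequence that minimizes the total within-segment squared deviation, and can be sketched as follows (an illustration of the classical method, not the paper's implementation).

```python
def change_point(seq):
    """Two-segment Fisher optimal division: return the index k where the
    second segment starts, chosen to minimize the summed within-segment
    sum of squared deviations over seq[:k] and seq[k:]."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    return min(range(1, len(seq)), key=lambda k: sse(seq[:k]) + sse(seq[k:]))
```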
Xiang Li, Yanxu Zhu, Gang Yin, Tao Wang, Huaimin Wang
Open Source Forge (OSF) websites provide information on a massive number of open source software projects; extracting these web data is important for open source research. Traditional extraction methods detect page templates by string matching across pages, which is time-consuming. A recent work published in VLDB exploits entities that appear redundantly across websites to detect the web-page coordinates of those entities, and its experiments give good results when the coordinates are used to extract other entities from the target site. However, OSF websites have few redundant project entities. This paper proposes a modified version of that redundancy-based method tailored for OSF websites, relying on a similar but weaker presumption: entity attributes, rather than whole entities, are redundant. Like the previous work, we construct a seed database to detect the web-page coordinates of the redundancies, but entirely at the attribute level. In addition, we apply attribute-name verification to reduce false positives during extraction. Experimental results indicate that our approach is competent at extracting OSF websites, a scenario in which the previous method cannot be applied.
{"title":"Exploiting Attribute Redundancy in Extracting Open Source Forge Websites","authors":"Xiang Li, Yanxu Zhu, Gang Yin, Tao Wang, Huaimin Wang","doi":"10.1109/CyberC.2012.12","DOIUrl":"https://doi.org/10.1109/CyberC.2012.12","url":null,"abstract":"Open Source Forge (OSF) websites provide information on massive open source software projects, extracting these web data is important for open source research. Traditional extraction methods use string matching among pages to detect page template, which is time-consuming. A recent work published in VLDB exploits redundant entities among websites to detect web page coordinates of these entities. The experiment gives good results when these coordinates are used for extracting other entities of the target site. However, OSF websites have few redundant project entities. This paper proposes a modified version of that redundancy-based method tailored for OSF websites, which relies on a similar yet weaker presumption that entity attributes are redundant rather than whole entities. Like the previous work, we also construct a seed database to detect web page coordinates of the redundancies, but all at the attribute-level. In addition, we apply attribute name verification to reduce false positives during extraction. 
The experiment result indicates that our approach is competent in extracting OSF websites, in which scenario the previous method can not be applied.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"132 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131810171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The weak connection between human users and their digital identities is often the vulnerability exploited by attacks on information systems. Currently, authentication mechanisms are the only barrier preventing those attacks. Traditional password-based authentication is efficient (especially from the user's point of view) but not effective: the lack of continuous verification is a severe access control vulnerability. Overcoming this issue requires continuous identity monitoring, operating in a fashion similar to Intrusion Detection Systems (IDSs). However, traditional host-based IDSs are system-centric: they monitor system events but fail to flag malicious activity by intruders who hold the legitimate user's credentials. Extending the IDS concept to the user authentication level therefore appears to be a promising security control, and the need to distinguish human users (user-centric, anomaly-based detection) leads to the use of biometric features. In this paper we present a secure, reliable, inexpensive and non-intrusive technique for complementing traditional static authentication mechanisms with continuous identity verification based on keystroke dynamics biometrics.
{"title":"Keystroke Dynamics for Continuous Access Control Enforcement","authors":"João Ferreira, H. Santos","doi":"10.1109/CyberC.2012.43","DOIUrl":"https://doi.org/10.1109/CyberC.2012.43","url":null,"abstract":"The weak connection between human users and their digital identities is often the target vulnerability explored by attacks to information systems. Currently, authentication mechanisms are the only barrier to prevent those attacks. Traditional password-based authentication is efficient (especially from the user point of view), but not effective -- the lack of continuous verification is a severe access control vulnerability. To overcome this issue, continuous identity monitoring is needed, operating in similar fashion to that of Intrusion Detection Systems (IDSs). However, traditional host-based IDSs are system-centric -- they monitor system events but fail on flagging malicious activity from intruders with access to the legitimate user's credentials. Therefore, extending the IDS concept to the user authentication level appears as a promising security control. The need to distinguish human users (user-centric anomaly-based detection) leads to the use of biometric features. 
In this paper we present a secure, reliable, inexpensive and non-intrusive technique for complementing traditional static authentication mechanisms with continuous identity verification, based on keystroke dynamics biometrics.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134176302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
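Keystroke dynamics systems typically work from dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next). A minimal, hypothetical enrollment-and-verify sketch along those lines follows; the feature set, threshold, and distance measure are our assumptions, not the paper's method.

```python
def keystroke_features(events):
    """Extract dwell and flight times from a list of
    (press_time, release_time) pairs, one pair per keystroke."""
    dwell = [r - p for p, r in events]
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return dwell + flight

def verify(template, sample, tolerance=0.35):
    """Accept the sample if its mean relative deviation from the enrolled
    template of timing features is within the tolerance."""
    dev = sum(abs(s - t) / t for s, t in zip(sample, template)) / len(template)
    return dev <= tolerance
```

In a continuous-verification setting, the enrolled template would be refreshed over time and the check repeated on every short window of typing, not just at login.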