Society is becoming increasingly electronic, and many everyday applications, such as e-commerce, require a high level of security because of the sensitivity of the data they handle. Cryptography is the main way to secure such data (passwords, card numbers, etc.). However, classical cryptography suffers from the well-known problem of secure key sharing. According to Shannon's theory, a secure encryption key is one that is generated randomly and used only once. Quantum Key Distribution (QKD), which is based on the laws of quantum physics, makes it possible to generate such a key. It also appears to be the only technique that is not vulnerable to quantum computing power. In this paper, we present an enhanced scheme for deriving a secure encryption key for WLAN using quantum key distribution principles.
{"title":"Securing Encryption Key Distribution in WLAN via QKD","authors":"Rima Djellab, M. Benmohammed","doi":"10.1109/CyberC.2012.34","DOIUrl":"https://doi.org/10.1109/CyberC.2012.34","url":null,"abstract":"Society becomes more and more electronic and many daily applications, like those of e-commerce, need a height level of security because of the sensitivity of the manipulated data. Cryptography is the main way to ensure security of data, such passwords, cards numbers, etc. However, the classical cryptography suffers from relevant problematic of secured key sharing. According to Shannon theory, a secured encryption key is the one generated randomly and used only once time. The Quantum Key Distribution, based on quantum physic lows, offers the opportunity to generate such a key. It seems also, to be the only technique that does not present vulnerability against the quantum calculating power. In this paper, we present an enhanced scheme for deriving a secured encryption key for WLAN using the quantum key distribution principals.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127647381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, the minimum phase space volume (MPSV) method is modified to blindly identify an autoregressive (AR) system driven by a chaotic signal. After the modification, estimation is much faster than before, which makes the MPSV method more suitable for engineering applications. Simulation results show that, compared with the original MPSV method, the proposed method obtains the same estimate at a much higher speed.
{"title":"A Fast System Identification Method Based on Minimum Phase Space Volume","authors":"Xinzhi Xu, Jingbo Guo","doi":"10.1109/CyberC.2012.96","DOIUrl":"https://doi.org/10.1109/CyberC.2012.96","url":null,"abstract":"In this paper, the minimum phase space volume (MPSV) method is modified to identify an autoregressive (AR) system driven by a chaotic signal blindly. After modification, the estimation speed is much faster than before, which makes the MPSV method more suitable for engineering applications. The simulation results show that, when comparing with the original MPSV method, the proposed method can obtain the same estimation result at a much higher speed.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115980923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces some key parameters for tracking applications in wireless sensor networks. In this work, the LEACH protocol has been implemented with the J-Sim simulation tool, and some useful trade-off results among the EDCR (Energy, Density, Coverage and Reliability) parameters have been obtained. Based on these results, an intelligent evaluation model is proposed.
{"title":"An Intelligent Evaluation Model Based on the LEACH Protocol in Wireless Sensor Networks","authors":"Ning Cao, R. Higgs, G. O’hare","doi":"10.1109/CYBERC.2012.70","DOIUrl":"https://doi.org/10.1109/CYBERC.2012.70","url":null,"abstract":"This paper aims to introduce some key parameters for the tracking application in wireless sensor networks. In this work the LEACH protocol with J-sim simulation tool has been implemented, and consequently some useful trade-off analysis results among the EDCR (Energy, Density, Coverage and Reliability) parameters has been obtained. Based on these results, an intelligent evaluation model is proposed in this paper.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"13 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116858802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The weak connection between human users and their digital identities is often the vulnerability exploited by attacks on information systems. Currently, authentication mechanisms are the only barrier preventing those attacks. Traditional password-based authentication is efficient (especially from the user's point of view) but not effective -- the lack of continuous verification is a severe access control vulnerability. To overcome this issue, continuous identity monitoring is needed, operating in a fashion similar to that of Intrusion Detection Systems (IDSs). However, traditional host-based IDSs are system-centric -- they monitor system events but fail to flag malicious activity from intruders with access to the legitimate user's credentials. Therefore, extending the IDS concept to the user authentication level appears to be a promising security control. The need to distinguish human users (user-centric, anomaly-based detection) leads to the use of biometric features. In this paper we present a secure, reliable, inexpensive and non-intrusive technique for complementing traditional static authentication mechanisms with continuous identity verification based on keystroke dynamics biometrics.
{"title":"Keystroke Dynamics for Continuous Access Control Enforcement","authors":"João Ferreira, H. Santos","doi":"10.1109/CyberC.2012.43","DOIUrl":"https://doi.org/10.1109/CyberC.2012.43","url":null,"abstract":"The weak connection between human users and their digital identities is often the target vulnerability explored by attacks to information systems. Currently, authentication mechanisms are the only barrier to prevent those attacks. Traditional password-based authentication is efficient (especially from the user point of view), but not effective -- the lack of continuous verification is a severe access control vulnerability. To overcome this issue, continuous identity monitoring is needed, operating in similar fashion to that of Intrusion Detection Systems (IDSs). However, traditional host-based IDSs are system-centric -- they monitor system events but fail on flagging malicious activity from intruders with access to the legitimate user's credentials. Therefore, extending the IDS concept to the user authentication level appears as a promising security control. The need to distinguish human users (user-centric anomaly-based detection) leads to the use of biometric features. In this paper we present a secure, reliable, inexpensive and non-intrusive technique for complementing traditional static authentication mechanisms with continuous identity verification, based on keystroke dynamics biometrics.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134176302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper examines the suitability of multi-core processors over single-core processors in automotive safety-critical applications. As vehicles become more and more complex, with an embedded network interconnecting ECUs (Electronic Control Units) and integrating more and more features, safety standardization is becoming increasingly important among automakers and OEMs (Original Equipment Manufacturers). The demand for computing power in the automotive domain is therefore increasing by the day in order to meet the requirements of time-critical functionalities. Multi-core hardware, with the appropriate software support, is seen as a way to increase ECU processing power while containing the power consumption that comes with higher clock frequencies. In this work, the ABS (Anti-lock Braking System) is taken as an example to demonstrate the suitability of multi-core processors. It is shown how, through the scheduling of events in the hard-braking system, a multi-core processor can help achieve near-real-time response. The performance of the ABS has been studied on the TMS570, a dual-core controller from Texas Instruments (TI), and compared with the TMS470, a single-core controller from the same company. A software architecture using MPI (Message Passing Interface) with shared memory is described in detail and used to quantify the performance. In addition to performance, the power consumption of the TMS570 operating at 150 MHz and the TMS470 operating at 80 MHz, at an ambient temperature of 25°C, is compared in detail.
{"title":"On the Suitability of Multi-Core Processing for Embedded Automotive Systems","authors":"S. Jena, M. Srinivas","doi":"10.1109/CyberC.2012.60","DOIUrl":"https://doi.org/10.1109/CyberC.2012.60","url":null,"abstract":"This paper examines the suitability of multi-core processors over single core in automotive safety critical applications. As vehicles become more and more complex with an embedded network interconnection of ECUs (Electronic Control Unit) and integrate more and more features, safety standardization is becoming increasingly important among the automakers and OEMs (Original Equipment Manufacture). Thus the demand for computing power is increasing by the day in the automotive domain to meet all the requirements of time critical functionalities. Multi-core processor hardware is seen as a solution to the problem of increasing the ECU processing power with the support of software and also to power consumption with frequency. In this work, ABS (Anti-Lock Braking System) is taken as an example to demonstrate the suitability of multicore processor. It is shown how, through the scheduling of events in the hard braking system, multicore processor can help in achieving near real time response. The performance of ABS has been studied with the help of TMS570 which is a dual core controller from Texas Instruments (TI) and compared with TMS470 which is single core controller from the same company. A software architecture using MPI (Message Passing Interface) with shared memory is described in detail and applied to quantify the performance. In addition to performance, a comparative study of power consumption by TMS570 operating at 150MHz and TMS470 operating at 80MHz at an ambient temperature of 25oC has been studied in detail.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134037925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiang Li, Yanxu Zhu, Gang Yin, Tao Wang, Huaimin Wang
Open Source Forge (OSF) websites provide information on a massive number of open source software projects, and extracting these web data is important for open source research. Traditional extraction methods use string matching among pages to detect page templates, which is time-consuming. A recent work published in VLDB exploits entities that are redundant across websites to detect the web-page coordinates of those entities. The experiments give good results when these coordinates are used to extract other entities from the target site. However, OSF websites have few redundant project entities. This paper proposes a modified version of that redundancy-based method tailored to OSF websites, which relies on a similar yet weaker assumption: entity attributes, rather than whole entities, are redundant. Like the previous work, we also construct a seed database to detect the web-page coordinates of the redundancies, but entirely at the attribute level. In addition, we apply attribute-name verification to reduce false positives during extraction. The experimental results indicate that our approach is competent at extracting OSF websites, a scenario in which the previous method cannot be applied.
{"title":"Exploiting Attribute Redundancy in Extracting Open Source Forge Websites","authors":"Xiang Li, Yanxu Zhu, Gang Yin, Tao Wang, Huaimin Wang","doi":"10.1109/CyberC.2012.12","DOIUrl":"https://doi.org/10.1109/CyberC.2012.12","url":null,"abstract":"Open Source Forge (OSF) websites provide information on massive open source software projects, extracting these web data is important for open source research. Traditional extraction methods use string matching among pages to detect page template, which is time-consuming. A recent work published in VLDB exploits redundant entities among websites to detect web page coordinates of these entities. The experiment gives good results when these coordinates are used for extracting other entities of the target site. However, OSF websites have few redundant project entities. This paper proposes a modified version of that redundancy-based method tailored for OSF websites, which relies on a similar yet weaker presumption that entity attributes are redundant rather than whole entities. Like the previous work, we also construct a seed database to detect web page coordinates of the redundancies, but all at the attribute-level. In addition, we apply attribute name verification to reduce false positives during extraction. The experiment result indicates that our approach is competent in extracting OSF websites, in which scenario the previous method can not be applied.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"132 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131810171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the relay-trading mode of wireless cognitive radio networks, the secondary user (SU) can obtain a promised spectrum-access opportunity by relaying for the primary user (PU). How to use the exchanged resource efficiently and fairly is an interesting and practical problem. In this paper we propose a cooperative spectrum sharing strategy (RT-CSS) for the relay-trading mode from a fairness perspective. The cooperating SUs are gathered in a cooperative sharing group (CSG), and a contribution metric (CM) is proposed to measure each CSG member's contribution to, as well as benefit from, the CSG. Adjusting the CM guarantees the fairness and efficiency of spectrum sharing. Numerical simulation shows that RT-CSS achieves better performance than the non-cooperative sensing mode.
{"title":"Cooperative Spectrum Sharing in Relay-Trading Mode: A Fairness View","authors":"Lixia Liu, Gang Hu, Ming Xu, Yuxing Peng","doi":"10.1109/CyberC.2012.76","DOIUrl":"https://doi.org/10.1109/CyberC.2012.76","url":null,"abstract":"In the relay-trading mode of wireless cognitive radio networks the secondary user (SU) can achieve a promised spectrum access opportunity by relaying for the primary user (PU). How to utilize the exchanged resource efficiently and fairly is an interesting and practical problem. In this paper we proposed a cooperative spectrum sharing strategy (RT-CSS) for the relay-trading mode from the fairness view. The cooperative SUs are gathered in a cooperative sharing group (CSG), and contribution metric (CM) is proposed to measure each CSG member's contribution to CSG as well as benefit from CSG. The adjustment of CM can guarantee the fairness and efficiency of spectrum sharing. The numerical simulation shows that RT-CSS can achieve better performance than the sense-uncooperative mode.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121436615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing data-centric health-care integrated systems involves numerous non-traditional testing challenges, particularly in the areas of input validation, functional testing, regression testing, and load testing. For these and other types of testing, the test-data suites typically need to be relatively large and exhibit characteristics similar to real data. Generating test data for integrated systems is problematic because records from different systems need to be inter-related in realistic but less-than-perfect ways. Using real data is also not a feasible choice, because health-care data contains sensitive personally identifying information (PII). As a foundation, this paper provides a classification of testing challenges for health-care integrated systems and a comparison of anonymization techniques. It also relates our experience with a test-data creation tool [13] that extracts and anonymizes loosely correlated slices of data from multiple operational health-care systems while preserving the real-data characteristics discussed under the classification scheme.
{"title":"Testing Health-Care Integrated Systems with Anonymized Test-Data Extracted from Production Systems","authors":"A. Raza, S. Clyde","doi":"10.1109/CyberC.2012.83","DOIUrl":"https://doi.org/10.1109/CyberC.2012.83","url":null,"abstract":"Testing of data-centric health-care integrated systems involve numerous non-traditional testing challenges, particularly in the areas of input validation, functional testing, regression testing, and load testing. For these and other types of testing, the test-data suites typically need to be relatively large and demonstrate characteristics that are similar to real-data. Generating test-data for integrated system is problematic because records from different systems need to be inter-related in realistic and less-than perfect ways. Using real-data is also not a feasible choice, because health-care data contains sensitive personal identifying information (PII). As a foundation, this paper provides a classification of testing challenges for health-care integrated systems and a comparison of anonymization techniques. It also narrates our experiences with a test-data creation tool [13] that extracts and anonymizes loosely correlated slices of data from multiple operational health-care systems while preserving those real-data characteristics, discussed under the classification scheme.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114506015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ahmad Abba Haruna, Syed Nasir Mehmood Shah, M. N. Zakaria, A. J. Pal
Grid scheduling is one of the prime challenges in grid computing. Reliability, efficiency (with regard to time utilization), effectiveness in resource usage, and robustness are the features demanded of Grid scheduling systems, and a number of algorithms have been designed and developed to make Grid scheduling effective. Project management is a well-known area of operations research. Here we propose a new prioritized deadline-based scheduling algorithm (PDSA) that uses project management techniques for efficient execution of jobs under users' deadline constraints. An extensive performance comparison using synthetic workload traces is presented to evaluate the efficiency and robustness of grid scheduling with respect to average turnaround time and maximum tardiness. The results show improved performance in a dynamic scheduling environment. Based on the comparative performance analysis, the proposed PDSA shows the best performance compared with the EDF and RR scheduling algorithms in a dynamic scheduling environment. In brief, PDSA is a true application of project management to grid computing.
{"title":"Deadline Based Performance Evaluation of Job Scheduling Algorithms","authors":"Ahmad Abba Haruna, Syed Nasir Mehmood Shah, M. N. Zakaria, A. J. Pal","doi":"10.1109/CyberC.2012.25","DOIUrl":"https://doi.org/10.1109/CyberC.2012.25","url":null,"abstract":"Grid scheduling is one of the prime challenges in grid computing. Reliability, efficiency (with regard to time utilization), effectiveness in resource usage, as well as robustness tend to be the demanded features of Grid scheduling systems. A number of algorithms have been designed and developed in order to make effective Grid scheduling. Project management is the well known area of operation research. Here we are proposing a new prioritized deadline based scheduling algorithm (PDSA) using project management technique for efficient job execution with deadline constraints of users' jobs. An extensive performance comparison has been presented using synthetic workload traces to evaluate the efficiency and robustness of grid scheduling with respect to average turnaround times and maximum tardiness. Result has shown improved performance under dynamic scheduling environment. Based on the comparative performance analysis, proposed PDSA has shown the optimal performance as compared to EDF and RR scheduling algorithms under dynamic scheduling environment. In brief, PDSA is the true application of project management in grid computing.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"163 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114098266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overlapping community detection in large-scale social networks has become a research focus with the development of online social network applications. Among current overlapping community discovery algorithms, LFM is based on local optimization of a fitness function, which is consistent with the local nature of communities, especially in large networks. However, the original LFM may fall into loops when finding community memberships for some overlapping nodes, and it still consumes too much time when applied to large-scale social networks with a power-law community size distribution. By limiting each node to be a seed at most once, LFM can avoid the loops but fails to assign community memberships to some overlapping nodes. Based on structural analysis, we found that the loops are due to a dysfunction of the fitness metric as well as the random seed selection used in LFM. To improve the detection quality and computational efficiency of LFM, we propose a local orientation scheme based on the clustering coefficient, together with several efficiency-enhancing schemes. With these schemes, we design a modified algorithm, LOFO (local oriented fitness optimization). Comparisons over several large-scale social networks show that LOFO significantly outperforms LFM in computational efficiency and community detection quality.
{"title":"Local Oriented Efficient Detection of Overlapping Communities in Large Networks","authors":"Shengdun Liang, Yuchun Guo","doi":"10.1109/CyberC.2012.15","DOIUrl":"https://doi.org/10.1109/CyberC.2012.15","url":null,"abstract":"Overlapping community detecting for large-scale social networks becomes a research focus with the development of online social network applications. Among the current overlapping community discovery algorithms, LFM is based on local optimization of a fitness function, which is in consistent with the local nature of community, especially in large networks. But the original LFM may fall in loops when finding community memberships for some overlapping nodes and consumes still too much time when applied in large-scale social networks with power-law community size distribution. By limiting each node to be a seed at most once, LFM can avoid loop but fail to assign community memberships to some overlapping nodes. Based on the structural analysis, we found that the loop is due to the dysfunction of the fitness metric as well as the random seed selection used in LFM. To improve the detecting quality and computation efficiency of LFM, we propose a local orientation scheme based on clustering coefficient and several efficiency enhancing schemes. With these schemes, we design a modified algorithm LOFO (local oriented fitness optimization). Comparison over several large-scale social networks shows that LOFO significantly outperforms LFM in computation efficiency and community detection goodness.","PeriodicalId":416468,"journal":{"name":"2012 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114815160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}