Fengyu Wang, Bin Gong, Shanqing Guo, Xiaofeng Wang
Identifying heavy-hitter flows on high-speed network links is important for many applications. This paper studies the problem of measuring several types of heavy-hitter flows simultaneously. We propose a novel scheme, named TS-LRU (Two-Stage Least Recently Used), which processes arriving packets in two stages to extract heavy-hitter flows. New packets are aggregated into fine-grained flows (FGFs) and kept in Stage 1. FGFs that receive no packets for a relatively long time are evicted from Stage 1 by LRU replacement. The evicted FGFs are added to Stage 2 and further aggregated into rough-grained flows (RGFs). The replacement scheme used in Stage 2, named LRU-Size, is based on LRU but also takes RGF size into account. Several similar data structures can run in Stage 2 to extract different types of RGFs concurrently. Mathematical analysis indicates that the algorithm saves memory and improves processing speed by exploiting the distribution characteristics of flows. We also evaluated TS-LRU in simulated experiments on real packet traces. Unlike common approaches, whose cost grows proportionally, the average per-packet processing time of TS-LRU increases only slowly when multiple types of flows are measured concurrently. Compared with the well-known multi-stage filters algorithm, TS-LRU achieves superior measurement accuracy under constrained memory.
{"title":"Monitoring Heavy-Hitter Flows in High-Speed Network Concurrently","authors":"Fengyu Wang, Bin Gong, Shanqing Guo, Xiaofeng Wang","doi":"10.1109/NSS.2010.31","DOIUrl":"https://doi.org/10.1109/NSS.2010.31","url":null,"abstract":"Identifying heavy-hitter flows in high-speed network link is important for some applications. This paper studied the approach of measuring various heavy-hitter flows simultaneously. We proposed a novel scheme, named TS-LRU (Two-Stage Least Recently Used), which process arriving packets through two stages to extract heavy-hitter flows. New packets are aggregated into FGFs (Fine-Grained Flow) and preserved in Stage1. The FGFs with no arrival packets for a relative long time are evicted from Stage1 using LRU replacement. The replaced FGFs are added into Stage2 and aggregated into RGFs (Rough-Grained Flow) further. The replacement scheme used in Stage2 is based on LRU with considering RGF size, named LRU-Size. There could be several similar data structures in Stage2 to extract different types of RGFs concurrently. Mathematical analysis indicates that this algorithm can save memory space and improve processing speed efficiently through exploiting the distribution characteristics of flows. We also examined TS-LRU with simulated experiments on real packet traces. Other than the proportional increasing of common approaches, the average processing time per packet of TS-LRU increases more slowly when measure multiple types of flows concurrently. 
Compared to the well-known multi-stage filters algorithm, TS-LRU achieves superior performance in terms of measurement accuracy in constrained memory space.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123010249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
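The two-stage idea above can be illustrated with a small sketch. This is not the paper's implementation: the capacities, the size threshold, and the key functions are hypothetical parameters chosen only to show the mechanics of LRU eviction from Stage 1 feeding aggregation in Stage 2, and the LRU-Size eviction that prefers to keep large RGFs.

```python
from collections import OrderedDict

class TSLRU:
    """Sketch of the two-stage LRU idea: Stage 1 tracks fine-grained
    flows (FGFs); flows evicted by LRU are aggregated into rough-grained
    flows (RGFs) in Stage 2, which evicts by LRU while preferring to
    keep large RGFs (the 'LRU-Size' idea). All parameters are toy values."""

    def __init__(self, stage1_cap=4, stage2_cap=4, size_threshold=10):
        self.stage1 = OrderedDict()   # fgf_key -> packet count (LRU order)
        self.stage2 = OrderedDict()   # rgf_key -> aggregated count
        self.stage1_cap = stage1_cap
        self.stage2_cap = stage2_cap
        self.size_threshold = size_threshold

    def observe(self, fgf_key, rgf_key_fn):
        if fgf_key in self.stage1:
            self.stage1[fgf_key] += 1
            self.stage1.move_to_end(fgf_key)      # mark as recently used
            return
        if len(self.stage1) >= self.stage1_cap:
            old_key, old_count = self.stage1.popitem(last=False)  # LRU victim
            self._add_stage2(rgf_key_fn(old_key), old_count)
        self.stage1[fgf_key] = 1

    def _add_stage2(self, rgf_key, count):
        if rgf_key in self.stage2:
            self.stage2[rgf_key] += count
            self.stage2.move_to_end(rgf_key)
            return
        if len(self.stage2) >= self.stage2_cap:
            # LRU-Size: scan from the LRU end and evict the first RGF
            # smaller than the threshold; fall back to plain LRU.
            victim = next((k for k, v in self.stage2.items()
                           if v < self.size_threshold),
                          next(iter(self.stage2)))
            del self.stage2[victim]
        self.stage2[rgf_key] = count
```

Here an FGF key could be a 5-tuple and `rgf_key_fn` could project it to, say, the source address, so one Stage 2 structure per projection yields different RGF types concurrently.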
Chunli Lv, Xiaoqi Jia, Lijun Tian, Jiwu Jing, Mingli Sun
Most secret sharing schemes, such as Shamir's, must be computed in a Galois field, which incurs a relatively heavy computational cost. Kurihara et al. [1] recently proposed a fast secret sharing scheme that uses only exclusive-OR (XOR) operations to make shares and recover the secret. Their scheme was shown to be hundreds of times faster than Shamir's (in GF(2^64)) in both distribution and recovery of a 4.5 MB secret when k=3 and n=11. However, some steps of their scheme can still be improved, and their security proofs are too complex to be understood and verified intuitively. In this paper, we present a more concise, cleaner, and faster scheme that is also based on XOR. Moreover, we give geometric explanations of share generation in both our scheme and Kurihara's, which make it easier to understand how the shares are constructed in the two schemes.
{"title":"Efficient Ideal Threshold Secret Sharing Schemes Based on EXCLUSIVE-OR Operations","authors":"Chunli Lv, Xiaoqi Jia, Lijun Tian, Jiwu Jing, Mingli Sun","doi":"10.1109/NSS.2010.82","DOIUrl":"https://doi.org/10.1109/NSS.2010.82","url":null,"abstract":"Most of secret sharing schemes have to be computed in a Galois field, such as Shamir’s scheme, which have relatively heavy computational cost. Kurihara et al. [1] recently proposed a fast secret sharing scheme using only Exclusive-OR(XOR) operations to make shares and recover the secret. Their proposed scheme was shown to be hundreds of times faster than Shamir’s (in GF(q=264)) in terms of both distribution and recovery with a 4.5 MB secret when k=3 and n=11. However, some steps in their scheme still need to be improved. Their security proofs were too complex and difficult to be understood and verified intuitively. In this paper, we present a conciser, cleaner, faster scheme which is also based on XOR. Moreover, we give two geometric explanations of making shares in both our and Kurihara’s schemes respectively, which would help to easier and further understand how the shares are made in the two schemes.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132449759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
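To see why XOR-only sharing is attractive, consider the simplest possible case: a naive (n, n) scheme where every share is needed for recovery. The (k, n) threshold constructions of Kurihara et al. and of this paper are considerably more involved, but they rest on the same cheap primitive sketched here.

```python
import os

def xor_all(chunks):
    """XOR a non-empty list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def make_shares(secret: bytes, n: int):
    """Naive (n, n) XOR sharing (requires n >= 2): generate n-1 random
    pads, and let the last share be the secret XORed with all pads.
    Any n-1 shares reveal nothing; all n recover the secret."""
    pads = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(b ^ x for b, x in zip(secret, xor_all(pads)))
    return pads + [last]

def recover(shares):
    """XORing all shares cancels every pad, leaving the secret."""
    return xor_all(shares)
```

Distribution and recovery are a single pass of XORs, which is exactly the speed advantage over field arithmetic that motivates these schemes.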
Signcryption is a cryptographic primitive that provides confidentiality and authenticity simultaneously, at a cost significantly lower than that of the naive combination of encrypting and signing a message. Threshold signcryption is used when a message requires authentication by a certain number of members of an organisation: unless at least a given number of members (the threshold) join the signcryption process, the message cannot be signcrypted. Threshold unsigncryption applies the same constraint to the unsigncryption process. In this work, we cryptanalyze two threshold unsigncryption schemes and show that neither meets the stringent requirements of insider security, proposing attacks on both confidentiality and unforgeability. We also propose an improved identity-based threshold unsigncryption scheme and give a formal proof of security in a new, stronger security model.
{"title":"On the Security of Identity Based Threshold Unsigncryption Schemes","authors":"S. S. D. Selvi, S. Vivek, C. Rangan, S. Priti","doi":"10.1109/NSS.2010.99","DOIUrl":"https://doi.org/10.1109/NSS.2010.99","url":null,"abstract":"Signcryption is a cryptographic primitive that provides confidentiality and authenticity simultaneously at a cost significantly lower than that of the naive combination of encrypting and signing the message. Threshold signcryption is used when a message to be sent needs the authentication of a certain number of members in an organisation, and until and unless a given number of members (known as the threshold) join the signcyption process, a particular message cannot be signcrypted. Threshold unsigncryption is used when this constraint is applicable during the unsigncryption process. In this work, we cryptanalyze two threshold unsigncryption schemes. We show that both these schemes do not meet the stringent requirements of insider security and propose attacks on both confidentiality and unforgeability. We also propose an improved identity based threshold unsigncryption scheme and give the formal proof of security in a new stronger security model.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134139048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research in cognitive science indicates that manifold-learning-based facial image retrieval matches human perception, accurately capturing the intrinsic similarity between two facial images. This paper proposes a pivot-based Distributed Pseudo Similarity Retrieval method, called DPSR, in manifold spaces with the aid of an adjacency distance list (ADL). Specifically, we first construct a two-dimensional array, the ADL, which records the pairwise distance between any two facial images in the database subject to a constraint. The distances are then indexed by a B+-tree. Finally, a DPSR query over high-dimensional manifold spaces is transformed, at a filtering level, into a range search over the B+-tree in a single-dimensional space. Extensive experiments show that DPSR outperforms a conventional sequential scan in manifold spaces by a large margin, especially on large high-dimensional datasets.
{"title":"A Pivot-Based Distributed Pseudo Facial Image Retrieval in Manifold Spaces: An Efficiency Study","authors":"Zhuang Yi","doi":"10.1109/NSS.2010.59","DOIUrl":"https://doi.org/10.1109/NSS.2010.59","url":null,"abstract":"The research of cognitive science indicates that manifold-learning-based facial image retrieval is based on human perception, which can accurately capture the intrinsic similarity of two facial images. The paper proposes a pivot-based Distributed Pseudo Similarity Retrieval method called DPSR in manifold spaces with the aid of a adjacency distance list (ADL). Specifically, we first construct a two dimensional array, called ADL which records the pair-wise distance between any two facial images with a constraint in the database. Then, the distances are indexed by a B+-tree. Finally, a DPSR process in high-dimensional manifold spaces is transformed into range search over the B+-tree in the single-dimensional space at a filtering level. Extensive experimental studies show that the DPSR outperforms the conventional sequential scan in manifold spaces by a large margin, especially for the large high-dimensional datasets.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127438813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
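The core trick of reducing a high-dimensional range search to a single-dimensional one can be sketched with a pivot. This is not the paper's ADL or B+-tree: here a plain sorted list stands in for the B+-tree, Euclidean distance stands in for the manifold distance, and the filter relies only on the triangle inequality.

```python
import bisect
import math

def build_pivot_index(points, pivot):
    """Precompute each object's distance to a fixed pivot and sort:
    a one-dimensional stand-in for the paper's ADL indexed by a B+-tree."""
    entries = sorted((math.dist(p, pivot), i) for i, p in enumerate(points))
    keys = [d for d, _ in entries]
    return keys, entries

def range_query(points, pivot, index, q, r):
    """By the triangle inequality, any object o with dist(q, o) <= r
    satisfies |dist(o, pivot) - dist(q, pivot)| <= r, so a 1-D range
    scan over the sorted pivot distances filters candidates cheaply;
    only the survivors are verified with a real distance computation."""
    keys, entries = index
    dq = math.dist(q, pivot)
    lo = bisect.bisect_left(keys, dq - r)
    hi = bisect.bisect_right(keys, dq + r)
    return [i for _, i in entries[lo:hi] if math.dist(points[i], q) <= r]
```

The filtering step touches only a contiguous slice of the index, which is why pushing it into a B+-tree pays off on large datasets.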
A. Artale, B. Crispo, Fausto Giunchiglia, F. Turkmen, Rui Zhang
Relation Based Access Control (RelBAC) is an access control model that treats permissions as first-class concepts. Under this model, we discuss in this paper how to formalize typical access control policies with Description Logics. Important security properties, namely Separation of Duties (SoD) and the Chinese Wall, are studied and formally represented in RelBAC. To meet the need for automated administration tools, we show that RelBAC can formalize and answer queries about access control requests and administrative checks by resorting to the reasoning services of the underlying Description Logic.
{"title":"Reasoning about Relation Based Access Control","authors":"A. Artale, B. Crispo, Fausto Giunchiglia, F. Turkmen, Rui Zhang","doi":"10.1109/NSS.2010.76","DOIUrl":"https://doi.org/10.1109/NSS.2010.76","url":null,"abstract":"Relation Based Access Control (RelBAC) is an access control model that places permissions as first class concepts. Under this model, we discuss in this paper how to formalize typical access control policies with Description Logics. Important security properties, i.e., Separation of Duties (SoD) and Chinese Wall are studied and formally represented in RelBAC. To meet the needs of automated tools for administrators, we show that RelBAC can formalize and answer queries about access control requests and administrative checks resorting to the reasoning services of the underlying Description Logic.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126159521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
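The view of a permission as a relation can be made concrete with plain set algebra. This toy model is not the paper's Description Logic formalization; the permission names and the SoD check are illustrative only, mimicking in Python the kind of query a DL reasoner would answer.

```python
# Toy relational model of RelBAC-style permissions: each permission is
# a binary relation, i.e. a set of (subject, object) pairs.

def can_access(permission, subject, obj):
    """An access request holds iff the pair is in the relation."""
    return (subject, obj) in permission

def violates_sod(perm_a, perm_b):
    """Separation of Duties: no subject may hold both permissions,
    i.e. the subject projections of the two relations must be disjoint."""
    subjects_a = {s for s, _ in perm_a}
    subjects_b = {s for s, _ in perm_b}
    return bool(subjects_a & subjects_b)

# Hypothetical policy: bob both submits and approves order2,
# so an SoD constraint over these two permissions is violated.
approve = {("alice", "order1"), ("bob", "order2")}
submit  = {("carol", "order1"), ("bob", "order2")}
```

In RelBAC proper, such checks become subsumption and consistency queries over DL concepts and roles rather than explicit set intersections.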
The process of analyzing available network forensic evidence to determine its meaning and significance can be very involved. It is often necessary to develop a timeline of significant events to obtain an overview of what occurred, to create relational diagrams showing which users are connected to which systems, or to correlate and analyze data to find noteworthy patterns of network traffic. However, there is a lack of statistical analysis of network traffic for security incident determination, especially for denial of service (DoS) attacks in mobile ad hoc networks (MANETs). In this work, we focus on the "analysis" part of network forensic investigation. Specifically, we study one type of DoS attack, the distributed DoS (DDoS) flooding attack, in a MANET. We present a quantitative model that characterizes this attack and its traffic statistics. We also propose an analytical model that looks for specific patterns in the attack traffic, aiming to: (1) determine whether there is an anomaly in the traffic and whether the anomaly is a DDoS attack; and (2) determine the time at which the attack is launched.
{"title":"Network Forensics in MANET: Traffic Analysis of Source Spoofed DoS Attacks","authors":"Yinghua Guo, Matthew Simon","doi":"10.1109/NSS.2010.45","DOIUrl":"https://doi.org/10.1109/NSS.2010.45","url":null,"abstract":"The process of analyzing available network forensics evidence to determine their meaning and significance can be very involved. It is often necessary to develop a timeline of significant events to obtain an overview of what occurred, to create relational diagrams showing which users are connected to which systems, or to correlate and analyze data to find noteworthy patterns of network traffic. However, there is a lack of statistical analysis of network traffic for security incident determination, especially the Denial of Service (DoS) attack in mobile ad hoc network (MANET). In this work, we focus on the \"analysis\" part of network forensic investigation. Specifically, we study one type of DoS attack, called distributed DoS (DDoS) flooding attack in MANET. We present a quantitative model to characterizes this attack and its traffic statistics. We also propose an analytical model for looking for specific patterns of the attack traffic, aiming to achieve: (1) Determine if there is an anomaly in the traffic and whether the anomaly is the DDoS attack (2) Determine the time when the attack is launched.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129741404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
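The two goals above, detecting an anomaly and timestamping its onset, can be illustrated with a deliberately simple statistic. This is not the paper's quantitative model: the trailing-average window and the multiplicative factor are hypothetical parameters, used only to show how a flooding onset shows up in per-slot packet counts.

```python
def detect_flood(pkt_counts, window=3, factor=5.0):
    """Toy onset detector: flag the first time slot whose packet count
    exceeds `factor` times the trailing average over the previous
    `window` slots. Returns the slot index (estimated launch time)
    or None if no anomaly is found."""
    for t in range(window, len(pkt_counts)):
        baseline = sum(pkt_counts[t - window:t]) / window
        if baseline and pkt_counts[t] > factor * baseline:
            return t
    return None
```

A real forensic model would also have to decide whether the spike is an attack rather than a legitimate burst, which is where the paper's traffic statistics come in.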
Pub Date: 2010-09-01, DOI: 10.4108/trans.sis.2013.01-03.e5
Min Li
The recent usage control model (UCON) is a foundation for next-generation access control models, with the distinguishing properties of decision continuity and attribute mutability. Constraints are among the most important components of UCON and are central to the motivation for usage analysis and design. The importance of constraints associated with authorizations, obligations, and conditions in UCON has been recognized, but modeling these constraints has received little attention. In this paper we use a de facto constraint specification language from software engineering to analyze the constraints in the UCON model. We show how to represent constraints with the Object Constraint Language (OCL) and give a formal specification of the UCON model, built from basic constraints such as authorization predicates, obligation actions, and condition requirements. Further, we demonstrate the flexibility and expressive capability of the specified UCON model with extensive examples.
{"title":"Specifying Usage Control Model with Object Constraint Language","authors":"Min Li","doi":"10.4108/trans.sis.2013.01-03.e5","DOIUrl":"https://doi.org/10.4108/trans.sis.2013.01-03.e5","url":null,"abstract":"The recent usage control model (UCON) is a foundation for next-generation access control models with distinguishing properties of decision continuity and attribute mutability. Constraints in UCON are one of the most important components that have involved in the principle motivations of usage analysis and design. The importance of constraints associated with authorizations, obligations, and conditions in UCON has been recognized but modeling these constraints has not been received much attention. In this paper we use a de facto constraints specification language in software engineering to analyze the constraints in UCON model. We show how to represent constraints with object constraint language (OCL) and give out a formalized specification of UCON model which is built from basic constraints, such as authorization predicates, obligation actions and condition requirements. Further, we show the flexibility and expressive capability of this specified UCON model with extensive examples.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125615288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
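A flavor of such a constraint, and of attribute mutability, can be given with a toy example. The attribute names are hypothetical, and the invariant is rendered in Python rather than OCL; an OCL version of the same pre-authorization predicate would read roughly `context Usage inv: subject.credit >= object.price`.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    credit: int

@dataclass
class Obj:
    price: int

def pre_authorize(subject, obj):
    """Authorization predicate: the constraint must hold before usage."""
    return subject.credit >= obj.price

def consume(subject, obj):
    """Attribute mutability: usage updates the subject's attributes,
    so the authorization decision can change over continued usage."""
    if not pre_authorize(subject, obj):
        return False
    subject.credit -= obj.price
    return True
```

Decision continuity would correspond to re-evaluating `pre_authorize` during an ongoing usage session, not only at its start.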
Microdata protection in statistical databases has recently become a major societal concern. Micro aggregation for Statistical Disclosure Control (SDC) is a family of methods for protecting microdata from individual identification. Micro aggregation works by partitioning the microdata into groups of at least k records and then replacing the records in each group with the group centroid. This paper presents a clustering-based micro aggregation method that minimizes information loss. The proposed technique groups similar records together in a systematic way and then anonymizes each group with its centroid. The systematic clustering problem is defined and investigated, and an algorithm for it is developed. Experimental results show that our method achieves a reasonable advantage in both information loss and execution time over the most popular heuristic algorithm, Maximum Distance to Average Vector (MDAV).
{"title":"Systematic Clustering-Based Microaggregation for Statistical Disclosure Control","authors":"M. E. Kabir, Hua Wang","doi":"10.1109/NSS.2010.66","DOIUrl":"https://doi.org/10.1109/NSS.2010.66","url":null,"abstract":"Microdata protection in statistical databases has recently become a major societal concern. Micro aggregation for Statistical Disclosure Control (SDC) is a family of methods to protect microdata from individual identification. Micro aggregation works by partitioning the microdata into groups of at least k records and then replacing the records in each group with the centroid of the group. This paper presents a clustering-based micro aggregation method to minimize the information loss. The proposed technique adopts to group similar records together in a systematic way and then anonymized with the centroid of each group individually. The structure of systematic clustering problem is defined and investigated and an algorithm of the proposed problem is developed. Experimental results show that our method attains a reasonable dominance with respect to both information loss and execution time than the most popular heuristic algorithm called Maximum Distance to Average Vector (MDAV).","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124998711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
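The basic micro aggregation operation, partition into groups of at least k and replace with centroids, is easy to sketch on one-dimensional data. The paper's systematic clustering (and MDAV) choose groups far more carefully; the fixed sorted partition below is just the simplest possible instance, with a squared-error measure standing in for information loss.

```python
def microaggregate(records, k):
    """Minimal sketch of k-member micro aggregation on 1-D data:
    sort, cut into consecutive groups of k records (folding a short
    tail into the previous group so every group has >= k members),
    and replace every record with its group centroid."""
    order = sorted(range(len(records)), key=lambda i: records[i])
    out = [0.0] * len(records)
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k and start:
            group = order[start - k:]   # merge tail with previous group
        centroid = sum(records[i] for i in group) / len(group)
        for i in group:
            out[i] = centroid
    return out

def information_loss(records, masked):
    """Sum of squared deviations between original and masked values."""
    return sum((a - b) ** 2 for a, b in zip(records, masked))
```

Every masked group has at least k identical values, which is exactly the k-anonymity-style guarantee against individual identification.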
Reputation and proof-of-work systems have been outlined as methods that bot masters will soon use to defend their peer-to-peer botnets. These techniques are designed to prevent Sybil attacks, such as those that led to the downfall of the Storm botnet. To evaluate their effectiveness, a botnet employing these techniques was simulated, and the amount of resources required to stage a successful Sybil attack against it was measured. While the proof-of-work system was found to increase the resources required for a successful Sybil attack, the reputation system was found to lower the amount of resources required to disable the botnet.
{"title":"Overcoming Reputation and Proof-of-Work Systems in Botnets","authors":"A. White, Alan B. Tickle, A. Clark","doi":"10.1109/NSS.2010.65","DOIUrl":"https://doi.org/10.1109/NSS.2010.65","url":null,"abstract":"Reputation and proof-of-work systems have been outlined as methods bot masters will soon use to defend their peer-to-peer botnets. These techniques are designed to prevent sybil attacks, such as those that led to the downfall of the Storm botnet. To evaluate the effectiveness of these techniques, a botnet that employed these techniques was simulated, and the amount of resources required to stage a successful sybil attack against it measured. While the proof-of-work system was found to increase the resources required for a successful sybil attack, the reputation system was found to lower the amount of resources required to disable the botnet.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123034869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
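The proof-of-work defense rests on a hashcash-style puzzle that makes creating each Sybil identity computationally expensive. The sketch below shows the generic mechanism only; the paper's simulated botnet protocol, and its specific puzzle parameters, are not reproduced here.

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has at least
    `difficulty_bits` leading zero bits. Expected cost grows as
    2**difficulty_bits, which is what makes mass Sybil joins costly."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, so honest peers check joins cheaply."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry (expensive to solve, cheap to verify) is the entire point: raising `difficulty_bits` by one doubles the attacker's per-identity cost without affecting verifiers.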
Distributed denial of service (DDoS) attacks are a continuing critical threat to the Internet. Moving up from the lower layers, new application-layer DDoS attacks that use legitimate HTTP requests to overwhelm victim resources are harder to detect. The situation is even more serious when such attacks mimic, or occur during, a flash crowd event at a popular Website. In this paper, we present the design and implementation of CALD, an architectural extension that protects Web servers against various DDoS attacks masquerading as flash crowds. CALD provides real-time detection using mess tests, but differs from other systems that use similar methods. First, CALD uses a front-end sensor to monitor traffic that may contain DDoS attacks or flash crowds; an intense pulse in the traffic signals a possible anomaly, since such pulses are a basic property of both DDoS attacks and flash crowds. Once abnormal traffic is identified, the sensor sends an ATTENTION signal to activate the attack detection module. Second, CALD dynamically records the average request frequency of each source IP and checks the total mess extent. Theoretically, the mess extent of a DDoS attack is larger than that of a flash crowd. Thus, using parameters from the attack detection module, the filter can let legitimate requests through while stopping the attack traffic. Third, CALD can separate the security modules from the Web servers, so the core Web services keep maximum performance regardless of the DDoS harassment. In our experiments, traces from www.sina.com and www.taobao.com demonstrate the value of CALD.
{"title":"CALD: Surviving Various Application-Layer DDoS Attacks That Mimic Flash Crowd","authors":"S. Wen, W. Jia, Wei Zhou, Wanlei Zhou, Chuan Xu","doi":"10.1109/NSS.2010.69","DOIUrl":"https://doi.org/10.1109/NSS.2010.69","url":null,"abstract":"Distributed denial of service (DDoS) attack is a continuous critical threat to the Internet. Derived from the low layers, new application-layer-based DDoS attacks utilizing legitimate HTTP requests to overwhelm victim resources are more undetectable. The case may be more serious when such attacks mimic or occur during the flash crowd event of a popular Website. In this paper, we present the design and implementation of CALD, an architectural extension to protect Web servers against various DDoS attacks that masquerade as flash crowds. CALD provides real-time detection using mess tests but is different from other systems that use resembling methods. First, CALD uses a front-end sensor to monitor the traffic that may contain various DDoS attacks or flash crowds. Intense pulse in the traffic means possible existence of anomalies because this is the basic property of DDoS attacks and flash crowds. Once abnormal traffic is identified, the sensor sends ATTENTION signal to activate the attack detection module. Second, CALD dynamically records the average frequency of each source IP and check the total mess extent. Theoretically, the mess extent of DDoS attacks is larger than the one of flash crowds. Thus, with some parameters from the attack detection module, the filter is capable of letting the legitimate requests through but the attack traffic stopped. Third, CALD may divide the security modules away from the Web servers. As a result, it keeps maximum performance on the kernel web services, regardless of the harassment from DDoS. 
In the experiments, the records from www.sina.com and www.taobao.com have proved the value of CALD.","PeriodicalId":127173,"journal":{"name":"2010 Fourth International Conference on Network and System Security","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124418129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
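The second step, distinguishing an attack from a flash crowd by the "mess" of the per-source request distribution, can be illustrated with a stand-in statistic. The paper's exact mess-test definition is not given in the abstract, so the sketch below uses Shannon entropy as a hypothetical proxy: randomly spoofed or evenly scripted DDoS sources tend to look more uniform (higher entropy) than the skewed request mix of a genuine flash crowd.

```python
import math
from collections import Counter

def mess_extent(source_ips):
    """Stand-in for the paper's mess statistic: Shannon entropy (bits)
    of the per-source request frequency distribution."""
    counts = Counter(source_ips)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def looks_like_ddos(source_ips, threshold):
    """Flag traffic whose mess extent exceeds a tuned threshold;
    the threshold would come from the attack detection module."""
    return mess_extent(source_ips) > threshold
```

With such a statistic, the filter decision reduces to a threshold comparison per monitoring interval, cheap enough to run in front of the Web servers.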