Increasing Memory Efficiency of Hash-Based Pattern Matching for High-Speed Networks

Tomáš Fukač, J. Matoušek, J. Kořenek, Lukáš Kekely
{"title":"Increasing Memory Efficiency of Hash-Based Pattern Matching for High-Speed Networks","authors":"Tomás Fukac, J. Matoušek, J. Korenek, Lukás Kekely","doi":"10.1109/ICFPT52863.2021.9609859","DOIUrl":null,"url":null,"abstract":"Increasing speed of network links continuously pushes up requirements on the performance of network security and monitoring systems, including their typical representative and its core function: an intrusion detection system (IDS) and pattern matching. To allow the operation of IDS applications like Snort and Suricata in networks supporting throughput of 100Gbps or even more, a recently proposed pre-filtering architecture approximates exact pattern matching using hash-based matching of short strings that represent a given set of patterns. This architecture can scale supported throughput by adjusting the number of parallel hash functions and on-chip memory blocks utilized in the implementation of a hash table. Since each hash function can address every memory block, scaling throughput also increases the total capacity of the hash table. Nevertheless, the original architecture utilizes the available capacity of the hash table inefficiently. We therefore propose three optimization techniques that either reduce the amount of information stored in the hash table or increase its achievable occupancy. Moreover, we also design modifications of the architecture that enable resource-efficient utilization of all three optimization techniques together in synergy. Compared to the original pre-filtering architecture, combined use of the proposed optimizations in the 100Gbps scenario increases the achievable capacity for short strings by three orders of magnitude. It also reduces the utilization of FPGA logic resources to only a third.","PeriodicalId":376220,"journal":{"name":"2021 International Conference on Field-Programmable Technology (ICFPT)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Field-Programmable Technology (ICFPT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICFPT52863.2021.9609859","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The increasing speed of network links continuously raises the performance requirements on network security and monitoring systems, including their typical representative, the intrusion detection system (IDS), and its core function, pattern matching. To allow IDS applications such as Snort and Suricata to operate in networks with throughputs of 100 Gbps or more, a recently proposed pre-filtering architecture approximates exact pattern matching using hash-based matching of short strings that represent a given set of patterns. This architecture can scale the supported throughput by adjusting the number of parallel hash functions and on-chip memory blocks used to implement a hash table. Since each hash function can address every memory block, scaling the throughput also increases the total capacity of the hash table. Nevertheless, the original architecture utilizes the available capacity of the hash table inefficiently. We therefore propose three optimization techniques that either reduce the amount of information stored in the hash table or increase its achievable occupancy. Moreover, we design modifications of the architecture that enable resource-efficient use of all three optimization techniques together, in synergy. Compared to the original pre-filtering architecture in the 100 Gbps scenario, the combined use of the proposed optimizations increases the achievable capacity for short strings by three orders of magnitude and reduces the utilization of FPGA logic resources to only a third.
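
To make the pre-filtering idea concrete, the sketch below models it in software: each pattern is represented by a short string stored in a hash table probed by several independent hash functions, and every window of the input stream is checked against its candidate slots, with hits forwarded to an exact matcher for verification. This is only a minimal illustration under assumptions of ours, not the paper's FPGA design: the names (ShortStringPreFilter, SHORT_STRING_LEN, NUM_HASHES, NUM_BUCKETS, hash_str) and the FNV-1a style hash are illustrative choices, and, unlike the architecture described above, each hash function here owns its own block of buckets rather than being able to address every memory block.

// Minimal software sketch of hash-based pre-filtering with short strings
// (simplified model, not the paper's FPGA implementation).
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

constexpr std::size_t SHORT_STRING_LEN = 4;       // length of the representative short string
constexpr std::size_t NUM_HASHES       = 4;       // parallel hash functions (scales throughput)
constexpr std::size_t NUM_BUCKETS      = 1 << 12; // buckets per hash function (models on-chip blocks)

// Seeded FNV-1a variant standing in for the hardware hash functions.
static std::size_t hash_str(const std::string& s, std::uint32_t seed) {
    std::uint32_t h = 2166136261u ^ seed;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h % NUM_BUCKETS;
}

class ShortStringPreFilter {
public:
    ShortStringPreFilter() : slots_(NUM_HASHES * NUM_BUCKETS) {}

    // Place the short string representing one pattern into one of its candidate
    // slots (one per hash function). In the paper's architecture each hash
    // function can address every memory block; here each one owns its own block
    // of buckets to keep the sketch simple.
    bool add_pattern(const std::string& pattern) {
        const std::string key = pattern.substr(0, SHORT_STRING_LEN);
        for (std::uint32_t i = 0; i < NUM_HASHES; ++i) {
            std::string& slot = slots_[i * NUM_BUCKETS + hash_str(key, i)];
            if (slot.empty() || slot == key) { slot = key; return true; }
        }
        return false;  // all candidate slots occupied: insertion fails
    }

    // A window is a candidate if any of its candidate slots stores exactly that
    // short string. Candidates still go to an exact matcher for verification.
    bool is_candidate(const std::string& window) const {
        for (std::uint32_t i = 0; i < NUM_HASHES; ++i)
            if (slots_[i * NUM_BUCKETS + hash_str(window, i)] == window) return true;
        return false;
    }

private:
    std::vector<std::string> slots_;  // flat array: NUM_HASHES blocks of NUM_BUCKETS slots
};

int main() {
    ShortStringPreFilter filter;
    filter.add_pattern("malicious-payload-signature");

    // Slide a window over the packet payload and report candidate offsets.
    const std::string payload = "xxmalicious-payload-signaturexx";
    for (std::size_t off = 0; off + SHORT_STRING_LEN <= payload.size(); ++off)
        if (filter.is_candidate(payload.substr(off, SHORT_STRING_LEN)))
            std::cout << "candidate match at offset " << off << "\n";
    return 0;
}

In the FPGA architecture the abstract describes, the per-hash-function lookups run in parallel against on-chip memory blocks, which is why adding hash functions scales the supported throughput and, because each function can address every block, also the total capacity of the hash table.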