
Latest Publications from the 2007 IEEE International Conference on Granular Computing (GRC 2007)

Application of Quantum Genetic Algorithm on Finding Minimal Reduct
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.87
M. Qadir, M. Fahad, Syed Adnan Hussain Shah
The Quantum Genetic Algorithm (QGA) is currently a promising area in the field of computational intelligence. Although several genetic algorithms for finding a minimal attribute reduct have been proposed, most of them have defects. On the other hand, the quantum genetic algorithm has some advantages, such as strong parallelism, fast and effective search capability, and a small population size. In this paper, we propose a QGA to find a minimal reduct based on the distinction table. The algorithm can obtain the best solution with one chromosome in a short time. Two experiments show that our algorithm improves on the GA from four points of view: population size, parallelism, computing time and search capability.
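The abstract does not spell out the distinction-table encoding or the rotation-gate schedule, so the sketch below is only a minimal quantum-inspired GA under common assumptions: each attribute is one qubit angle whose squared sine gives the probability of selecting that attribute, fitness rewards covering every row of a toy distinction table with as few attributes as possible, and a small rotation step pulls the angles toward the best chromosome observed so far. The table, names and parameters are illustrative, not taken from the paper.

```python
import math
import random

# Toy distinction table: each row lists the attributes that distinguish one
# pair of objects; a reduct must hit every row (hypothetical example data).
DISTINCTION_TABLE = [{0, 2}, {1, 2}, {0, 3}, {2, 3}, {1, 4}]
NUM_ATTRS = 5

def observe(qubits):
    """Collapse each qubit angle into a 0/1 attribute selection."""
    return [1 if random.random() < math.sin(theta) ** 2 else 0 for theta in qubits]

def fitness(bits):
    """Reward covering every distinction row first, then prefer fewer attributes."""
    selected = {i for i, b in enumerate(bits) if b}
    covered = sum(1 for row in DISTINCTION_TABLE if row & selected)
    return covered * NUM_ATTRS - len(selected)

def rotate(qubits, bits, best_bits, step=0.05 * math.pi):
    """Nudge each qubit angle toward the corresponding bit of the best solution."""
    new_angles = []
    for theta, b, b_best in zip(qubits, bits, best_bits):
        delta = step * (1 if b_best > b else -1 if b_best < b else 0)
        new_angles.append(min(max(theta + delta, 0.0), math.pi / 2))
    return new_angles

def qga_minimal_reduct(generations=200):
    qubits = [math.pi / 4] * NUM_ATTRS          # equal superposition per attribute
    best_bits = observe(qubits)
    for _ in range(generations):
        bits = observe(qubits)
        if fitness(bits) > fitness(best_bits):
            best_bits = bits
        qubits = rotate(qubits, bits, best_bits)
    return {i for i, b in enumerate(best_bits) if b}

if __name__ == "__main__":
    print("candidate minimal reduct:", qga_minimal_reduct())
```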
Citations: 14
Tree Mining Application to Matching of Heterogeneous Knowledge Representations
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.134
F. Hadzic, T. Dillon, E. Chang
Matching of heterogeneous knowledge sources is of increasing importance in areas such as scientific knowledge management, e-commerce, enterprise application integration, and many emerging Semantic Web applications. Given the need for knowledge sharing and reuse in these fields, knowledge coming from different organizations in the same domain commonly has to be matched. We propose a knowledge matching method based on our previously developed tree mining algorithms for extracting frequently occurring subtrees from a tree-structured database such as XML. Using this method, the common structure among the different representations can be extracted automatically. Our focus is on knowledge matching at the structural level, and we use a set of example XML schema documents from the same domain to evaluate the method. We discuss some important issues that arise when applying tree mining algorithms to the detection of common document structures. The experiments demonstrate the usefulness of the approach.
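The paper relies on the authors' own tree mining algorithms, which are not reproduced here; as a rough stand-in, the sketch below extracts root-to-node label paths from two XML fragments and reports the paths they share, which is the simplest form of structural matching. The schema strings and function names are illustrative.

```python
import xml.etree.ElementTree as ET

def label_paths(xml_text):
    """Collect the set of root-to-node label paths of an XML tree."""
    root = ET.fromstring(xml_text)
    paths = set()

    def walk(node, prefix):
        path = prefix + (node.tag,)
        paths.add(path)
        for child in node:
            walk(child, path)

    walk(root, ())
    return paths

def common_structure(xml_a, xml_b):
    """Return the label paths shared by both documents (a crude structural match)."""
    return label_paths(xml_a) & label_paths(xml_b)

if __name__ == "__main__":
    schema_a = "<customer><name/><address><city/><zip/></address></customer>"
    schema_b = "<customer><name/><address><city/><country/></address></customer>"
    for path in sorted(common_structure(schema_a, schema_b)):
        print("/".join(path))
```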
Citations: 4
Possibility Theory-Based Approach to Spam Email Detection
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.123
D. Tran, Wanli Ma, D. Sharma, Thien Huu Nguyen
Most current spam email detection systems use keywords in a blacklist to detect spam emails. However, these keywords can be written as misspellings, for example "baank", "ba-nk" and "bankk" instead of "bank". Moreover, misspellings change from time to time, and hence a spam email detection system needs to constantly update the blacklist to detect spam emails containing such misspellings. However, it is impossible to predict all possible misspellings of a given keyword in order to add them to the blacklist. We present a possibility theory-based approach to spam email detection that solves this problem. We consider every keyword in the blacklist, along with its misspellings, as a fuzzy set and propose a possibility function. This function is used to calculate a possibility score for an unknown email. Using a proposed if-then rule and this score, we can decide whether or not this unknown email is spam. Experimental results are also presented.
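The abstract does not define the possibility function itself, so the following is a minimal sketch under the assumption that string similarity can serve as the membership degree of a token in a keyword's fuzzy set, with the possibility score taken as the maximum membership over all token-keyword pairs and an if-then rule thresholding that score. The blacklist, threshold and names are illustrative.

```python
from difflib import SequenceMatcher

BLACKLIST = ["bank", "viagra", "lottery"]   # illustrative blacklist
THRESHOLD = 0.75                            # illustrative decision threshold

def membership(token, keyword):
    """Fuzzy membership of a token in the keyword's fuzzy set (string similarity)."""
    return SequenceMatcher(None, token.lower().replace("-", ""), keyword).ratio()

def possibility_score(email_text):
    """Possibility that the email contains some blacklisted keyword (max over pairs)."""
    tokens = email_text.split()
    if not tokens:
        return 0.0
    return max(membership(t, k) for t in tokens for k in BLACKLIST)

def is_spam(email_text):
    """If the possibility score exceeds the threshold, then classify as spam."""
    return possibility_score(email_text) >= THRESHOLD

if __name__ == "__main__":
    print(is_spam("please verify your baank account"))   # misspelled keyword -> True
    print(is_spam("meeting notes attached"))              # -> False
```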
Citations: 8
Fuzzy Vector Quantization for Network Intrusion Detection
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.124
D. Tran, Wanli Ma, D. Sharma, Thien Huu Nguyen
This paper considers anomalous network traffic detection using different network feature subsets. Fuzzy c-means vector quantization is used to train network attack models, and the minimum distortion rule is applied to detect network attacks. We also demonstrate the effectiveness and ineffectiveness of finding anomalies by looking at the network data alone. Experiments performed on the KDD CUP 1999 dataset show that time-based traffic features over the last two-second time window should be selected to obtain the highest detection rates.
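A minimal sketch of the two ingredients named in the abstract, fuzzy c-means codebook training and the minimum distortion rule, applied to synthetic two-dimensional feature vectors; the KDD CUP 1999 features and preprocessing are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Train a fuzzy c-means codebook: return c codevectors (cluster centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers

def min_distortion_label(x, codebooks):
    """Minimum distortion rule: assign x to the class whose codebook is closest."""
    best_label, best_dist = None, np.inf
    for label, centers in codebooks.items():
        dist = np.linalg.norm(centers - x, axis=1).min()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))    # synthetic "normal" traffic
    attack = rng.normal(loc=5.0, scale=1.0, size=(200, 2))    # synthetic "attack" traffic
    codebooks = {"normal": fuzzy_c_means(normal), "attack": fuzzy_c_means(attack)}
    print(min_distortion_label(np.array([4.8, 5.2]), codebooks))   # -> "attack"
```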
Citations: 27
Structured Writing with Granular Computing Strategies
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.15
Yiyu Yao
Granular computing unifies structured thinking, structured problem solving and structured information processing. In order to see the flexibility and universal applicability of this trinity model, we must demonstrate its effectiveness in solving real-world problems. In this paper, we apply the basic ideas, principles, and strategies of granular computing to the specific problem-solving task known as structured writing. Results from languages, human knowledge organization, rhetoric, writing, computer programming, and mathematical proof are summarized and cast in a setting for structured writing. The results bring new insights into granular computing.
Citations: 24
An Incremental Algorithm for Mining Default Definite Decision Rules from Incomplete Decision Tables
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.57
Chen Wu, Xiao-lin Hu, Xiajiong Shen, Xiaodan Zhang, Yi Pan
This paper puts forward an incremental algorithm for extracting the default definite decision rules we previously proposed from an incomplete decision table, using semi-equivalence classes derived from a semi-equivalence relation and their meet and join blocks on the universe. After default definite decision rules and constraint rules are acquired from the incomplete decision table, the incremental algorithm is used to modify them when new data is added to the incomplete information table. It does not need to process the original dataset repeatedly but only updates the related data and rules. It is therefore effective in performing mining tasks on incomplete decision tables. A procedure for mining and revising rules is illustrated through an example.
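The semi-equivalence relation and the exact rule format are defined in the paper itself, so the following is only a rough sketch of the incremental idea under common rough-set assumptions: '*' marks a missing value, two objects are similar when their known condition values never disagree, a definite rule is emitted when an object's whole similarity class shares its decision, and adding a new object updates only the classes it touches instead of reprocessing the whole table. All names and data are illustrative.

```python
# A rough sketch of the incremental idea, assuming '*' marks a missing value and
# that two objects are "similar" when their known condition values never disagree.
MISSING = "*"

def similar(row_a, row_b):
    return all(a == b or MISSING in (a, b) for a, b in zip(row_a, row_b))

class IncompleteDecisionTable:
    def __init__(self):
        self.rows = []          # list of (condition_tuple, decision)
        self.classes = []       # similarity class (set of row indices) per row

    def add(self, conditions, decision):
        """Incrementally add one object: only touch the classes it is similar to."""
        new_idx = len(self.rows)
        self.rows.append((tuple(conditions), decision))
        new_class = {new_idx}
        for idx, (cond, _) in enumerate(self.rows[:-1]):
            if similar(cond, conditions):
                new_class.add(idx)
                self.classes[idx].add(new_idx)   # update only the affected classes
        self.classes.append(new_class)

    def definite_rules(self):
        """Emit a rule for each object whose whole similarity class shares its decision."""
        rules = []
        for idx, (cond, dec) in enumerate(self.rows):
            if all(self.rows[j][1] == dec for j in self.classes[idx]):
                rules.append((cond, dec))
        return rules

if __name__ == "__main__":
    t = IncompleteDecisionTable()
    t.add(("high", MISSING, "yes"), "flu")
    t.add(("high", "cough", MISSING), "flu")
    t.add(("low", "none", "no"), "healthy")
    print(t.definite_rules())
```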
Citations: 10
Fuzzy Logic Approach to Identification of Cellular Quantity by Ultrasonic System
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.68
S. Yamaguchi, K. Nagamune, K. Oe, Syoji Kobashi, K. Kondo, Y. Hata
This paper introduces an ultrasound identification system that uses fuzzy inference to estimate the cellular quantity of artificial cultured bone. In our method, we first measure the ultrasound wave. Second, we obtain two characteristics: the amplitude and the frequency. The amplitude is calculated as the peak-to-peak value, and the frequency is calculated from the frequency spectrum of the transfer function using the cross-spectrum method. Our fuzzy inference system estimates the cellular quantity from these values. Experimental results show that our identification system can evaluate the cellular quantity in cultured bone with high accuracy.
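A minimal sketch of the processing chain described in the abstract, under several assumptions: a synthetic echo stands in for the measured wave, the dominant FFT peak stands in for the cross-spectrum frequency estimate, and the fuzzy inference uses illustrative triangular memberships and rule outputs rather than the paper's calibrated ones.

```python
import numpy as np

def peak_to_peak(signal):
    return signal.max() - signal.min()

def dominant_frequency(signal, fs):
    """Dominant frequency from the magnitude spectrum (stand-in for cross-spectrum)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[spectrum[1:].argmax() + 1]          # skip the DC bin

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_cell_quantity(amplitude, frequency):
    """Weighted-average fuzzy rules with illustrative ranges and rule outputs."""
    rules = [
        (min(tri(amplitude, 0.0, 0.5, 1.0), tri(frequency, 0.5e6, 1.0e6, 1.5e6)), 1e5),
        (min(tri(amplitude, 0.5, 1.0, 1.5), tri(frequency, 1.0e6, 1.5e6, 2.0e6)), 5e5),
        (min(tri(amplitude, 1.0, 1.5, 2.0), tri(frequency, 1.5e6, 2.0e6, 2.5e6)), 1e6),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

if __name__ == "__main__":
    fs = 10e6                                        # 10 MHz sampling (synthetic)
    t = np.arange(0, 1e-4, 1.0 / fs)
    wave = 0.6 * np.sin(2 * np.pi * 1.2e6 * t)       # synthetic received echo
    amp, freq = peak_to_peak(wave), dominant_frequency(wave, fs)
    print(f"amplitude={amp:.2f}, frequency={freq/1e6:.2f} MHz,"
          f" estimated cells={estimate_cell_quantity(amp, freq):.0f}")
```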
Citations: 0
Linguistic Summaries of Static and Dynamic Data: Computing with Words and Granularity
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.161
J. Kacprzyk
Summary form only given. First, we briefly advocate the need for natural-language-based methods in data mining, notably when domain experts have limited knowledge of modern information technology tools. We present some approaches to the linguistic summarization of sets of (numeric and/or textual) data, and show that the fuzzy logic based approach by Yager (1982), notably in its extended and implementable version of Kacprzyk and Yager (2001) and Kacprzyk, Yager and Zadrozny (2000), offers simplicity and intuitive appeal, in particular in its new setting by Kacprzyk and Zadrozny (2005) based on Zadeh's computing with words and protoforms.
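As a concrete illustration of the Yager (1982) calculus mentioned above, the sketch below computes the truth value of a simple summary of the form "most of the data are S" as T = mu_Q(mean of mu_S). The membership functions for the quantifier "most" and the summarizer "high salary", and the data, are illustrative.

```python
def mu_most(p):
    """Fuzzy quantifier 'most' over a proportion p in [0, 1] (illustrative shape)."""
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

def mu_high_salary(x):
    """Summarizer 'high salary' membership (illustrative ramp between 40k and 80k)."""
    return min(1.0, max(0.0, (x - 40_000) / 40_000))

def truth_of_summary(data, summarizer, quantifier):
    """Yager's truth value for 'Q of the data are S': T = mu_Q(mean of mu_S)."""
    proportion = sum(summarizer(x) for x in data) / len(data)
    return quantifier(proportion)

if __name__ == "__main__":
    salaries = [35_000, 52_000, 61_000, 75_000, 90_000, 88_000]
    t = truth_of_summary(salaries, mu_high_salary, mu_most)
    print(f"T('most salaries are high') = {t:.2f}")
```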
Citations: 3
Generating Attack Scenarios with Causal Relationship
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.117
Yu-Chin Cheng, Chien-Hung Chen, Chung-Chih Chiang, Jun-Wei Wang, C. Laih
With the coming of the information era, the Internet has developed rapidly and offers more and more services. However, intrusions, viruses and worms have grown along with the Internet, spreading widely all over the world across high-speed networks. Although many kinds of intrusion detection systems (IDSs) have been developed, they have some disadvantages in that they focus on low-level attacks or anomalies and raise alerts independently. In this paper, we give a formal description of attack patterns, attack transition states and attack scenarios. We propose a system architecture to generate an attack scenario database correctly and completely. We first classify and extract attack patterns, and then correlate attack patterns by matching their pre- and post-conditions. Moreover, an approach, attack scenario generation with causal relationship (ASGCR), is proposed to build an attack scenario database. Finally, we present the combination of our attack scenario database with a security operation center (SOC) to implement the related components concerning alert integration and correlation. It is shown that our method is better than CAML [4] since we can generate more attack scenarios effectively and correctly to help system managers maintain network security.
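A minimal sketch of pre/post-condition chaining as described in the abstract: each attack pattern carries precondition and postcondition sets, an earlier alert is causally related to a later one when its postconditions intersect the later pattern's preconditions, and chaining such links over a time-ordered alert stream yields candidate scenarios. The pattern names and conditions are illustrative, not the paper's ASGCR database.

```python
# Illustrative attack patterns: name -> (preconditions, postconditions).
PATTERNS = {
    "port_scan":    (set(),                 {"knows_open_ports"}),
    "ftp_exploit":  ({"knows_open_ports"},  {"user_shell"}),
    "local_root":   ({"user_shell"},        {"root_shell"}),
    "install_ddos": ({"root_shell"},        {"ddos_agent"}),
}

def causally_related(earlier, later):
    """Earlier alert prepares the later one if its postconditions meet a precondition."""
    post = PATTERNS[earlier][1]
    pre = PATTERNS[later][0]
    return bool(post & pre)

def build_scenarios(alert_sequence):
    """Chain time-ordered alerts into scenarios via pre/post-condition matching."""
    scenarios = []
    for alert in alert_sequence:
        extended = False
        for scenario in scenarios:
            if causally_related(scenario[-1], alert):
                scenario.append(alert)
                extended = True
        if not extended:
            scenarios.append([alert])
    return scenarios

if __name__ == "__main__":
    alerts = ["port_scan", "ftp_exploit", "local_root", "install_ddos"]
    for s in build_scenarios(alerts):
        print(" -> ".join(s))
```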
Citations: 10
Speed-up Technique for Association Rule Mining Based on an Artificial Life Algorithm
Pub Date: 2007-11-02 DOI: 10.1109/GrC.2007.103
Masaaki Kanakubo, M. Hagiwara
Association rule mining is one of the most important issues in data mining. Apriori computation schemes greatly reduce the computation time by pruning the candidate item-sets. However, a large amount of computation time is still required when the data are dense and the volume of data is large. With apriori methods, the problem of becoming incomputable cannot be avoided when the total number of items is large. On the other hand, bottom-up approaches such as artificial life approaches are the opposite of top-down approaches that search over all transactions, and may provide new ways of breaking away from the completeness of the searches in conventional algorithms. Here, an artificial life data mining technique is proposed in which one transaction is considered as one individual, and association rules are accumulated through the interaction of randomly selected individuals. The proposed algorithm is compared with other methods on a large-scale real dataset, and it is verified that its performance is greatly superior to that of a method using virtually divided transaction data and to that of an apriori method based on a sampling approach, thus demonstrating its usefulness.
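The abstract does not give the individuals' interaction rules, so the following is a rough sketch of the bottom-up idea: each transaction is an individual, randomly chosen pairs of individuals meet, the items they share reinforce candidate item pairs, and pairs whose accumulated share clears a threshold are reported. Data, parameters and names are illustrative.

```python
import random
from collections import Counter
from itertools import combinations

TRANSACTIONS = [                       # toy market-basket data
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"milk", "bread", "butter", "beer"},
]

def alife_frequent_pairs(transactions, encounters=2000, min_share=0.2, seed=42):
    """Accumulate item pairs seen when transaction 'individuals' randomly interact."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(encounters):
        a, b = rng.sample(transactions, 2)      # two individuals meet
        for pair in combinations(sorted(a & b), 2):
            counts[pair] += 1                   # their shared items reinforce a candidate
    total = sum(counts.values()) or 1
    return {pair: c / total for pair, c in counts.items() if c / total >= min_share}

if __name__ == "__main__":
    for pair, share in sorted(alife_frequent_pairs(TRANSACTIONS).items()):
        print(pair, round(share, 3))
```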
Citations: 5