
Latest publications from the 2011 Sixth International Conference on Digital Information Management

Expert finding and query answering for Collaborative Inter-Organizational system by using Rule Responder
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093344
R. Tang, S. Fong, S. Sarasvady
A collaborative inter-organizational system (C-IOS) is defined as an information technology-based system that engages multiple business partners in achieving common value-added goals. Many papers in the literature have addressed techniques for collaborative agents, ranging from basic information exchange to sophisticated negotiation. Specifically, a C-IOS requires two important collaborative tasks, namely Expert Finding (EF) and Query Answering (QA). These two tasks facilitate supply-chain mediation and possibly subsequent procurement negotiation. EF concerns finding or match-making the right personnel in an organization to serve as a committee member fulfilling a part of the job. QA initially screens whether the required resources and commitments are potentially available. The two tasks are supposed to be executed prior to any further collaboration, and the communication crosses organizations. This paper contributes a design of a C-IOS that supports EF and QA for inter-organizational collaboration. The underlying technical framework is Rule Responder, a powerful tool for creating virtual organizations as multi-agent systems that support collaborative teams on the Semantic Web. A use case of hosting an academic conference among different organizations illustrates the proposed concepts.
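A toy sketch of the two tasks the abstract names, Expert Finding and Query Answering, is shown below. The paper itself realizes these as Rule Responder rules over a Semantic Web multi-agent setup; the plain-Python functions and all data here are hypothetical illustrations of the matching idea only.

```python
# Hypothetical sketch of Expert Finding (EF): match-make personnel against a
# committee role by required skills and availability, and of Query Answering (QA):
# screen whether requested resources are potentially available.

def find_experts(candidates, required_skills, min_overlap=2):
    """Return candidates whose skills overlap the role requirements and who are available."""
    matches = []
    for person in candidates:
        overlap = required_skills & person["skills"]
        if person["available"] and len(overlap) >= min_overlap:
            matches.append((person["name"], sorted(overlap)))
    return matches

def answer_query(resources, requested):
    """QA screen: does the organization potentially have the requested resources?"""
    return all(resources.get(item, 0) >= qty for item, qty in requested.items())

if __name__ == "__main__":
    candidates = [
        {"name": "alice", "skills": {"semantic web", "rules", "logistics"}, "available": True},
        {"name": "bob", "skills": {"databases"}, "available": True},
    ]
    print(find_experts(candidates, {"semantic web", "rules"}))        # [('alice', [...])]
    print(answer_query({"rooms": 3, "projectors": 2}, {"rooms": 2}))  # True
```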
Citations: 1
Web service with criteria: Extending WSDL
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093332
N. Parimala, Anu Saini
WSDL is used to describe the interface of a service in XML format. The interface describes functional as well as non-functional properties. We are concerned with specifying 'criteria' as a non-functional property of a web service. For this we have extended WSDL to X-WSDL. In order to add criteria information, we extend the WSDL (Web Service Definition Language) schema with a new element 'criteriaservice', made available in a new namespace. Using this 'criteriaservice' element, it is possible to specify the criteria along with a service in an X-WSDL document. The WSDL document is also extended by adding new attributes 'criteria name' and 'description' to the service element, so that the criteria can be specified alongside the service in an X-WSDL document. The criteria are specified by the user when invoking a service. As a result, we provide support for discovering a more appropriate service according to the user's requirements.
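The sketch below illustrates the kind of extension the abstract describes: a 'criteriaservice' element in its own namespace plus criteria attributes on the service element. The namespace URI, element layout and attribute names are assumptions made for illustration, not the authors' published X-WSDL schema.

```python
# Rough illustration of extending a WSDL service description with criteria
# information; the xwsdl namespace and attribute names below are hypothetical.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
XWSDL_NS = "http://example.org/x-wsdl"  # hypothetical namespace for the extension

ET.register_namespace("wsdl", WSDL_NS)
ET.register_namespace("xwsdl", XWSDL_NS)

definitions = ET.Element(f"{{{WSDL_NS}}}definitions")
service = ET.SubElement(definitions, f"{{{WSDL_NS}}}service", {"name": "BookSearchService"})

# Extended attributes on the service element (names assumed from the abstract).
service.set(f"{{{XWSDL_NS}}}criteriaName", "responseTime")
service.set(f"{{{XWSDL_NS}}}description", "maximum acceptable response time in ms")

# New 'criteriaservice' element carrying a criterion the client can specify.
criteria = ET.SubElement(service, f"{{{XWSDL_NS}}}criteriaservice")
criteria.set("name", "responseTime")
criteria.set("value", "200")

print(ET.tostring(definitions, encoding="unicode"))
```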
Citations: 17
ICT + PBL = holistic learning solution: UTeM's experience
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093355
F. Shahbodin, M. Yusoff, C. K. Mohd
This paper highlights how ICT can be integrated into the process of teaching and learning in the Problem Based Learning (PBL) environment. The main focus is on integrating ICT components such as multimedia and internet technologies as tools for the PBL learning environment, and on utilizing the PBL approach for delivering instruction in the teaching and learning process at UTeM. This paper also shares findings on the effectiveness of PBLAssess, which was developed in this study. Fifty-six respondents (second-year students) enrolled in the Human Computer Interaction course were selected for the study. Two research instruments were developed to evaluate students' performance and preferences: a set of questionnaires and a prototype known as PBLAssess. Further, some of the current work on integrating ICT and the PBL learning environment is also shared. Understanding both the current state of the art of PBL and its future prospects is the key issue in setting an agenda for future research and development in PBL.
Citations: 3
Programming for evaluating strip layout of progressive dies
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093334
A. C. Lin, Ho Minh Tuan, Dean K. Sheu
A progressive die is an effective tool for efficient and economical production of sheet metal parts in large quantities. Nowadays, progressive die designers still spend much of their time choosing better layouts among the feasible ones. This study employs Pro/Web.Link, Hyper Text Markup Language (HTML) and JavaScript to develop an application that helps evaluate strip layouts automatically in the Pro/Engineer software environment. The paper proposes solutions for calculating the total evaluation score of a strip layout based on four factors: a station number factor, a moment balancing factor, a strip stability factor and a feed height factor.
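A minimal sketch of combining the four factors into a total evaluation score is given below. The weights, the 0-1 normalization and the example layouts are assumptions for the sake of illustration; the paper defines its own scoring of each factor.

```python
# Hypothetical weighted-sum scoring of a strip layout from the four factors named above.

FACTOR_WEIGHTS = {              # assumed relative importance of each factor
    "station_number": 0.3,
    "moment_balancing": 0.3,
    "strip_stability": 0.2,
    "feed_height": 0.2,
}

def total_evaluation_score(factor_scores):
    """Weighted sum of normalized factor scores (each expected in [0, 1])."""
    missing = set(FACTOR_WEIGHTS) - set(factor_scores)
    if missing:
        raise ValueError(f"missing factor scores: {missing}")
    return sum(FACTOR_WEIGHTS[name] * factor_scores[name] for name in FACTOR_WEIGHTS)

layout_a = {"station_number": 0.8, "moment_balancing": 0.6, "strip_stability": 0.9, "feed_height": 0.7}
layout_b = {"station_number": 0.7, "moment_balancing": 0.9, "strip_stability": 0.6, "feed_height": 0.8}
best = max([("A", layout_a), ("B", layout_b)], key=lambda kv: total_evaluation_score(kv[1]))
print("preferred layout:", best[0])
```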
Citations: 1
Context-aware SQA e-learning system
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093327
Nada Bajnaid, R. Benlamri, B. Cogan
In this paper, we propose an ontological design for developing a context-aware e-learning system that supports learners in developing Software Quality Assurance (SQA) compliant software. The learning process is driven by the type of software product the learner is dealing with, as well as its SQA requirements and the corresponding SQA techniques and procedures. The paper presents a global ontology design that embeds knowledge related to the learner, the SQA domain in general, and product-based SQA requirements and procedures. Reasoning tools are provided to infer knowledge that can deliver more modular and just-in-time contextual SQA resources for the task at hand. A learning scenario illustrates the system's ability to deal with the SQA requirements facing the learner in the software development process.
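The toy example below illustrates the kind of context-driven selection the abstract describes: given the product type a learner works on, infer which SQA techniques (and hence learning resources) apply. The product types, techniques, resources and rules are invented for illustration; the paper encodes this knowledge in an ontology processed by a reasoner, not as Python dictionaries.

```python
# Hypothetical rule table mapping product types to applicable SQA techniques,
# and techniques to learning resources; all identifiers are made up.

SQA_RULES = {
    "safety_critical": ["formal inspection", "MC/DC coverage testing", "traceability audit"],
    "web_application": ["security review", "usability testing", "regression testing"],
    "embedded": ["static analysis", "hardware-in-the-loop testing"],
}

LEARNING_RESOURCES = {
    "formal inspection": "module-inspection-101",
    "security review": "module-owasp-basics",
    "regression testing": "module-regression-suites",
}

def recommend_resources(product_type, learner_completed):
    """Return learning resources for SQA techniques the learner has not yet covered."""
    techniques = SQA_RULES.get(product_type, [])
    return [LEARNING_RESOURCES[t] for t in techniques
            if t in LEARNING_RESOURCES and t not in learner_completed]

print(recommend_resources("web_application", learner_completed={"usability testing"}))
```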
Citations: 5
The Framy user interface for visually-impaired users
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093335
Gabriele Di Chiara, L. Paolino, Marco Romano, M. Sebillo, G. Tortora, G. Vitiello, A. Ginige
We have developed a multimodal interface, Framy, to effectively display large 2-dimensional data sets such as geographical data on a mobile interface. We have now extended it to be used by visually-impaired users. A pilot study that we conducted, together with interviews with a group of potential stakeholders, helped us detect some critical problems with the current interface, derive further requirements specific to visually impaired mobile users, and re-design Framy accordingly.
Citations: 11
MHPSO: A new method to enhance the Particle Swarm Optimizer
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093361
Bafrin Zarei, R. Ghanbarzadeh, Poorya Khodabande, Hadi Toofani
The widespread and increasing application of Particle Swarm Optimizer (PSO) algorithms in both theoretical and practical fields calls for further consideration and new developments to improve their efficiency. To achieve this purpose, this paper introduces a new method that enhances the convergence rate and reduces the computational time of PSO by combining a PSO that includes a mutation concept (MPSO) with the Hierarchical Particle Swarm Optimizer (HPSO). The new approach is therefore called MHPSO: a composition of MPSO and HPSO that act simultaneously in the optimization process. In addition, some benchmark examples are analyzed using the presented method; the results are compared to other procedures and illustrate the better outcomes and high performance of MHPSO.
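To make the hybrid idea concrete, the sketch below shows a basic particle swarm with an added mutation step on a simple benchmark function. It is not the authors' MHPSO: the hierarchical (HPSO) component and their exact update rules are not reproduced, and all parameters are arbitrary.

```python
# Minimal PSO-with-mutation sketch (MPSO-like); hierarchical structure omitted.
import random

def sphere(x):                      # simple benchmark objective to minimize
    return sum(v * v for v in x)

def mpso(objective, dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.05):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                if random.random() < pm:        # mutation: random reset of one coordinate
                    pos[i][d] = random.uniform(-5, 5)
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)[:]
    return gbest, objective(gbest)

best, value = mpso(sphere)
print("best value found:", value)
```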
Citations: 4
Classification of Privacy-preserving Distributed Data Mining protocols
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093356
Zhuojia Xu, X. Yi
Recently, a new research area named Privacy-preserving Distributed Data Mining (PPDDM) has emerged. It aims at solving the following problem: a number of participants want to jointly conduct a data mining task based on the private data sets held by each participant. This problem setting has captured the attention and interest of researchers, practitioners and developers from both the data mining and information security communities, who have made great progress in designing and developing solutions for this scenario. However, researchers and practitioners now face the challenge of devising a standard for synthesizing and evaluating various PPDDM protocols, because they are confronted with the excessive number of techniques developed so far. In this paper, we put forward a framework to synthesize and characterize existing PPDDM protocols so as to provide a standard and systematic approach to understanding PPDDM-related problems, analyzing PPDDM requirements and designing effective and efficient PPDDM protocols.
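As a small example of the kind of building block such a framework would classify, the classic "secure sum" protocol lets parties learn the total of their private values (e.g. item counts for a joint frequency computation) without revealing individual inputs. The toy version below simulates the ring of parties in a single process and is not drawn from the paper itself.

```python
# Toy secure-sum simulation: the initiator masks its value with a random offset,
# each party adds its own value modulo `modulus`, and the initiator removes the
# offset at the end, so no party sees another's raw input.
import random

def secure_sum(private_values, modulus=10**9):
    offset = random.randrange(modulus)
    running = (offset + private_values[0]) % modulus
    for v in private_values[1:]:
        running = (running + v) % modulus      # each party only sees a masked total
    return (running - offset) % modulus

parties = [12, 30, 7, 51]                      # private counts held by four participants
assert secure_sum(parties) == sum(parties)
print("joint count:", secure_sum(parties))
```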
Citations: 30
Minimal dataset for Network Intrusion Detection Systems via dimensionality reduction
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093368
Jean-Pierre Nziga
Network Intrusion Detection Systems (NIDS) monitor internet traffic to detect malicious activities including, but not limited to, denial of service attacks, network access by unauthorized users, attempts to gain additional privileges, and port scans. The amount of data that must be analyzed by NIDS is very large. Prior studies developed feature selection and feature extraction techniques to reduce the size of the data, but none has focused on finding exactly by how much the dataset should be reduced. Dimensionality reduction is a field of machine learning that consists of mapping high-dimensional data into a lower dimension while preserving important features of the original dataset. Dimensionality reduction techniques have been used to reduce the amount of data in applications such as speech signals, digital photographs, fMRI scans, DNA microarrays and hyperspectral data. The purpose of this paper is to find the minimal amount of data required for successful intrusion detection. This evaluation is necessary to improve the efficiency of NIDS in identifying existing attack patterns and recognizing new intrusions in real time. Two dimensionality reduction techniques are used: one linear technique (Principal Component Analysis) and one non-linear technique (Multidimensional Scaling). The data is then submitted to two classification algorithms, J48 (C4.5) and Naïve Bayes. The study was conducted using the KDD Cup 99 data. Experimental results show optimal performance with reduced datasets of 4 dimensions for J48 and 12 dimensions for Naïve Bayes.
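A hedged sketch of this kind of pipeline is shown below: reduce the feature space with PCA, then train two classifiers on the reduced data. A synthetic dataset stands in for KDD Cup 99 (loading and encoding the real records is omitted), scikit-learn's CART-style DecisionTreeClassifier stands in for Weka's J48 (C4.5), and MDS, the paper's non-linear alternative, is left out for brevity.

```python
# PCA + classification sketch on a synthetic stand-in for the KDD Cup 99 features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=41, n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Dimension choices mirror the abstract: 4 components for the tree, 12 for Naive Bayes.
for name, clf, n_components in [("decision tree", DecisionTreeClassifier(random_state=0), 4),
                                ("naive Bayes", GaussianNB(), 12)]:
    pca = PCA(n_components=n_components).fit(X_train)
    clf.fit(pca.transform(X_train), y_train)
    acc = accuracy_score(y_test, clf.predict(pca.transform(X_test)))
    print(f"{name} with {n_components} principal components: accuracy = {acc:.3f}")
```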
Citations: 12
Automatic text classification and focused crawling
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093329
Sameendra Samarawickrama, L. Jayaratne
A focused crawler is a web crawler that traverses the web to explore only information related to a particular topic of interest. Generic web crawlers, on the other hand, try to search the entire web, which is impossible due to the size and complexity of the WWW. In this paper we survey some of the latest focused web crawling approaches, discussing each along with its experimental results. We categorize them as focused crawling based on content analysis, focused crawling based on link analysis, and focused crawling based on both content and link analysis. We also give an insight into future research and draw overall conclusions.
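The sketch below shows the content-analysis flavour of focused crawling in miniature: pages are scored against topic keywords and only links from relevant pages are followed, via a priority frontier. A tiny in-memory "web" replaces real HTTP fetching and HTML parsing purely to keep the example self-contained; it is not taken from any surveyed system.

```python
# Minimal content-based focused crawler over a made-up in-memory web.
import heapq

WEB = {  # url -> (page text, outgoing links); entirely hypothetical
    "seed": ("intrusion detection and network security overview", ["a", "b"]),
    "a": ("network intrusion detection with machine learning", ["c"]),
    "b": ("cooking recipes and travel photos", ["d"]),
    "c": ("anomaly detection for security monitoring", []),
    "d": ("gardening tips", []),
}

def relevance(text, topic_terms):
    words = text.split()
    return sum(words.count(t) for t in topic_terms) / max(len(words), 1)

def focused_crawl(seed, topic_terms, threshold=0.05, limit=10):
    frontier = [(-1.0, seed)]                 # max-heap via negated scores
    visited, collected = set(), []
    while frontier and len(collected) < limit:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, links = WEB[url]
        score = relevance(text, topic_terms)
        if score >= threshold:
            collected.append(url)
            for link in links:                # only expand links from relevant pages
                heapq.heappush(frontier, (-score, link))
    return collected

print(focused_crawl("seed", {"intrusion", "detection", "security", "network"}))
```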
Citations: 8