
Latest publications: 2011 3rd Conference on Data Mining and Optimization (DMO)

Harmony search algorithm for flexible manufacturing system (FMS) machine loading problem
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976500
U. K. Yusof, R. Budiarto, S. Deris
Product competitiveness, shorter product life cycles and greater product variety pose major challenges to the manufacturing industries. This situation creates a need to improve the effectiveness and efficiency of capacity planning and resource optimization while still maintaining flexibility. Machine loading, one of the important components of capacity planning, is known for its complexity: it encompasses various types of flexibility pertaining to part selection and to machine and operation assignment, along with constraints. The main objective of a flexible manufacturing system (FMS) is to balance the productivity of the production floor while maintaining its flexibility. In the literature, optimization-based methods tend to become impractical as problem size increases, whereas heuristic-based methods are more practical and robust, although they may depend on the constraints of individual problems. We adopt a Harmony Search (HS) algorithm to solve this problem, mapping feasible solution vectors to the problem domain. The objectives are to minimize system unbalance and increase throughput while satisfying technological constraints such as machine time availability and tool slots. The performance of the proposed algorithm is tested on 10 sample problems available in the FMS literature and compared with existing solution methods.
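The abstract does not give the authors' FMS formulation or parameter settings, so the sketch below is only a minimal generic Harmony Search loop on a toy continuous objective. The harmony memory size, HMCR, PAR, bounds and the sum-of-squares objective are illustrative assumptions; in the paper's setting, the objective would encode system unbalance and throughput, and the solution vector would encode part selection and operation-to-machine assignments.

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Minimise `objective` over `dim` continuous variables with a basic HS loop.

    hms  : harmony memory size
    hmcr : harmony memory considering rate
    par  : pitch adjusting rate
    """
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]

    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # reuse a value from harmony memory
                value = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment around that value
                    value += random.uniform(-0.05, 0.05) * (hi - lo)
            else:                                 # random re-initialisation
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:             # replace the worst harmony in memory
            memory[worst], scores[worst] = new, new_score

    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

if __name__ == "__main__":
    # Toy stand-in for "system unbalance": a simple sum of squares.
    solution, cost = harmony_search(lambda x: sum(v * v for v in x),
                                    dim=5, bounds=(-10.0, 10.0))
    print(cost, solution)
```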
Citations: 10
Applying Semantic Suffix Net to suffix tree clustering
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976519
Jongkol Janruang, S. Guha
In this paper we consider the problem of clustering snippets returned from search engines. We propose a technique that incorporates semantic similarity into the clustering process. Our technique improves on the well-known STC method, a highly efficient heuristic for clustering web search results. A weakness of STC, however, is that it cannot cluster semantically similar documents. To solve this problem, we propose a new data structure to represent the suffixes of a single string, called a Semantic Suffix Net (SSN). A generalized semantic suffix net is created to represent the suffixes of a set of strings by using a new operator that partially combines nets. A key feature of this new operator is that it finds a joint point using semantic similarity and string matching; the combination of net pairs then begins at that joint point. This reduces the number of nodes and branches in a generalized semantic suffix net. The operator then uses the line of suffix links as a boundary to separate the net. The generalized semantic suffix net is incorporated into the STC algorithm so that it can cluster semantically similar snippets. Experimental results show that the proposed algorithm improves upon conventional STC.
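The SSN construction is not specified here in enough detail to reproduce, so the sketch below only illustrates the suffix-phrase grouping that STC-style clustering (and hence the SSN extension) builds on: every word-level suffix of each snippet becomes a candidate phrase, and phrases shared by enough snippets form base clusters. The helper name, the minimum document count and the toy snippets are assumptions.

```python
from collections import defaultdict

def base_clusters(snippets, min_docs=2):
    """Group snippets by shared word-level suffixes (phrases), STC-style.

    Returns {phrase: set of snippet indices} for phrases that occur in
    at least `min_docs` snippets.
    """
    phrase_to_docs = defaultdict(set)
    for idx, text in enumerate(snippets):
        words = text.lower().split()
        for start in range(len(words)):          # every word-level suffix of the snippet
            phrase = " ".join(words[start:])
            phrase_to_docs[phrase].add(idx)
    return {p: d for p, d in phrase_to_docs.items() if len(d) >= min_docs}

if __name__ == "__main__":
    docs = ["harmony search algorithm", "tabu search algorithm", "semantic suffix net"]
    for phrase, members in base_clusters(docs).items():
        print(phrase, sorted(members))
```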
Citations: 11
MPCA-ARDA for solving course timetabling problems
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976523
A. Abuhamdah, M. Ayob
This work presents a hybridization of the Multi-Neighborhood Particle Collision Algorithm (MPCA) with the Adaptive Randomized Descent Algorithm (ARDA) acceptance criterion to solve university course timetabling problems. The aim is to produce an effective algorithm for assigning a set of courses, lecturers and students to a specific number of rooms and timeslots, subject to a set of constraints. The structure of MPCA-ARDA resembles that of the Hybrid Particle Collision Algorithm (HPCA). The basic difference is that MPCA-ARDA hybridizes MPCA with the ARDA acceptance criterion, whilst HPCA hybridizes MPCA with the great deluge acceptance criterion. In other words, MPCA-ARDA employs an adaptive acceptance criterion, whilst HPCA employs a deterministic one. MPCA-ARDA therefore has a better capability of escaping from local optima than HPCA and MPCA. MPCA-ARDA attempts to enhance the trial solution by exploring different neighborhood structures to overcome the limitations of HPCA and MPCA. Results on the Socha benchmark datasets show that MPCA-ARDA is able to produce good-quality solutions within a reasonable time and outperforms some other approaches in some instances.
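Neither the MPCA move operators nor the ARDA acceptance rule are detailed in the abstract, so the sketch below only shows the general shape of a multi-neighbourhood local search in which a worsening candidate is occasionally accepted; the fixed acceptance probability is a stand-in for ARDA's adaptive criterion, and the toy permutation problem and neighbourhood moves are invented for illustration.

```python
import random

def multi_neighbourhood_search(cost, initial, neighbourhoods, iters=5000, accept_p=0.05):
    """Skeleton of a multi-neighbourhood descent with randomized acceptance.

    `neighbourhoods` is a list of functions, each mapping a solution to a
    random neighbour; a worse neighbour is accepted with probability
    `accept_p` (a stand-in for an adaptive acceptance criterion).
    """
    current = best = initial
    current_cost = best_cost = cost(initial)
    for _ in range(iters):
        move = random.choice(neighbourhoods)      # pick a neighbourhood structure
        candidate = move(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= current_cost or random.random() < accept_p:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best, best_cost

if __name__ == "__main__":
    # Toy problem: order numbers to minimise the count of adjacent descents.
    def cost(perm):
        return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

    def swap(perm):                               # swap two positions
        p = list(perm)
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
        return p

    def reinsert(perm):                           # remove one element and reinsert it
        p = list(perm)
        v = p.pop(random.randrange(len(p)))
        p.insert(random.randrange(len(p) + 1), v)
        return p

    start = random.sample(range(20), 20)
    print(multi_neighbourhood_search(cost, start, [swap, reinsert]))
```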
Citations: 4
Soft skills recommendation systems for IT jobs: A Bayesian network approach
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976509
Azuraini Abu Bakar, Choo-Yee Ting
Today, soft skills are crucial factors in the success of a project. For certain jobs, soft skills are often considered more crucial than hard or technical skills for performing the job effectively. However, identifying the appropriate soft skills for each job is not a trivial task. In this light, this study proposes a solution to assist employers in preparing advertisements by identifying suitable soft skills together with their relevance to a particular job title. A Bayesian network is employed because it is well suited to reasoning and decision making under uncertainty. The proposed Bayesian network is trained on a dataset collected by extracting information from advertisements and through interview sessions with several identified experts.
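The paper's network structure and conditional probability tables are not given, so the toy two-node network below (job category as parent, skill requirement as child) with made-up probabilities only illustrates how a skill's relevance can be scored by conditioning on, or marginalising over, the job category.

```python
# A two-node toy network: JobCategory -> SkillRequired.
# The prior and the conditional probabilities below are hypothetical,
# not taken from the paper.
P_JOB = {"technical": 0.6, "managerial": 0.4}
P_SKILL_GIVEN_JOB = {
    "technical":  {"teamwork": 0.7, "communication": 0.6, "negotiation": 0.2},
    "managerial": {"teamwork": 0.9, "communication": 0.9, "negotiation": 0.8},
}

def skill_relevance(skill, job_evidence=None):
    """P(skill required), optionally conditioned on an observed job category."""
    if job_evidence is not None:
        return P_SKILL_GIVEN_JOB[job_evidence][skill]
    # Marginalise over the job category when it is not observed.
    return sum(P_JOB[j] * P_SKILL_GIVEN_JOB[j][skill] for j in P_JOB)

if __name__ == "__main__":
    skills = ["teamwork", "communication", "negotiation"]
    ranking = sorted(skills, key=lambda s: skill_relevance(s, "managerial"), reverse=True)
    print(ranking)
```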
Citations: 19
Intelligent Web caching using Adaptive Regression Trees, Splines, Random Forests and Tree Net
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976513
Sarina Sulaiman, Siti Mariyam Hj. Shamsuddin, A. Abraham
Web caching is a technology for improving network traffic on the internet. It is the temporary storage of Web objects (such as HTML documents) for later retrieval. Web caching has three significant advantages: reduced bandwidth consumption, reduced server load, and reduced latency. These benefits make the Web less expensive and better performing. The aim of this research is to introduce advanced machine learning approaches for Web caching that decide whether or not an object should be stored on the cache server, which can be modelled as a classification problem. The challenges include ranking the attributes and achieving significant improvements in classification accuracy. Four methods are employed for classification on Web caching: Classification and Regression Trees (CART), Multivariate Adaptive Regression Splines (MARS), Random Forest (RF) and TreeNet (TN). The experimental results reveal that CART performs extremely well in classifying Web objects from the existing log data and is an excellent candidate for enhancing Web cache performance.
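As an illustration of casting cache admission as a classification problem, the sketch below trains a scikit-learn random forest (standing in for the CART/RF/TreeNet tools used in the paper) on synthetic recency, frequency and size features; the features, labels and thresholds are invented for the example rather than taken from the paper's log data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-object log features: [recency (s), frequency, size (KB)].
n = 2000
X = np.column_stack([
    rng.exponential(600, n),      # time since last access
    rng.poisson(3, n),            # access count
    rng.lognormal(3, 1, n),       # object size
])
# Synthetic label: small, frequently and recently used objects are cache-worthy.
y = ((X[:, 1] >= 3) & (X[:, 0] < 600) & (X[:, 2] < 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)
```

A single DecisionTreeClassifier could be substituted for the forest to mirror the CART setting highlighted in the abstract.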
Citations: 6
Harmony Search algorithm for optimal word size in symbolic time series representation
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976505
Almahdi Mohammed Ahmed, A. Bakar, Abdul Razak Hamdan
Fast and high-quality time series representation is a crucial task in data mining pre-processing. Recent studies have shown that most representation methods focus on improving classification accuracy and compressing data sets rather than maximizing the information retained. We attempt to improve the choice of word size and alphabet size in SAX, a symbolic time series representation method, by searching for the optimal word size. In this paper we propose a new representation algorithm (HSAX) that uses the Harmony Search (HS) algorithm to explore the optimal word size (Ws) and alphabet size (a) for SAX time series. Harmony Search is an optimization algorithm that randomly generates candidate solutions (Ws, a) and selects the two best. The HSAX algorithm is designed to maximize information rather than improve classification accuracy. We apply HSAX to several standard time series data sets and compare it with the meta-heuristic GENEBLA and the original SAX algorithm. The experimental results show that, compared with SAX, HSAX generates larger word sizes and achieves lower error rates; compared with GENEBLA, its error rate is comparable, with the advantage that HSAX generates larger word and alphabet sizes.
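HSAX tunes the two SAX parameters, word size (Ws) and alphabet size (a). To make those parameters concrete, the sketch below implements a minimal SAX transform (z-normalisation, piecewise aggregate approximation, Gaussian breakpoints); the Harmony Search wrapper that searches over (Ws, a) is omitted, and the example series is arbitrary.

```python
import math
import string
from statistics import NormalDist, mean, stdev

def sax(series, word_size, alphabet_size):
    """Minimal SAX transform: z-normalise, PAA to `word_size` segments,
    then map each segment mean to a symbol using Gaussian breakpoints."""
    mu, sigma = mean(series), stdev(series)
    z = [(v - mu) / sigma for v in series]
    # Piecewise Aggregate Approximation: average over equal-width frames.
    n = len(z)
    paa = [mean(z[i * n // word_size:(i + 1) * n // word_size])
           for i in range(word_size)]
    # Breakpoints that split the standard normal into equiprobable regions.
    breaks = [NormalDist().inv_cdf(k / alphabet_size) for k in range(1, alphabet_size)]
    symbols = string.ascii_lowercase
    return "".join(symbols[sum(v > b for b in breaks)] for v in paa)

if __name__ == "__main__":
    ts = [math.sin(i / 8) for i in range(128)]
    # Prints an 8-symbol word over the alphabet {a, b, c, d}.
    print(sax(ts, word_size=8, alphabet_size=4))
```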
Citations: 17
An efficient mining of transactional data using graph-based technique
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976508
W. Alzoubi, K. Omar, A. Bakar
Mining association rules is an essential task for knowledge discovery. Past transaction data can be analyzed to discover customer behaviors so that the quality of business decisions can be improved. Association rule mining focuses on discovering large itemsets, which are groups of items that appear together in an adequate number of transactions. In this paper, we propose a graph-based approach (DGARM) to generate Boolean association rules from a large database of customer transactions. The approach scans the database once to construct an association graph and then traverses the graph to generate all large itemsets. Practical evaluations show that the proposed algorithm outperforms other algorithms that need to make multiple passes over the database.
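DGARM's graph construction and traversal are not detailed in the abstract, so the sketch below only illustrates the single-scan idea: one pass over the transactions accumulates item and pair counts, frequent pairs become the edges of an association graph, and simple Boolean rules are read off those edges. The thresholds and toy transactions are assumptions.

```python
from collections import Counter
from itertools import combinations

def association_graph(transactions, min_support=2, min_conf=0.6):
    """One scan over the transactions builds item and pair counts; edges of
    the graph are frequent pairs, from which simple A -> B rules are derived."""
    item_count, pair_count = Counter(), Counter()
    for t in transactions:                       # single database scan
        items = sorted(set(t))
        item_count.update(items)
        pair_count.update(combinations(items, 2))

    edges = {p: c for p, c in pair_count.items() if c >= min_support}
    rules = []
    for (a, b), c in edges.items():
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]
            if conf >= min_conf:
                rules.append((lhs, rhs, conf))
    return edges, rules

if __name__ == "__main__":
    db = [["milk", "bread"], ["milk", "bread", "eggs"],
          ["bread", "eggs"], ["milk", "eggs"]]
    graph, rules = association_graph(db)
    print(graph)
    for lhs, rhs, conf in rules:
        print(f"{lhs} -> {rhs}  conf={conf:.2f}")
```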
Citations: 4
Using Tabu search with multi-neighborhood structures to solve University Course Timetable UKM case study (faculty of engineering)
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976529
Hassan Al-Tarawneh, M. Ayob
In this work we apply Tabu search with multi-neighborhood structures to solve the university course timetabling problem at the Faculty of Engineering, Universiti Kebangsaan Malaysia. The aim is to introduce neighborhood structures that account for the difference in lecture lengths (some lectures are one hour, while others are two hours); a new neighborhood structure is therefore required to handle this problem. The results demonstrate the effectiveness of the proposed neighborhood structure.
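The UKM instance and the authors' exact neighbourhood structures are not reproduced here; the sketch below is a generic tabu search skeleton with two moves (reassign a lecture's start slot, swap two lectures' starts) on a toy one-room instance mixing one- and two-hour lectures. The tabu tenure, the attributes stored in the tabu list and the clash-count cost are illustrative choices.

```python
import random
from collections import deque

def tabu_search(cost, initial, moves, iters=2000, tenure=20):
    """Generic tabu search: sample one candidate per neighbourhood structure,
    forbid recently used move attributes for `tenure` iterations, and keep the
    best solution seen (aspiration admits a tabu move that beats the best)."""
    current = best = initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = []
        for move in moves:
            cand, attr = move(current)
            c = cost(cand)
            if attr not in tabu or c < best_cost:      # aspiration criterion
                candidates.append((c, cand, attr))
        if not candidates:
            continue
        c, cand, attr = min(candidates, key=lambda x: x[0])
        current = cand
        tabu.append(attr)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

if __name__ == "__main__":
    # Toy instance: lecture lengths in hours, one room with 10 hourly slots.
    lengths = [1, 2, 1, 2, 1, 1]
    slots = 10

    def cost(assign):                                  # count pairwise slot clashes
        occ = [set(range(s, s + l)) for s, l in zip(assign, lengths)]
        return sum(len(occ[i] & occ[j])
                   for i in range(len(occ)) for j in range(i + 1, len(occ)))

    def move_one(assign):                              # reassign one lecture's start
        a = list(assign)
        i = random.randrange(len(a))
        a[i] = random.randrange(slots - lengths[i] + 1)
        return a, ("reassign", i, a[i])

    def swap_two(assign):                              # swap two lectures' starts
        a = list(assign)
        i, j = random.sample(range(len(a)), 2)
        a[i], a[j] = a[j], a[i]
        a[i] = min(a[i], slots - lengths[i])           # keep starts feasible
        a[j] = min(a[j], slots - lengths[j])
        return a, ("swap", min(i, j), max(i, j))

    start = [0] * len(lengths)
    print(tabu_search(cost, start, [move_one, swap_two]))
```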
Citations: 10
Hybrid integrated two-stage multi-neighbourhood tabu search-EMCQ technique for examination timetabling problem
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976533
A. Malik, Abdulqader Othman, M. Ayob, A. Hamdan
In this research, we introduce a hybrid integrated two-stage multi-neighbourhood tabu search (ITMTS) combined with the EMCQ method for solving the examination timetabling problem. The two search mechanisms of this method, vertical neighbourhood search and horizontal neighbourhood search, work alternately in different stages with several neighbourhood options. The procedure is based on an enhanced ITMTS with a stratified random sampling technique (to select the exams to be evaluated), where the EMCQ technique is used in the horizontal neighbourhood stage as a diversification mechanism. We test and evaluate this technique on the uncapacitated Carter benchmark datasets using the standard Carter proximity cost. The results are comparable with other approaches reported in the literature and show that this technique has the potential to be further enhanced.
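The abstract mentions stratified random sampling to select the exams evaluated at each stage. A minimal sketch of that sampling idea is given below; the strata (exams grouped by a hypothetical conflict count), the sampling fraction and the toy data are assumptions rather than the authors' settings.

```python
import random
from collections import defaultdict

def stratified_sample(items, stratum_of, fraction=0.2):
    """Pick roughly `fraction` of the items from every stratum, so that each
    stratum (e.g. exams grouped by conflict level) stays represented."""
    strata = defaultdict(list)
    for item in items:
        strata[stratum_of(item)].append(item)
    sample = []
    for members in strata.values():
        k = max(1, round(fraction * len(members)))
        sample.extend(random.sample(members, k))
    return sample

if __name__ == "__main__":
    # Hypothetical exams tagged with a student-conflict count.
    exams = [(f"exam{i}", random.randint(0, 300)) for i in range(50)]
    level = lambda e: "high" if e[1] > 200 else "mid" if e[1] > 100 else "low"
    print(stratified_sample(exams, level))
```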
Citations: 4
Optimisation model of selective cutting for Timber Harvest Planning in Peninsular Malaysia
Pub Date : 2011-06-28 DOI: 10.1109/DMO.2011.5976536
Munaisyah Abdullah, S. Abdullah, A. Hamdan, R. Ismail
A Timber Harvest Planning (THP) model is used to determine which forest areas are to be harvested in different time periods, with the objective of maximizing profit subject to harvesting regulations. Various THP models based on optimisation approaches have been developed in Western countries to generate optimal or feasible harvest plans, but similar studies have received less attention in tropical countries. This study therefore proposes an optimisation model of THP that reflects selective cutting in Peninsular Malaysia. The model was tested on seven blocks comprising a total of 636 trees of different sizes and species. We found that the optimisation approach generates a selective timber harvest plan with higher volume and less damage.
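The paper's tree-level selective-cutting model is not reproduced here; as a toy stand-in, the sketch below chooses which blocks to harvest so as to maximise profit under a single volume cap, by exhaustive enumeration. The block data, the cap and the objective are hypothetical.

```python
from itertools import product

# Hypothetical per-block data: harvestable volume (m^3) and profit; the real
# THP model works at tree level with selective-cutting rules.
blocks = [
    {"name": "B1", "volume": 120, "profit": 30},
    {"name": "B2", "volume": 200, "profit": 55},
    {"name": "B3", "volume": 90,  "profit": 20},
    {"name": "B4", "volume": 150, "profit": 42},
]
VOLUME_CAP = 350   # regulation stand-in: maximum volume per planning period

def best_plan():
    """Exhaustively pick the subset of blocks maximising profit under the cap."""
    best, best_profit = None, -1
    for choice in product([0, 1], repeat=len(blocks)):
        vol = sum(b["volume"] for b, x in zip(blocks, choice) if x)
        profit = sum(b["profit"] for b, x in zip(blocks, choice) if x)
        if vol <= VOLUME_CAP and profit > best_profit:
            best, best_profit = choice, profit
    return [b["name"] for b, x in zip(blocks, best) if x], best_profit

if __name__ == "__main__":
    print(best_plan())
```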
Citations: 0