Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976500
U. K. Yusof, R. Budiarto, S. Deris
"Harmony search algorithm for flexible manufacturing system (FMS) machine loading problem"
2011 3rd Conference on Data Mining and Optimization (DMO)
Product competitiveness, shorter product life cycles and increased product variety pose growing challenges to the manufacturing industries. This situation creates a need to improve the effectiveness and efficiency of capacity planning and resource optimization while still maintaining flexibility. Machine loading, one of the important components of capacity planning, is known for its complexity, which encompasses various types of flexibility pertaining to part selection and machine and operation assignment, along with their constraints. The main objective of a flexible manufacturing system (FMS) is to balance the productivity of the production floor while maintaining its flexibility. In the literature, optimization-based methods tend to become impractical as problem size increases, while heuristic-based methods are more robustly practical, although they may depend on the constraints of individual problems. We adopt the Harmony Search (HS) algorithm to solve this problem, aiming to map feasible solution vectors to the problem domain. The objectives are to minimize system unbalance and increase throughput while satisfying technological constraints such as machine time availability and tool slots. The performance of the proposed algorithm is tested on 10 sample problems available in the FMS literature and compared with existing solution methods.

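The abstract describes Harmony Search improvising new solution vectors from a harmony memory and replacing the worst member when an improvement is found. A minimal sketch of that loop on a toy two-machine loading instance follows; the machine capacities, job times and all parameter values are illustrative, not taken from the paper.

```python
import random

def harmony_search(objective, n_vars, values, hms=10, hmcr=0.9, par=0.3,
                   iters=500, seed=1):
    """Minimal Harmony Search over discrete decision vectors (illustrative)."""
    rng = random.Random(seed)
    memory = [[rng.choice(values) for _ in range(n_vars)] for _ in range(hms)]
    memory.sort(key=objective)
    for _ in range(iters):
        new = []
        for i in range(n_vars):
            if rng.random() < hmcr:            # draw this component from memory
                v = rng.choice(memory)[i]
                if rng.random() < par:         # pitch adjustment: perturb it
                    v = rng.choice(values)
            else:                              # random improvisation
                v = rng.choice(values)
            new.append(v)
        if objective(new) < objective(memory[-1]):   # replace worst harmony
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

# Toy "system unbalance": assign 6 jobs (times below) to 2 machines with
# 10 time units each; unbalance is the total leftover-capacity deviation.
times = [4, 3, 3, 2, 5, 3]
def unbalance(assign):
    load = [0, 0]
    for t, m in zip(times, assign):
        load[m] += t
    return abs(10 - load[0]) + abs(10 - load[1])

best = harmony_search(unbalance, n_vars=6, values=[0, 1])
```

The real loading problem adds part selection, tool-slot and machine-time constraints on top of this skeleton.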
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976519
Jongkol Janruang, S. Guha
"Applying Semantic Suffix Net to suffix tree clustering"
In this paper we consider the problem of clustering snippets returned from search engines. We propose a technique that introduces semantic similarity into the clustering process. Our technique improves on the well-known STC method, a highly efficient heuristic for clustering web search results. A weakness of STC, however, is that it cannot cluster semantically similar documents. To solve this problem, we propose a new data structure to represent the suffixes of a single string, called a Semantic Suffix Net (SSN). A generalized semantic suffix net is created to represent the suffixes of a set of strings using a new operator that partially combines nets. A key feature of this new operator is that it finds a joint point using semantic similarity and string matching; the combination of net pairs then begins at that joint point. This reduces the number of nodes and branches of a generalized semantic suffix net. The operator then uses the line of suffix links as a boundary to separate the net. The generalized semantic suffix net is then incorporated into the STC algorithm so that it can cluster semantically similar snippets. Experimental results show that the proposed algorithm improves upon conventional STC.

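STC's base clusters are phrases shared across snippets; the SSN idea adds semantic matching so that, for example, "car" and "automobile" can land in the same cluster. A rough word-level sketch of that effect follows; the synonym table stands in for the paper's semantic similarity measure, and this is not the authors' net-combination operator.

```python
from collections import defaultdict

# Tiny synonym table standing in for a real semantic similarity measure
SYNONYMS = {"car": "auto", "automobile": "auto"}

def normalize(word):
    return SYNONYMS.get(word, word)

def base_clusters(snippets, min_docs=2):
    """Map each shared (normalized) word phrase to the snippets containing it,
    in the spirit of STC base clusters; suffixes are word-level here."""
    phrase_docs = defaultdict(set)
    for doc_id, text in enumerate(snippets):
        words = [normalize(w) for w in text.lower().split()]
        for i in range(len(words)):                   # every suffix
            for j in range(i + 1, len(words) + 1):    # every prefix of it
                phrase_docs[tuple(words[i:j])].add(doc_id)
    return {p: d for p, d in phrase_docs.items() if len(d) >= min_docs}

clusters = base_clusters([
    "cheap car insurance",
    "cheap automobile insurance",
    "holiday in spain",
])
```

Plain STC would keep the first two snippets apart because "car" and "automobile" never match literally; with normalization they share the full phrase.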
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976523
A. Abuhamdah, M. Ayob
"MPCA-ARDA for solving course timetabling problems"
This work presents a hybridization of the Multi-Neighborhood Particle Collision Algorithm (MPCA) with the Adaptive Randomized Descent Algorithm (ARDA) acceptance criterion to solve university course timetabling problems. The aim is to produce an effective algorithm for assigning a set of courses, lecturers and students to a specific number of rooms and timeslots, subject to a set of constraints. The structure of MPCA-ARDA resembles that of the Hybrid Particle Collision Algorithm (HPCA). The basic difference is that MPCA-ARDA hybridizes MPCA with the ARDA acceptance criterion, whilst HPCA hybridizes MPCA with the great deluge acceptance criterion. In other words, MPCA-ARDA employs an adaptive acceptance criterion, whilst HPCA employs a deterministic one. MPCA-ARDA therefore has a better capability of escaping from local optima than HPCA and MPCA. MPCA-ARDA attempts to enhance the trial solution by exploring different neighborhood structures, overcoming a limitation of HPCA and MPCA. Results on the Socha benchmark datasets show that MPCA-ARDA is able to produce good-quality solutions within a reasonable time and outperforms some other approaches on some instances.

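The contrast the abstract draws is between a deterministic acceptance rule (great deluge: accept anything at or below a falling water level) and an adaptive randomized one. The sketch below illustrates that contrast only; the ARDA-like rule is a loose paraphrase of "adaptive acceptance", not the paper's actual formula.

```python
import random

def great_deluge_accept(candidate_cost, level):
    """Deterministic great-deluge style acceptance: accept iff the candidate
    is no worse than the current water level (the level is lowered by the
    caller after each step)."""
    return candidate_cost <= level

def adaptive_randomized_accept(candidate_cost, current_cost, history,
                               rng=random.Random(0)):
    """Illustrative ARDA-like rule: always accept improvements; accept a
    worse candidate with a probability that shrinks as the recently
    accepted costs improve."""
    if candidate_cost <= current_cost:
        history.append(candidate_cost)
        return True
    avg = sum(history) / len(history)          # adapts to recent acceptances
    p = min(1.0, avg / candidate_cost)         # worse candidate -> smaller p
    accepted = rng.random() < p
    if accepted:
        history.append(candidate_cost)
    return accepted
```

The randomized rule is what gives MPCA-ARDA its extra chance of escaping local optima: a worse move is never categorically refused, only made less likely.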
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976509
Azuraini Abu Bakar, Choo-Yee Ting
"Soft skills recommendation systems for IT jobs: A Bayesian network approach"
Today, soft skills are crucial factors in the success of a project. For a certain set of jobs, soft skills are often considered more crucial than hard (technical) skills for performing the job effectively. However, identifying the appropriate soft skills for each job is not a trivial task. In this light, this study proposes a solution to assist employers in preparing job advertisements by identifying suitable soft skills together with their relevance to a particular job title. A Bayesian network is employed to solve this problem because it is well suited to reasoning and decision making under uncertainty. The proposed Bayesian network is trained on a dataset collected by extracting information from advertisements and through interview sessions with a few identified experts.

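With a Job -> Skill network structure, such a model can both score a skill's relevance to a job title and, inversely, infer the likely job from observed skills. A tiny hand-computed example of the inverse query follows; the job titles, skills and all probabilities are invented for illustration.

```python
# Hypothetical CPTs: P(skill required | job title). All numbers are made up.
P_JOB = {"developer": 0.5, "analyst": 0.5}
P_SKILL_GIVEN_JOB = {
    ("teamwork", "developer"): 0.7, ("teamwork", "analyst"): 0.8,
    ("communication", "developer"): 0.6, ("communication", "analyst"): 0.9,
}

def posterior_job(observed_skills):
    """Posterior over job titles given observed required skills, assuming a
    naive Job -> Skill structure (skills independent given the job)."""
    scores = {}
    for job, prior in P_JOB.items():
        s = prior
        for skill in observed_skills:
            s *= P_SKILL_GIVEN_JOB[(skill, job)]
        scores[job] = s
    z = sum(scores.values())                   # normalize by the evidence
    return {job: s / z for job, s in scores.items()}

post = posterior_job(["teamwork", "communication"])
```

The forward direction, ranking skills by P(skill | job), is a direct CPT lookup in this structure.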
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976513
Sarina Sulaiman, Siti Mariyam Hj. Shamsuddin, A. Abraham
"Intelligent Web caching using Adaptive Regression Trees, Splines, Random Forests and Tree Net"
Web caching is a technology for improving network traffic on the Internet. It is the temporary storage of Web objects (such as HTML documents) for later retrieval. Web caching has three significant advantages: reduced bandwidth consumption, reduced server load, and reduced latency. Together these make the Web less expensive and better performing. The aim of this research is to introduce advanced machine learning approaches that decide whether or not an object should be stored on the cache server, which can be modelled as a classification problem. The challenges include ranking the attributes and achieving significant improvements in classification accuracy. Four methods are employed: Classification and Regression Trees (CART), Multivariate Adaptive Regression Splines (MARS), Random Forest (RF) and TreeNet (TN). The experimental results reveal that CART performs extremely well in classifying Web objects from existing log data and is an excellent candidate for enhancing Web cache performance.

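CART builds its trees from exactly this kind of decision: for each log-derived attribute, pick the threshold that minimizes weighted Gini impurity. A single-split sketch on a made-up request-frequency attribute follows; the feature values and cache/no-cache labels are illustrative.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(xs, ys):
    """One CART-style split on a single numeric feature: choose the
    threshold with the lowest weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# freq = how often the object was requested; label 1 = worth caching
freq   = [1, 2, 2, 8, 9, 12]
cached = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(freq, cached)
```

A full CART model recurses on each side of the split; RF and TN then aggregate many such trees.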
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976505
Almahdi Mohammed Ahmed, A. Bakar, Abdul Razak Hamdan
"Harmony Search algorithm for optimal word size in symbolic time series representation"
Fast, high-quality time series representation is a crucial task in data mining pre-processing. Recent studies have shown that most representation methods aim to improve classification accuracy and compress data sets rather than maximize data information. We attempt to improve the word size and alphabet size of SAX (a symbolic time series representation method) by searching for the optimal word size. In this paper we propose a new representation algorithm (HSAX) that uses the Harmony Search (HS) algorithm to explore the optimal word size (Ws) and alphabet size (a) for SAX time series. Harmony Search is an optimization algorithm that randomly generates candidate solutions (Ws, a) and keeps the best ones. The HSAX algorithm is developed to maximize information rather than to improve classification accuracy. We applied HSAX to several standard time series data sets and compared it with the meta-heuristic GENEBLA and the original SAX algorithm. The experimental results show that, compared to SAX, HSAX generates larger word sizes and achieves lower error rates; compared to GENEBLA, its error rates are comparable, with the advantage that HSAX generates larger word and alphabet sizes.

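For context, plain SAX, whose word size Ws and alphabet size a are the parameters HSAX tunes, works by z-normalizing the series, averaging it into Ws segments (PAA), and mapping each segment mean to a letter via standard-normal breakpoints. A minimal sketch:

```python
import statistics

# Standard-normal cut points for equiprobable regions (alphabet sizes 3 and 4)
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax(series, word_size, alphabet_size):
    """Plain SAX: z-normalize, reduce to `word_size` PAA segments, then map
    each segment mean to a letter via Gaussian breakpoints."""
    mu, sd = statistics.fmean(series), statistics.pstdev(series)
    z = [(x - mu) / sd for x in series] if sd else [0.0] * len(series)
    n, w = len(z), word_size
    paa = [statistics.fmean(z[i * n // w:(i + 1) * n // w]) for i in range(w)]
    cuts = BREAKPOINTS[alphabet_size]
    letters = "abcdefgh"
    return "".join(letters[sum(m > c for c in cuts)] for m in paa)

word = sax([1, 1, 2, 2, 8, 8, 9, 9], word_size=2, alphabet_size=3)
```

HSAX's contribution is choosing (Ws, a) for this transform by Harmony Search instead of fixing them by hand.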
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976508
W. Alzoubi, K. Omar, A. Bakar
"An efficient mining of transactional data using graph-based technique"
Mining association rules is an essential task in knowledge discovery. Past transaction data can be analyzed to discover customer behaviors so that the quality of business decisions can be improved. The association rule mining approach focuses on discovering large itemsets: groups of items that appear together in an adequate number of transactions. In this paper, we propose a graph-based approach (DGARM) to generate Boolean association rules from a large database of customer transactions. The approach scans the database once to construct an association graph and then traverses the graph to generate all large itemsets. Practical evaluations show that the proposed algorithm outperforms other algorithms that need to make multiple passes over the database.

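The one-scan idea can be illustrated by counting items and item pairs in a single pass and keeping the pair edges that meet minimum support; larger itemsets are then found by traversing this graph. This conveys only the flavor of a graph-based approach, not the paper's actual DGARM algorithm.

```python
from collections import defaultdict
from itertools import combinations

def association_graph(transactions, min_support=2):
    """Single pass over the transactions: count items and item pairs; pairs
    meeting min_support become edges of the association graph."""
    item_count = defaultdict(int)
    edge_count = defaultdict(int)
    for t in transactions:
        items = sorted(set(t))
        for i in items:
            item_count[i] += 1
        for a, b in combinations(items, 2):
            edge_count[(a, b)] += 1
    edges = {e for e, c in edge_count.items() if c >= min_support}
    return dict(item_count), edges

counts, edges = association_graph([
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"eggs", "beer"},
])
```

Itemsets of size three and above correspond to cliques reachable in the edge set, which is why no second database scan is needed.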
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976529
Hassan Al-Tarawneh, M. Ayob
"Using Tabu search with multi-neighborhood structures to solve University Course Timetable UKM case study (faculty of engineering)"
In this work we apply Tabu search with multi-neighborhood structures to solve the university course timetabling problem at the Faculty of Engineering, Universiti Kebangsaan Malaysia. The aim is to introduce neighborhood structures that account for the difference in lecture lengths (some lectures are one hour, while others are two hours); a new neighborhood structure is required to handle this. The results demonstrate the effectiveness of the proposed neighborhood structure.

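The length-aware neighborhood the abstract motivates can be sketched as a move generator: a one-hour lecture may move to any free timeslot, while a two-hour lecture needs two consecutive free slots. The flat slot model below is a simplification invented for illustration, not the paper's formulation.

```python
def neighbourhood_moves(timetable, lengths, n_slots):
    """Length-aware relocation moves: a lecture of span k may move to any
    start slot whose k consecutive slots are free (illustrative)."""
    occupied = set()
    for lec, start in timetable.items():
        occupied.update(range(start, start + lengths[lec]))
    moves = []
    for lec, start in timetable.items():
        span = lengths[lec]
        freed = set(range(start, start + span))     # its own slots free up
        for s in range(n_slots - span + 1):
            target = set(range(s, s + span))
            if s != start and target.isdisjoint(occupied - freed):
                moves.append((lec, s))
    return moves

# Lecture A is 2h starting at slot 0 (occupying 0-1), B is 1h at slot 3;
# 5 slots in total.
moves = neighbourhood_moves({"A": 0, "B": 3}, {"A": 2, "B": 1}, n_slots=5)
```

Tabu search would then evaluate these moves, forbid recently reversed ones, and pick the best admissible candidate.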
Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976533
A. Malik, Abdulqader Othman, M. Ayob, A. Hamdan
"Hybrid integrated two-stage multi-neighbourhood tabu search-EMCQ technique for examination timetabling problem"
In this research, we introduce a hybrid integrated two-stage multi-neighbourhood tabu search (ITMTS) combined with the EMCQ method for solving an examination timetabling problem. Two search mechanisms, vertical neighbourhood search and horizontal neighbourhood search, work alternately in different stages with several neighbourhood options. The procedure is based on ITMTS enhanced with a stratified random sampling technique (to select the exams to be evaluated), where the EMCQ technique is used in the horizontal neighbourhood stage as a diversification mechanism. We test and evaluate the technique on the uncapacitated Carter benchmark datasets using the standard Carter proximity cost. The results are comparable with other approaches reported in the literature and show that the technique has the potential to be further enhanced.

Pub Date: 2011-06-28 | DOI: 10.1109/DMO.2011.5976536
Munaisyah Abdullah, S. Abdullah, A. Hamdan, R. Ismail
"Optimisation model of selective cutting for Timber Harvest Planning in Peninsular Malaysia"
The Timber Harvest Planning (THP) model is used to determine which forest areas are to be harvested in different time periods, with the objective of maximizing profit subject to harvesting regulations. Various optimisation-based THP models have been developed in Western countries to generate optimal or feasible harvest plans. However, similar studies have received less attention in tropical countries. This study therefore proposes an optimisation model of THP that reflects selective cutting in Peninsular Malaysia. The model was tested on seven blocks comprising a total of 636 trees of different sizes and species. We found that the optimisation approach generates a selective timber harvest plan with higher volume and less damage.
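Selective cutting under regulations can be pictured as a small constrained selection problem: fell only trees above a diameter limit, keep harvested volume under a cap, maximize profit. A toy exhaustive sketch follows; the tree data, the diameter rule and the volume cap are all invented stand-ins for the paper's actual regulations and model.

```python
from itertools import combinations

def best_cut(trees, min_diameter, max_volume):
    """Exhaustive selective-cutting toy: choose which trees to fell to
    maximize profit, subject to a minimum-diameter rule and a per-block
    volume cap (both invented stand-ins for real regulations)."""
    eligible = [t for t in trees if t["diameter"] >= min_diameter]
    best_profit, best_sel = 0.0, ()
    for r in range(len(eligible) + 1):
        for sel in combinations(eligible, r):
            if sum(t["volume"] for t in sel) <= max_volume:
                profit = sum(t["profit"] for t in sel)
                if profit > best_profit:
                    best_profit, best_sel = profit, sel
    return best_profit, [t["id"] for t in best_sel]

trees = [
    {"id": 1, "diameter": 60, "volume": 3.0, "profit": 900.0},
    {"id": 2, "diameter": 45, "volume": 2.0, "profit": 500.0},
    {"id": 3, "diameter": 30, "volume": 1.5, "profit": 300.0},  # below limit
]
profit, cut_ids = best_cut(trees, min_diameter=40, max_volume=4.0)
```

At the paper's scale (636 trees across seven blocks) the enumeration would be replaced by integer programming or a heuristic, but the constraint structure is the same.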