Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976520
Lustiana Pratiwi, Y. Choo, A. Muda
Rough reducts have contributed significantly to feature selection research, and reduct computation has proven to be a reliable reduction technique for identifying the importance of attribute sets in an information system. The central difficulty, however, is that finding a minimal reduct (one with the smallest cardinality of attributes) is an NP-hard problem. This paper proposes an improved PSO/ACO optimization framework that enhances rough reduct performance by reducing computational complexity. The proposed framework consists of a three-stage optimization process: global optimization with PSO, local optimization with ACO, and a vaccination process on the discernibility matrix.
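The discernibility-matrix step can be illustrated with a toy example. The sketch below is not the paper's PSO/ACO method (the decision table and helper names are invented); it builds the matrix for a four-row table and finds a minimal reduct by brute force, which is exactly the exponential search the proposed framework aims to avoid:

```python
from itertools import combinations

# Toy decision table: three condition attributes, last element is the decision.
table = [
    (0, 1, 1, "yes"),
    (1, 1, 0, "yes"),
    (0, 0, 1, "no"),
    (1, 0, 0, "no"),
]
n_attrs = 3

def discernibility_matrix(rows):
    """For each pair of rows with different decisions, record the set of
    condition attributes on which the two rows differ."""
    return [
        {a for a in range(n_attrs) if rows[i][a] != rows[j][a]}
        for i, j in combinations(range(len(rows)), 2)
        if rows[i][-1] != rows[j][-1]
    ]

def is_reduct(attrs, entries):
    # A subset is a reduct if it intersects every matrix entry.
    return all(attrs & e for e in entries)

entries = discernibility_matrix(table)
reduct = min(
    (set(c)
     for r in range(1, n_attrs + 1)
     for c in combinations(range(n_attrs), r)
     if is_reduct(set(c), entries)),
    key=len,
)
print(reduct)  # here the single attribute at index 1 discerns every pair
```

Brute force inspects all 2^n attribute subsets; the paper replaces this with PSO for global search and ACO for local refinement.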
Title: "A framework of rough reducts optimization based on PSO/ACO hybridized algorithms" (in: 2011 3rd Conference on Data Mining and Optimization (DMO))
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976522
M. Hossin, M. Sulaiman, A. Mustapha, N. Mustapha, R. Rahmat
The accuracy metric has been widely used for discriminating among candidate solutions and selecting an optimal one when constructing an optimized classifier. However, accuracy alone can lead the search toward sub-optimal solutions because of its limited ability to discriminate between values. In this study, we propose a hybrid evaluation metric that combines the accuracy metric with the precision and recall metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Using two counter-examples, this paper demonstrates that the OARP metric is more discriminating than the accuracy metric. To verify this advantage, we conduct an empirical verification using statistical discriminative analysis, showing that OARP is statistically more discriminating than accuracy. We also empirically demonstrate that a naive stochastic classification algorithm trained with the OARP metric obtains better predictive results than one trained with the conventional accuracy metric. The experiments show that the OARP metric is a better evaluator and optimizer for constructing an optimized classifier.
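The base metrics that OARP combines can be computed from a confusion matrix. The sketch below is illustrative only: the labels are invented, and the final combined score is a hypothetical refinement of accuracy by precision and recall, not the paper's OARP formula:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)

# Hypothetical hybrid score (NOT the paper's OARP definition): accuracy nudged
# by precision and recall so two equal-accuracy classifiers can still be ranked.
hybrid = accuracy + 0.01 * (precision + recall) / 2
```

The point of such a hybrid is exactly the one the abstract makes: accuracy collapses many distinct confusion matrices to one value, while the extra terms break those ties.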
Title: "A hybrid evaluation metric for optimizing classifier"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976496
L. Abdullah, I. Taib
Fuzzy time series models have been employed by many researchers in forecasting activities such as university enrolment, temperature, direct tax collection and, most popularly, stock prices. However, exchange rate forecasting, especially with high-order fuzzy time series, has received less attention despite its importance in business transactions. This paper tests the forecasting of the US dollar (USD) against Malaysian Ringgit (MYR) exchange rate using a high-order fuzzy time series and checks its accuracy. A data set of twenty-five USD/MYR exchange rates was tested with the seven-step high-order fuzzy time series procedure. The results show that the higher-order fuzzy time series yields very small errors, so the model provides a good forecasting tool for exchange rates.
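A first-order fuzzy time series forecast, a simpler relative of the seventh-order model used in the paper, can be sketched as follows. The toy rates, the interval partition, and the Chen-style forecasting rule are all assumptions for illustration:

```python
# Toy USD/MYR-like rates; the paper's real series has 25 observations.
rates = [3.10, 3.13, 3.17, 3.10, 3.13, 3.17, 3.16]

# 1. Partition the universe of discourse into k equal intervals (fuzzy sets A0..A3).
lo, hi, k = 3.08, 3.22, 4
width = (hi - lo) / k
mids = [lo + width * (i + 0.5) for i in range(k)]

def fuzzify(x):
    """Map a rate to the index of the interval (fuzzy set) containing it."""
    return min(int((x - lo) / width), k - 1)

# 2. Build first-order fuzzy logical relationship groups Ai -> {Aj, ...}
#    from consecutive observations.
labels = [fuzzify(x) for x in rates]
groups = {}
for cur, nxt in zip(labels, labels[1:]):
    groups.setdefault(cur, set()).add(nxt)

# 3. Forecast the next value as the average midpoint of the sets that have
#    historically followed the current state.
last = labels[-1]
forecast = sum(mids[j] for j in groups[last]) / len(groups[last])
```

A seventh-order model conditions each relationship on the previous seven fuzzy states instead of one, which is what sharpens the forecasts the abstract reports.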
Title: "High order fuzzy time series for exchange rates forecasting"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976515
Ammar Fikrat Namik, Z. Othman
The growing number of computer networks has increased the effort required to secure them against various attack risks. An Intrusion Detection System (IDS) is a popular tool for securing networks. Applying data mining has improved the quality of intrusion detection, whether as anomaly detection or misuse detection, over large-scale network traffic transactions. Association rule mining is a popular technique for producing high-quality misuse detection. A weakness of association rules, however, is that mining often produces thousands of rules, which degrades IDS performance. This paper shows how post-mining can reduce the number of rules while retaining the highest-quality ones to produce quality signatures. The experiment was conducted on two data sets collected from KDD Cup 99. Each data set is partitioned into four subsets by attack type (PROB, U2R, R2L and DOS). Each partition is mined with the Apriori algorithm, and post-mining is then performed using the Chi-Squared (χ2) computation technique. Rule quality is measured by the Chi-Square value, calculated from the support, confidence and lift of each association rule. The experimental results show that post-mining reduced the rules by up to 98% while retaining the quality rules.
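The chi-squared statistic of a single rule A -> B can be computed directly from its support figures via the standard 2x2 contingency identity. The counts below are invented, and the 3.841 cutoff (the 95% critical value at one degree of freedom) is an assumed threshold, since the abstract does not state the paper's exact pruning cutoff:

```python
def chi_squared(n, supp_a, supp_b, supp_ab):
    """Chi-square statistic of the 2x2 contingency table implied by the
    fractional supports of A, B, and A-and-B over n transactions."""
    dep = supp_ab - supp_a * supp_b               # observed minus expected co-occurrence
    denom = supp_a * supp_b * (1 - supp_a) * (1 - supp_b)
    return n * dep ** 2 / denom

# Invented rule statistics: A appears in 40% of 1000 records, B in 30%,
# and they co-occur in 25% (confidence 0.625, lift ~2.08).
chi2 = chi_squared(1000, supp_a=0.40, supp_b=0.30, supp_ab=0.25)
keep = chi2 > 3.841   # assumed cutoff: 95% critical value, 1 degree of freedom
```

Rules whose statistic falls below the cutoff show no significant dependence between antecedent and consequent and can be pruned, which is how the rule set shrinks without losing signature quality.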
Title: "Reducing network intrusion detection association rules using Chi-Squared pruning technique"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976521
S. Mehdi Seyednejad, Hamidreza Musavi, S. Mohaddese Seyednejad, Tooraj Darabi
Data clustering has become an important challenge in the data mining domain. One kind of clustering is projective clustering. Although much research has been done in this area, each of the previous algorithms has defects, which we point out in this paper. We propose a new algorithm based on fuzzy sets: first, the approach detects and eliminates properties that are unimportant for all clusters; then outliers are removed; finally, a weighted fuzzy c-means algorithm is applied using the proposed formula for the fuzzy calculations. Experimental results show that our approach achieves better performance and accuracy than similar algorithms.
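The membership-update step of standard fuzzy c-means, on which weighted variants build, can be sketched as below. The data, centers, and fuzzifier m are invented, and the paper's specific weighting formula is not reproduced here:

```python
def memberships(points, centers, m=2.0):
    """Standard FCM update: u[i][j] is the degree to which point j belongs to
    cluster i; each column of u sums to 1."""
    u = []
    for c in centers:
        row = []
        for x in points:
            d = abs(x - c) or 1e-12          # guard: point sits exactly on a center
            inv = sum((d / (abs(x - k) or 1e-12)) ** (2 / (m - 1)) for k in centers)
            row.append(1.0 / inv)
        u.append(row)
    return u

pts = [0.0, 0.1, 0.9, 1.0]
u = memberships(pts, centers=[0.05, 0.95])
# Points near 0 belong almost fully to the first cluster, and vice versa.
```

A weighted variant multiplies each attribute's contribution to the distance by a per-cluster weight, which is what lets unimportant properties be discounted or dropped.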
Title: "Fuzzy projective clustering in high dimension data using decrement size of data"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976526
A. Hatamlou, S. Abdullah, Z. Othman
In this paper, we present an efficient algorithm for cluster analysis based on gravitational search combined with a heuristic search algorithm. In the proposed algorithm, called GSA-HS, the gravitational search algorithm is used to find a near-optimal solution to the clustering problem; a heuristic search algorithm is then applied to improve this initial solution by searching around it. Four benchmark datasets are used to evaluate the presented algorithm and to compare its performance with two other well-known clustering algorithms, namely K-means and particle swarm optimization. The results show that the proposed algorithm finds high-quality clusters in all the tested datasets.
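The two-stage idea (global candidate, then local refinement) can be sketched on one-dimensional data. The starting centroids below stand in for a GSA output; the neighborhood move, step size, and objective are illustrative assumptions, not the paper's GSA-HS:

```python
import random

def sse(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def local_refine(points, centers, step=0.05, iters=200, seed=0):
    """Greedy neighborhood search: nudge one random center by +/-step and keep
    the move only if it lowers the SSE."""
    rng = random.Random(seed)
    best = list(centers)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.choice((-step, step))
        if sse(points, cand) < sse(points, best):
            best = cand
    return best

pts = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
start = [0.4, 0.7]                  # deliberately poor "global" candidate
refined = local_refine(pts, start)
```

The refinement only ever accepts improving moves, so the clustering objective is monotonically non-increasing, which is the role the heuristic search stage plays after GSA.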
Title: "Gravitational search algorithm with heuristic search for clustering problems"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976497
Said Fouchal, Murat Ahat, I. Lavallée, M. Bui
In this paper we propose a novel clustering algorithm for ultrametric spaces with a computational cost of O(n). The method is based on the ultratriangle inequality property. Using the order induced by an ultrametric on a given space, we demonstrate how to explore data proximities in that space quickly. We present an example of our results and show the efficiency and consistency of our algorithm compared with another approach.
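The ultratriangle (strong triangle) inequality the method relies on states that d(x, z) <= max(d(x, y), d(y, z)) for every triple. A quick check on two toy distance matrices (both invented):

```python
from itertools import permutations

def is_ultrametric(d):
    """Check d(x, z) <= max(d(x, y), d(y, z)) over all ordered triples."""
    return all(
        d[x][z] <= max(d[x][y], d[y][z])
        for x, y, z in permutations(d, 3)
    )

# An ultrametric: every triangle is isosceles with its two largest sides equal.
d_ultra = {
    "a": {"a": 0, "b": 1, "c": 2},
    "b": {"a": 1, "b": 0, "c": 2},
    "c": {"a": 2, "b": 2, "c": 0},
}
# An ordinary metric that violates the strong inequality: 2.5 > max(1, 2).
d_metric = {
    "a": {"a": 0, "b": 1, "c": 2.5},
    "b": {"a": 1, "b": 0, "c": 2},
    "c": {"a": 2.5, "b": 2, "c": 0},
}
```

It is this isosceles structure that induces a total order on proximities and lets an ultrametric space be traversed in linear time.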
Title: "An O(N) clustering method on ultrametric data"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976498
Thanh Son Nguyen, T. Duong
In this paper, we introduce a new time series dimensionality reduction method, MP_C (Middle points and Clipping). The method divides a time series into segments, extracts selected points from each segment, and transforms these points into a sequence of bits. To choose the points, each segment is divided into sub-segments and the middle point of each sub-segment is selected. We prove that MP_C satisfies the lower bounding condition, and we make MP_C indexable by showing that a time series compressed by MP_C can be indexed with the support of the Skyline index. Our experiments show that MP_C is better than PAA in terms of tightness of lower bound and pruning power, and that in similarity search, MP_C with the support of the Skyline index performs faster than PAA based on a traditional R*-tree.
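The two MP_C ingredients named in the abstract, middle points of sub-segments and clipping to bits, can be sketched as follows. The segment sizes, the clipping threshold (here the series mean), and the toy series are assumptions for illustration, not the paper's exact scheme:

```python
def mp_c(series, n_segments=2, n_sub=2):
    """Keep the middle point of each sub-segment, then clip it against the
    series mean into a single bit."""
    seg_len = len(series) // n_segments
    mean = sum(series) / len(series)
    bits = []
    for s in range(n_segments):
        seg = series[s * seg_len:(s + 1) * seg_len]
        sub_len = len(seg) // n_sub
        for t in range(n_sub):
            sub = seg[t * sub_len:(t + 1) * sub_len]
            middle = sub[len(sub) // 2]              # middle point of the sub-segment
            bits.append(1 if middle > mean else 0)   # clipping step
    return bits

bits = mp_c([1, 2, 3, 4, 8, 7, 6, 5])   # 8 values reduced to 4 bits
```

Unlike PAA, which stores one real-valued mean per segment, the clipped representation stores only bits, so more segments fit in the same index budget, which is where the tighter lower bounds come from.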
Title: "Time series similarity search based on Middle points and Clipping"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976530
Ghaith M. Jaradat, M. Ayob
Scatter Search (SS) is an evolutionary population-based metaheuristic that has been successfully applied to hard combinatorial optimization problems. In contrast to the genetic algorithm, it reduces the population to a promising set of solutions, chosen for quality and diversity, to maintain a balance between diversification and intensification of the search. It also avoids random sampling mechanisms such as crossover and mutation when generating new solutions; instead, it recombines through structured combinations of pairs of good-quality, diverse solutions. In this study, we propose an SS approach for solving the course timetabling problem. The approach focuses on two main methods employed within it: the reference set update method and the solution combination method. Both methods provide a deterministic search process while maintaining population diversity, achieved by manipulating a dynamic population size and performing a probabilistic selection procedure to generate a promising reference set of elite solutions. We also incorporate an Iterated Local Search routine into the SS method to exploit good-quality solutions more effectively, escape local optima, and decrease computational time. Experimental results show that our SS approach produces good-quality solutions and outperforms some results reported in the literature (on Socha's instances), including population-based algorithms.
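The reference-set-update and solution-combination loop can be sketched on a toy bit-string problem. Everything below (the problem, set sizes, combination rule, and the hill-climbing stand-in for Iterated Local Search) is invented for illustration, and this quality-only reference set update omits the diversity criterion the paper maintains:

```python
import random
from itertools import combinations

rng = random.Random(1)
N = 12                                   # solution length

def quality(s):                          # toy objective: maximize number of ones
    return sum(s)

def combine(a, b):
    """Structured combination: inherit each position from one of two parents."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def improve(s):
    """First-improvement bit-flip climb, standing in for Iterated Local Search."""
    s = list(s)
    for i in range(N):
        t = s[:]
        t[i] ^= 1
        if quality(t) > quality(s):
            s = t
    return s

# Initial population, reduced to a small elite reference set.
pop = [[rng.randrange(2) for _ in range(N)] for _ in range(20)]
refset = sorted(pop, key=quality, reverse=True)[:5]

for _ in range(10):                      # combine all pairs, improve, re-select
    trials = [improve(combine(a, b)) for a, b in combinations(refset, 2)]
    refset = sorted(refset + trials, key=quality, reverse=True)[:5]

best = refset[0]
```

Note how small the reference set is relative to the population: all recombination effort is spent on a handful of elite solutions, which is the intensification/diversification trade-off the abstract describes.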
Title: "Scatter search for solving the course timetabling problem"
Pub Date: 2011-06-28, DOI: 10.1109/DMO.2011.5976507
Rabiatul 'Adawiah Mat Noor, Zainal Ahmad
This work develops a soft sensor for predicting biopolymer molecular weight using a neural network. Molecular weight cannot be measured online, and it is a difficult parameter to monitor and control directly. Instead, the molecular weight is predicted by an inferential estimation method based on a neural network model. In this work, the temperature of the biopolymerization process is used as the measured variable correlated with biopolymer molecular weight, and a neural network model is developed to estimate molecular weight across various reaction temperatures. The results are convincing: the soft sensor built from the neural network is reliable for forecasting the biopolymer molecular weight.
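An inferential estimator of this kind can be sketched as a small regression network mapping temperature to molecular weight. The data below is synthetic, the single network omits the bootstrap aggregation in the paper's title, and the architecture and training settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
temp = np.linspace(60.0, 100.0, 40)[:, None]                          # reaction temperature
mw = 5000.0 + 80.0 * (temp - 60.0) + rng.normal(0, 50, temp.shape)    # synthetic MW data

# Normalize inputs and targets so plain gradient descent behaves well.
x = (temp - temp.mean()) / temp.std()
y = (mw - mw.mean()) / mw.std()

# One-hidden-layer network trained by full-batch backpropagation on MSE.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)             # hidden layer
    pred = h @ W2 + b2                   # estimated (normalized) molecular weight
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

A bootstrap variant, as in the paper's title, would train several such networks on resampled data and average their predictions, which also yields a confidence estimate for the soft sensor.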
Title: "Neural network based soft sensor for prediction of biopolycaprolactone molecular weight using bootstrap neural network technique"