Solution of multi objective linear fractional programming problem by Taylor series approach
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456582
P. K. De, M. Deb
This article proposes a method for handling multi-objective linear fractional programming (MOLFP) problems in a fuzzy environment. Using a first-order Taylor series approach, viewed as a generalized mean value theorem, the multi-objective linear fractional programming problem is converted into a multi-objective linear programming problem by introducing an imprecise aspiration level for each objective. An additive weighted method is then used to obtain its solution. It is observed that optimality is reached for different weight values of the membership functions of the different objective functions. The method is presented as an algorithm, and a sensitivity analysis of the fuzzy multi-objective linear fractional programming (FMOLFP) problem with respect to the aspiration levels and tolerance limits is also given. The approach is demonstrated with a numerical example.
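As a rough illustration of the conversion the abstract describes, with notation assumed here rather than taken from the paper: each fractional objective is linearized by a first-order Taylor expansion about a chosen point, and the resulting linear objectives are aggregated through weighted membership functions.

```latex
% k-th fractional objective (notation assumed for illustration)
\[ Z_k(x) = \frac{c_k^{T}x + \alpha_k}{d_k^{T}x + \beta_k}, \qquad k = 1,\dots,K \]
% first-order Taylor expansion (generalized mean value theorem) about the point x_k^*
\[ Z_k(x) \approx \hat{Z}_k(x) = Z_k(x_k^{*}) + \nabla Z_k(x_k^{*})^{T}\,(x - x_k^{*}) \]
% additive weighted aggregation of the fuzzy membership functions \mu_k over the feasible set S
\[ \max_{x \in S} \; \sum_{k=1}^{K} w_k\,\mu_k\bigl(\hat{Z}_k(x)\bigr), \qquad \textstyle\sum_k w_k = 1,\; w_k \ge 0 \]
```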
{"title":"Solution of multi objective linear fractional programming problem by Taylor series approach","authors":"P. K. De, M. Deb","doi":"10.1109/MAMI.2015.7456582","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456582","url":null,"abstract":"This article proposes to handle multi objective linear fractional programming (MOLFP) problems in fuzzy environment. As a generalized mean value theorem first order Taylor series approach is used to convert multi objective linear fractional programming to multi objective linear programming problem by introducing imprecise aspiration level to each objective. Then additive weighted method has been used to get its solution. It has been observed that optimality reached for different weight values of the membership function for the different objective functions. The method has been presented by an algorithm and sensitivity analysis for the fuzzy multi objective linear fractional programming (FMOLFP) problem with respect to aspiration level and tolerance limit are also presented. The present approach is demonstrated with one numerical example.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131518783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyperspectral imaging data atmospheric correction challenges and solutions using QUAC and FLAASH algorithms
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456604
Amol D. Vibhute, K. Kale, Rajesh K. Dhumal, S. Mehrotra
Hyperspectral remote sensing has proved to be a valuable tool for obtaining reliable, detailed information for identifying different objects on the earth's surface with high spectral resolution. Atmospheric effects can obscure this valuable information, so they must be removed from hyperspectral data before objects on the earth's surface can be identified reliably. Atmospheric correction is therefore a critical preprocessing step for hyperspectral images. This paper highlights the advantages of hyperspectral data and the preprocessing challenges it poses, along with solutions based on the QUAC and FLAASH algorithms. Hyperspectral data acquired over Aurangabad district were used to test these algorithms. The results indicate that the size of the hyperspectral image can be reduced. The ENVI 5.1 software with the IDL language provides an efficient way to visualize and analyze hyperspectral images. Atmospheric correction with the QUAC and FLAASH algorithms was carried out successfully. The QUAC model gives accurate and reliable results without any ancillary information, requiring only wavelength and radiometric calibration, and runs in less time than FLAASH.
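QUAC and FLAASH are ENVI's own algorithms and are not reproduced here; purely as an illustration of correcting with in-scene statistics alone, the sketch below applies a much simpler dark-object subtraction (array shapes, names, and values are assumptions).

```python
import numpy as np

def dark_object_subtraction(radiance_cube):
    """Toy scene-derived atmospheric compensation (dark-object subtraction).

    radiance_cube : ndarray of shape (rows, cols, bands) with at-sensor radiance.
    Removes the per-band dark signal, treated as atmospheric path radiance.
    This is NOT QUAC or FLAASH, only a simple illustration of a correction
    driven by in-scene statistics alone.
    """
    # per-band dark value estimated from a low percentile of the scene
    dark = np.percentile(radiance_cube, 1, axis=(0, 1), keepdims=True)
    return np.clip(radiance_cube - dark, 0, None)

# hypothetical usage on a small random cube standing in for a hyperspectral scene
cube = np.random.rand(100, 100, 200) * 300.0
print(dark_object_subtraction(cube).shape)
```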
{"title":"Hyperspectral imaging data atmospheric correction challenges and solutions using QUAC and FLAASH algorithms","authors":"Amol D. Vibhute, K. Kale, Rajesh K. Dhumal, S. Mehrotra","doi":"10.1109/MAMI.2015.7456604","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456604","url":null,"abstract":"Recently, Hyperspectral remote sensing technology has been proved to be a valuable tool to get reliable information with details for identifying different objects on the earth surface with high spectral resolution. Due to atmospheric effects the valuable information may be lost from hyperspectral data. Hence it is necessary to remove these effects from hyperspectral data for reliable identification of the objects on the earth surface. The atmospheric correction is a very critical task of hyperspectral images. The present paper highlights the advantages of hyperspectral data, challenges over it as a pre-processing with solutions through QUAC and FLAASH algorithms. The hyperspectral data acquired for Aurangabad district were used to test these algorithms. The result indicates that the size of hyperspectral image can be reduced. The ENVI 5.1 software with IDL language is an efficient way to visualize and analysis the hyperspectral images. Implementation of atmospheric correction algorithms like QUAC and FLAASH is successfully carried out. The QUAC model gives accurate and reliable results without any ancillary information but requires only wavelength and radiometric calibration with less time than FLAASH.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132730815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test bench automation to overcome verification challenge of SOC Interconnect
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456600
S. Mohanty, Suchismita Sengupta, S. K. Mohapatra
With the increasing number of Intellectual Property (IP) cores in today's systems on chip (SOC), verification of the interconnect bus matrix becomes a critical and time-consuming task. Developing a verification platform for a complex SOC interconnect takes several weeks, given that it must support different protocols and a large number of master and slave ports with multiple transaction types. To reduce the overall time-to-market for SOC delivery, it is crucial to verify the interconnect within a very narrow time frame. In this article, we present a test bench (TB) automation solution for verifying the completeness and correctness of data as it passes through the interconnect fabric. Automation reduces verification effort by automatically creating the authenticated infrastructure, stimulus vectors, and coverage model needed to support all transactions exchanged between masters and slaves within an SOC. This approach enables a protocol-independent scoreboard to check data integrity and verify the different data-path transactions to and from each port of the bus fabric. We applied the proposed solution to the testing of various bus matrices, which led to a 40% saving in the verification cycle.
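The paper's test bench itself would live in SystemVerilog/UVM; the following is only a hedged Python sketch of what a protocol-independent scoreboard does, pairing transactions seen at a master port with those seen at the targeted slave port (all class and method names are assumptions, not the authors' code).

```python
from collections import defaultdict, deque

class Scoreboard:
    """Protocol-independent scoreboard sketch: pair each transaction observed
    at a master port with the one observed at the targeted slave port and
    compare payloads (illustrative only)."""

    def __init__(self):
        self.expected = defaultdict(deque)   # keyed by (slave_port, transaction id)
        self.matched = 0
        self.mismatched = 0

    def from_master(self, slave_port, txn_id, payload):
        # transaction launched by a master toward a slave port
        self.expected[(slave_port, txn_id)].append(payload)

    def from_slave(self, slave_port, txn_id, payload):
        # same transaction observed on the slave side of the fabric
        queue = self.expected[(slave_port, txn_id)]
        if queue and queue.popleft() == payload:
            self.matched += 1
        else:
            self.mismatched += 1

    def report(self):
        dropped = sum(len(q) for q in self.expected.values())
        return {"matched": self.matched, "mismatched": self.mismatched, "dropped": dropped}

# hypothetical usage
sb = Scoreboard()
sb.from_master(slave_port=2, txn_id=7, payload=0xDEADBEEF)
sb.from_slave(slave_port=2, txn_id=7, payload=0xDEADBEEF)
print(sb.report())
```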
{"title":"Test bench automation to overcome verification challenge of SOC Interconnect","authors":"S. Mohanty, Suchismita Sengupta, S. K. Mohapatra","doi":"10.1109/MAMI.2015.7456600","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456600","url":null,"abstract":"With the increasing number of Intellectual Property (IP) cores in the todays system on chip (SOC), verification of Interconnect Bus matrix becomes a critical and time consuming task. Development of verification platform for complex SOC Interconnect takes several weeks considering it supports different kinds of protocol, large number of master and slave ports with multiple transaction types. To reduce overall time-to-market for SOC delivery, it is crucial to verify Interconnect in a very narrow time frame. In this research article, we present Test Bench(TB) automation solution for verifying completeness and correctness of data as it pass through interconnect fabric. Automation reduces verification effort by automatically creating authenticated infrastructure, stimulus vector and coverage model to support all transactions exchanged between Masters and Slaves within an SOC. This approach enables a protocol independent scoreboard to check data integrity and verify different data path transactions fo and from each port of bus fabric. We applied the proposed solution to various bus matrix testing which lead to 40% save in verification cycle.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127400494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architecture of efficient word processing using Hadoop MapReduce for big data applications
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456612
Bichitra Mandal, Srinivas Sethi, R. Sahoo
Understanding the characteristics of MapReduce workloads in a Hadoop cluster is key to making optimal and efficient configuration decisions and improving system efficiency. MapReduce is a very popular parallel processing framework for large-scale data analytics and has become an effective method for processing massive data on clusters of computers. In the last decade, the numbers of customers, services, and items of information have increased rapidly, creating a big data analysis problem for service systems. Keeping up with the increasing volume of datasets requires efficient analytical capability to process and analyze data in two phases: mapping and reducing. Between the mapping and reducing phases, MapReduce requires a shuffle to globally exchange the intermediate data generated by the mappers. This paper proposes a novel shuffling strategy to enable efficient data movement and reduction in MapReduce, applied to counting consecutive words and their occurrences in a word processor. To improve the scalability and efficiency of word processing in a big data environment, the counting of repeated consecutive words with shuffling is implemented on Hadoop. The approach can be applied on a widely adopted distributed computing platform, and also to large documents on a single word processor, using the MapReduce parallel processing paradigm.
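A minimal, self-contained sketch of the map-shuffle-reduce flow for counting consecutive word pairs; it mirrors the three phases the abstract describes, but not the paper's specific shuffling strategy, and the sample document is invented for illustration.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (consecutive word pair, 1) for every adjacent pair in each line."""
    for line in lines:
        words = line.split()
        for left, right in zip(words, words[1:]):
            yield (f"{left} {right}", 1)

def shuffle_phase(pairs):
    """Shuffle: group intermediate values by key, as Hadoop does between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(grouped):
    """Reduce: sum the counts for each consecutive-word pair."""
    return {key: sum(values) for key, values in grouped}

# hypothetical usage on an in-memory "document"
document = ["to be or not to be", "to be is to do"]
print(reduce_phase(shuffle_phase(map_phase(document))))
```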
{"title":"Architecture of efficient word processing using Hadoop MapReduce for big data applications","authors":"Bichitra Mandal, Srinivas Sethi, R. Sahoo","doi":"10.1109/MAMI.2015.7456612","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456612","url":null,"abstract":"Understanding the characteristics of MapReduce workloads in a Hadoop, is the key in making optimal and efficient configuration decisions and improving the system efficiency. MapReduce is a very popular parallel processing framework for large-scale data analytics which has become an effective method for processing massive data by using cluster of computers. In the last decade, the amount of customers, services and information increasing rapidly, yielding the big data analysis problem for service systems. To keep up with the increasing volume of datasets, it requires efficient analytical capability to process and analyze data in two phases. They are mapping and reducing. Between mapping and reducing phases, MapReduce requires a shuffling to globally exchange the intermediate data generated by the mapping. In this paper, it is proposed a novel shuffling strategy to enable efficient data movement and reduce for MapReduce shuffling with number of consecutive words and their count in the word processor. To improve its scalability and efficiency of word processor in big data environment, repetition of consecutive words count with shuffling is implemented on Hadoop. It can be implemented in a widely-adopted distributed computing platform and also in single word processor big documents using the MapReduce parallel processing paradigm.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126447881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LBP and Weber law descriptor feature based CRF model for detection of man-made structures
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456581
S. Behera, P. Nanda
In this paper, we propose a combined Local Binary Pattern (LBP) and Weber Law Descriptor (WLD) feature-based Conditional Random Field (CRF) model for the detection of man-made structures, such as buildings, in natural scenes. In natural scenes, a structure may have textural attributes, or some portions of the object may appear as textures. CRF model learning is carried out in feature space. The spatial contextual dependencies of the structures are captured by intrascale LBP features and interscale WLD features. The CRF model learning problem is formulated in a pseudolikelihood framework, while the inferred labels are obtained by maximizing the posterior distribution over the feature space. The iterated conditional modes (ICM) algorithm is used to obtain the labels. The proposed algorithm was tested successfully on many images and was found to be better than Kumar's algorithm in terms of detection accuracy.
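For reference, a minimal sketch of the textbook 3x3 LBP code on which such intrascale features build; the paper's multiscale LBP/WLD feature set and the CRF itself are not reproduced, and the function and variable names are assumptions.

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of each
    interior pixel against the centre pixel and pack the comparison bits into
    a code in [0, 255].  Textbook LBP only, not the paper's full descriptor."""
    img = np.asarray(image, dtype=float)
    centre = img[1:-1, 1:-1]
    # neighbour offsets in a fixed clockwise order, one bit per neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

# hypothetical usage on a random grayscale patch
patch = np.random.randint(0, 256, size=(8, 8))
print(lbp_8(patch))
```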
{"title":"LBP and Weber law descriptor feature based CRF model for detection of man-made structures","authors":"S. Behera, P. Nanda","doi":"10.1109/MAMI.2015.7456581","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456581","url":null,"abstract":"In this paper, we have proposed a combined Local Binary Pattern (LBP) and Weber Law Descriptor (WLD) feature based Conditional Random Field (CRF) model for detection of man made structures such as buildings in natural scenes. In natural scenes, the structure may have textural attributes or some portions of the object may be apparent as textures. The CRF model learning has been carried out in feature space. The spatial contextual dependencies of the structures has been taken care by the intrascale LBP features and interscale WLD features. The CRF model learning problem have been formulated in pseudolikelihood framework while the inferred labels have been obtained by maximizing the posterior distribution of the feature space. Iterated conditional mode algorithm (ICM) has been used to obtain the labels. The proposed algorithm could successfully be tested with many images and was found to be better than that of Kumar's algorithm in terms of detection accuracy.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"107 Pt 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129096353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building semantics of E-agriculture in India: Semantics in e-agriculture
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456602
Sasmita Pani, Jibitesh Mishra
Various web-based agriculture information systems exist. These systems provide farmers with required information about different crops, soils, farming techniques, and so on. They deal with numerous kinds of data, but they do not maintain consistency and semantics in the data. Ontologies are therefore used on the web to provide meaningful annotations and a vocabulary of terms for a given domain. In this paper we build an ontology for an agriculture system in the Web Ontology Language (OWL). The paper presents the various classes and subclasses, expressed in OWL DL using Protege 5.0, for an e-agriculture information system, and also shows the classes, subclasses, and the relationships among them in a UML class diagram for a web-based agriculture information system, or e-agriculture.
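A small sketch of declaring OWL classes, a subclass link, and an object property for an e-agriculture vocabulary, using Python's rdflib instead of Protege 5.0; the namespace and class names are illustrative assumptions, not the paper's ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

AGRI = Namespace("http://example.org/e-agriculture#")  # assumed namespace
g = Graph()
g.bind("agri", AGRI)

# declare OWL classes and a simple hierarchy (illustrative class names)
for cls in (AGRI.Crop, AGRI.Soil, AGRI.FarmingTechnique, AGRI.Cereal):
    g.add((cls, RDF.type, OWL.Class))
g.add((AGRI.Cereal, RDFS.subClassOf, AGRI.Crop))

# an object property linking crops to the soil they grow in
g.add((AGRI.growsIn, RDF.type, OWL.ObjectProperty))
g.add((AGRI.growsIn, RDFS.domain, AGRI.Crop))
g.add((AGRI.growsIn, RDFS.range, AGRI.Soil))

print(g.serialize(format="turtle"))
```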
{"title":"Building semantics of E-agriculture in India: Semantics in e-agriculture","authors":"Sasmita Pani, Jibitesh Mishra","doi":"10.1109/MAMI.2015.7456602","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456602","url":null,"abstract":"There exists various web based agriculture information systems. These systems are providing required information to farmers about different crops, soil, different farming techniques etc. These web based agriculture information systems deal with numerous kinds of data but they don't maintain consistency and semantics in data. Hence ontology is used in web and provides meaningful annotations and vocabulary of terms about a certain domain. Here in this paper we are building ontology in agriculture system in web ontology language (OWL). This paper shows various classes and subclasses using OWL DL in protege5.0 for an e-agriculture information system. This paper also provides various classes and subclasses and relationship among the classes in UML class diagram for a web based agriculture information system or e-agriculture.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126551712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of plant species using non-imaging hyperspectral data
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456613
Amarsinh Varpe, Yogesh D. Rajendra, Amol D. Vibhute, S. Gaikwad, K. Kale
Non-imaging hyperspectral data cover the spectral range from 400 to 2500 nm, which makes it possible to identify each unique material on the surface. Identifying plant species is a difficult task, both manually and computationally. In this paper, we propose a plant species identification system based on non-imaging hyperspectral data and design our own database for the experiments. We identified various plant species and applied a support vector machine (SVM) algorithm for recognition. An overall accuracy of 91% was achieved with the SVM.
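A minimal scikit-learn sketch of the recognition step the abstract describes, an SVM trained on per-sample reflectance spectra; the spectra below are synthetic stand-ins, not the authors' spectral database, and the band count is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in: 120 reflectance spectra (assumed 211 bands over 400-2500 nm), 3 species
rng = np.random.default_rng(0)
n_per_class, n_bands = 40, 211
X = np.vstack([rng.normal(loc=mu, scale=0.05, size=(n_per_class, n_bands))
               for mu in (0.2, 0.4, 0.6)])
y = np.repeat([0, 1, 2], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```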
{"title":"Identification of plant species using non-imaging hyperspectral data","authors":"Amarsinh Varpe, Yogesh D. Rajendra, Amol D. Vibhute, S. Gaikwad, K. Kale","doi":"10.1109/MAMI.2015.7456613","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456613","url":null,"abstract":"Hyperspectral non-imaging data provides the spectral range from 400-2500nm which has the ability to identify each and every unique materials on the surface. The plant species identification is critical task manually and computationally. In the present paper, we have proposed plant species identification system based on non-imaging hyperspectral data and designed our own database for experiment. Also we have identified various plant species and performed support vector machine (SVM) algorithm on it for recognition. The overall accuracy 91% was achieved through SVM.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124555651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of wireless propagation models for long term evolution using NS-3
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456599
Sanatan Mohanty, S. Mishra
Long Term Evolution (LTE) is the 4G wireless broadband access technology aimed at providing multimedia services over IP networks. It is designed to improve system capacity and coverage, enhance the user experience through higher data rates and reduced latency, lower deployment and operating costs, and integrate seamlessly with existing communication systems. This paper concerns wireless propagation models, which play a very significant role in the planning of any wireless network. A comparison is presented among the different propagation models, in terms of both path loss and computational complexity, using the NS-3 simulator.
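Not the NS-3 model implementations themselves, but a short sketch of two textbook path-loss formulas of the kind such comparisons cover; the carrier frequency and path-loss exponent below are assumed values for illustration.

```python
import math

def friis_path_loss_db(distance_m, frequency_hz):
    """Free-space (Friis) path loss in dB: 20*log10(4*pi*d/lambda)."""
    wavelength = 3.0e8 / frequency_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)

def log_distance_path_loss_db(distance_m, frequency_hz, exponent=3.0, d0_m=1.0):
    """Log-distance model: free-space loss at reference distance d0 plus
    10*n*log10(d/d0), with path-loss exponent n (assumed value here)."""
    return friis_path_loss_db(d0_m, frequency_hz) + 10.0 * exponent * math.log10(distance_m / d0_m)

# compare the two models at an assumed LTE carrier of 2.1 GHz
for d in (100, 500, 1000, 2000):
    print(d, round(friis_path_loss_db(d, 2.1e9), 1), round(log_distance_path_loss_db(d, 2.1e9), 1))
```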
{"title":"Performance evaluation of wireless propagation models for long term evolution using NS-3","authors":"Sanatan Mohanty, S. Mishra","doi":"10.1109/MAMI.2015.7456599","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456599","url":null,"abstract":"Long Term Evolution (LTE) is the 4G wireless broadband access technology aimed at providing multimedia services based on IP networks. It has been designed to improve system capacity and coverage, improve user experience through higher data rates and reduced latency, reduced deployment and operating costs, and seamless integration with existing communication systems. This paper concerns about the wireless propagation models, which plays a very significant role in planning of any wireless network. In this paper, a comparison has been presented among the different propagation models both in terms of path loss and computational complexity using the NS-3 simulator.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131625186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison study among GPU and map reduce approach for searching operation on index file in database query processing
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456608
A. Sahoo, Sundar Sourav Sarangi, Rachita Misra
As the amount of data in different forms increases day by day, it becomes very difficult to process. This unstructured form of data cannot easily be retrieved through query processing, since SQL queries normally act on structured data. To convert unstructured data into structured data, Hadoop provides the MapReduce approach. Instead of using the map function, a GPU can be used to process the data in parallel, with the reduce function then applied to the processed data. Here we compare the two approaches, map-reduce and GPU-reduce, by measuring the performance of searching an index file. Since Hadoop is a framework based purely on Java, we use the JCUDA programming language to implement the GPU-reduce approach.
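The JCUDA kernel itself is not reproduced here; purely as a hedged plain-Python sketch of the structure being compared, the snippet below expresses the index-file search as a map step over chunks followed by a reduce step that merges hits (the data layout and names are assumptions).

```python
def map_search(chunk, key):
    """Map: scan one chunk of the index file and emit (key, position) hits."""
    return [(key, pos) for pos, entry in chunk if entry == key]

def reduce_search(partial_hits):
    """Reduce: merge the per-chunk hit lists into one result set."""
    return [hit for hits in partial_hits for hit in hits]

# toy "index file": (position, term) records split into chunks; in the GPU-reduce
# variant the per-chunk scan below would run as a CUDA/JCUDA kernel instead of a
# map task, while the reduce step stays the same.
index_chunks = [
    [(0, "apple"), (1, "mango"), (2, "apple")],
    [(3, "guava"), (4, "apple"), (5, "mango")],
]
partial = [map_search(chunk, "apple") for chunk in index_chunks]
print(reduce_search(partial))
```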
{"title":"A comparison study among GPU and map reduce approach for searching operation on index file in database query processing","authors":"A. Sahoo, Sundar Sourav Sarangi, Rachita Misra","doi":"10.1109/MAMI.2015.7456608","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456608","url":null,"abstract":"As amount of data in different form is increased day by day; it is very difficult to process it. This unstructured form of data cannot be easily retrieved through query processing. Normally SQL query acts on structured data. To convert unstructured data into structured data, Hadoop provides map reduce approach. Instead using map function, we can use GPU approach for processing data in parallel and then we can use reduce function on processed data. Here we compare two approach i.e. map-reduce approach and gpu-reduce approach to calculate performance measurement for searching index file. As Hadoop is a framework which is purely based on Java, we use JCUDA programming language to implement gpu-reduce approach.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130344469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault diagnosis based on intelligent particle filter
Pub Date: 2015-12-01 | DOI: 10.1109/MAMI.2015.7456586
Wei Sun, Jian Hou
Practical production systems are usually complex, nonlinear, and non-Gaussian. Unlike some other fault diagnosis methods, the particle filter can be applied effectively to nonlinear and non-Gaussian systems. However, the traditional particle filter algorithm suffers from the particle impoverishment problem, which degrades the results of state estimation. In this paper, by analyzing the particle filter algorithm, we conclude that the general particle impoverishment problem stems from a loss of particle diversity. We then design an intelligent particle filter (IPF) to deal with particle impoverishment. The IPF relieves the impoverishment problem by using a genetic strategy; in fact, the general PF is a special case of the IPF with particular parameter settings. An experiment on a 160 MW unit fuel model shows that the intelligent particle filter can increase particle diversity and improve the state estimation results.
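The abstract does not specify the IPF's genetic operators; as a hedged sketch, the snippet below is a plain bootstrap particle filter in which the resampled particles receive a mutation-style jitter, one simple genetic-strategy device for restoring the diversity that resampling destroys. The state model, noise levels, and scales are illustrative assumptions, not the paper's 160 MW fuel model or its exact IPF.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, y, mutation_scale=0.05):
    """One step of a bootstrap particle filter with a genetic-style mutation
    after resampling to counter particle impoverishment (illustrative only)."""
    # propagate through an assumed linear-Gaussian state model
    particles = 0.5 * particles + rng.normal(0.0, 0.2, size=particles.shape)
    # weight by the likelihood of the observation y (assumed Gaussian noise)
    weights = np.exp(-0.5 * ((y - particles) / 0.3) ** 2)
    weights /= weights.sum()
    # resample: this is where diversity collapses in the plain PF
    particles = rng.choice(particles, size=particles.size, p=weights)
    # genetic-style mutation: jitter the clones to restore diversity
    return particles + rng.normal(0.0, mutation_scale, size=particles.shape)

# hypothetical run on a matching simulated system
x_true = 1.0
particles = rng.normal(0.0, 1.0, size=500)
for _ in range(20):
    x_true = 0.5 * x_true + rng.normal(0.0, 0.2)
    y = x_true + rng.normal(0.0, 0.3)
    particles = particle_filter_step(particles, y)
print("true state:", round(x_true, 3), "estimate:", round(particles.mean(), 3))
```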
{"title":"Fault diagnosis based on intelligent particle filter","authors":"Wei Sun, Jian Hou","doi":"10.1109/MAMI.2015.7456586","DOIUrl":"https://doi.org/10.1109/MAMI.2015.7456586","url":null,"abstract":"Practical production systems are usually complex, nonlinear and non-Gaussian. Different from some other fault diagnosis methods, particle filter can applied to nonlinear and non-Gaussian systems effectively. The particle impoverishment problem exists in the traditional particle filter algorithm, which influences the results of state estimation. In this paper, we conclude that the general particle impoverishment problem comes from the impoverishment of particle diversity by analyzing the particle filter algorithm. We then design an intelligent particle filter(IPF) to deal with particle impoverishment. IPF relieves the particle impoverishment problem using the genetic strategy. In fact, the general PF is a special case of IPF relieves the particular parameters. Experiment on 160 MW unit fuel model shows that the intelligent particle filter can increase the particles diversity and improve the state estimation results.","PeriodicalId":108908,"journal":{"name":"2015 International Conference on Man and Machine Interfacing (MAMI)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133178947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}