J. Handley, Marie-Luise Schneider, Victor Ciriza, J. Earl
A managed print service (MPS) manages the printing, scanning and facsimile devices in an enterprise to control cost and improve availability. Services include supplies replenishment, maintenance, repair, and usage reporting. Customers are billed per page printed. Data are collected from a network of devices to facilitate management. The number of pages printed per device must be accurately counted to bill the customer fairly. Software errors, hardware changes, repairs, and human error all contribute to “meter reads” that are exceptionally high and would likely be challenged by the customer if billed. Account managers periodically review data for each device in an account. This process is tedious and time-consuming, and an automated solution is desired. Exceptional print volumes are not always salient, and detecting them statistically is prone to errors owing to the nonstationarity of the data. Mean levels and variances change over time, and usage is highly autocorrelated, which precludes simple detection methods based on deviations from an average background. A solution must also be computationally inexpensive and require little auxiliary storage because hundreds of thousands of streams of device data must be processed. We present an algorithm and system for online detection of extreme print volumes that uses dynamic linear models (DLMs) with variance learning. A DLM is a state-space time series model comprising a random mean-level system process and a random observation process. Both components are updated using Bayesian statistics. After each update, a forecast value and its estimated variance are calculated. A read is flagged as exceptionally high if its value is highly unlikely with respect to the forecast value and its standard deviation. We provide implementation details and results of a field test in which the error rate was decreased from 26.4% to 0.5% on 728 observed meter reads.
{"title":"Extreme Volume Detection for Managed Print Services","authors":"J. Handley, Marie-Luise Schneider, Victor Ciriza, J. Earl","doi":"10.1109/ICMLA.2010.95","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.95","url":null,"abstract":"A managed print service (MPS) manages the printing, scanning and facsimile devices in an enterprise to control cost and improve availability. Services include supplies replenishment, maintenance, repair, and use reporting. Customers are billed per page printed. Data are collected from a network of devices to facilitate management. The number of pages printed per device must be accurately counted to fairly bill the customer. Software errors, hardware changes, repairs, and human error all contribute to “meter reads” that are exceptionally high and are apt to be challenged by the customer were they to be billed. Account managers periodically review data for each device in an account. This process is tedious and time consuming and an automated solution is desired. Exceptional print volumes are not always salient and detecting them statistically is prone to errors owing to nonstationarity of the data. Mean levels and variances change over time and usage is highly auto correlated which precludes simple detection methods based on deviations from an average background. A solution must also be computationally inexpensive and require little auxiliary storage because hundreds of thousands of streams of device data must be processed. We present an algorithm and system for online detection of extreme print volumes that uses dynamic linear models (DLM) with variance learning. A DLM is a state space time series model comprising a random mean level system process and a random observation process. Both components are updated using Bayesian statistics. After each update, a forecasted value and its estimated variance are calculated. 
A read is flagged as exceptionally high if its value is highly unlikely with respect to a forecasted value and its standard deviation. We provide implementation details and results of a field test in which error rate was decreased from 26.4% to 0.5% on 728 observed meter reads.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133610229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
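The variance-learning update loop described in the abstract can be sketched as a local-level DLM with a discount factor for the state evolution. This is a minimal sketch under stated assumptions: the parameter names, default values, and the flag-and-skip policy for extreme reads are illustrative choices, not the paper's actual implementation.

```python
import math

class DLMOutlierDetector:
    """Local-level dynamic linear model with observation-variance learning.
    A hypothetical minimal implementation: discount-factor state evolution,
    scale-free variance recursion, and a flag-and-skip policy for extremes."""

    def __init__(self, m0=0.0, C0=1e6, S0=1.0, n0=1.0, delta=0.9, k=4.0):
        self.m, self.C = m0, C0      # posterior mean / scale of the level
        self.S, self.n = S0, n0      # obs. variance estimate and its d.o.f.
        self.delta, self.k = delta, k

    def step(self, y):
        a, R = self.m, self.C / self.delta   # prior via discounted evolution
        f, Q = a, R + self.S                 # one-step forecast and variance
        e = y - f
        extreme = abs(e) > self.k * math.sqrt(Q)
        if not extreme:                      # skip update on flagged reads
            A = R / Q
            self.n += 1.0
            self.S += (self.S / self.n) * (e * e / Q - 1.0)
            self.m = a + A * e
            self.C = A * self.S
        return f, Q, extreme
```

The model needs only four scalars per device stream, consistent with the abstract's requirement of little auxiliary storage across hundreds of thousands of streams.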
We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RLS) regression and classification, also known as the least-squares support vector machine or ridge regression. The algorithm, which we call greedy RLS, starts from the empty feature set, and on each iteration adds the feature whose addition provides the best leave-one-out cross-validation performance. Our method is considerably faster than previously proposed ones, since its time complexity is linear in the number of training examples, the number of features in the original data set, and the desired size of the set of selected features. As a side effect, we therefore obtain a new training algorithm for learning sparse linear RLS predictors that can be used for large-scale learning. This speed is possible due to matrix-calculus-based shortcuts for leave-one-out and feature addition. We experimentally demonstrate the scalability of our algorithm compared to previously proposed implementations.
{"title":"Speeding Up Greedy Forward Selection for Regularized Least-Squares","authors":"T. Pahikkala, A. Airola, T. Salakoski","doi":"10.1109/ICMLA.2010.55","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.55","url":null,"abstract":"We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RLS) regression and classification, also known as the least-squares support vector machine or ridge regression. The algorithm, which we call greedy RLS, starts from the empty feature set, and on each iteration adds the feature whose addition provides the best leave-one-out cross-validation performance. Our method is considerably faster than the previously proposed ones, since its time complexity is linear in the number of training examples, the number of features in the original data set, and the desired size of the set of selected features. Therefore, as a side effect we obtain a new training algorithm for learning sparse linear RLS predictors which can be used for large scale learning. This speed is possible due to matrix calculus based short-cuts for leave-one-out and feature addition. We experimentally demonstrate the scalability of our algorithm compared to previously proposed implementations.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132189739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
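The selection loop can be illustrated with the classic leave-one-out shortcut for ridge regression (the PRESS statistic), which scores a candidate feature set without refitting n times. This naive sketch recomputes the hat matrix for every candidate, so it shows the LOO shortcut but not the paper's low-rank updates that achieve linear time; all names are assumptions.

```python
import numpy as np

def press_score(X, y, lam=1.0):
    """Leave-one-out squared error for ridge regression via the PRESS
    shortcut: e_loo_i = (y_i - yhat_i) / (1 - H_ii), with no refitting."""
    n = X.shape[0]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = (y - H @ y) / (1.0 - np.diag(H))
    return float(resid @ resid / n)

def greedy_forward_rls(X, y, k, lam=1.0):
    """Greedy forward selection: at each step, add the remaining feature
    whose inclusion minimizes the leave-one-out error."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [(press_score(X[:, selected + [j]], y, lam), j)
                  for j in remaining]
        best = min(scores)[1]
        selected.append(best)
        remaining.remove(best)
    return selected
```

The real greedy RLS replaces the per-candidate solve with rank-one updates, which is where the claimed linear complexity comes from.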
MicroRNAs (miRNAs) are newly discovered endogenous small non-coding RNAs (21-25nt) that target their complementary gene transcripts for degradation or translational repression. The biogenesis of a functional miRNA is largely dependent on the secondary structure of the miRNA precursor (pre-miRNA). Recently, it has been shown that miRNAs are present in the genome in the form of polycistronic transcriptional units in plants and animals. It will be important to design methods to predict such structures for miRNA discovery and its applications in gene silencing. In this paper, we propose a parallel algorithm based on the master-slave architecture to predict the secondary structure from an input sequence. First, the master processor partitions the input sequence into subsequences and distributes them to the slave processors. The slave processors then predict the secondary structure for their individual subsequences. Afterward, the slave processors return their results to the master processor. Finally, the master processor merges the partial structures from the slave processors into a whole candidate secondary structure. The optimal structure is obtained by sorting the candidate structures according to their scores. Our experimental results indicate that the actual speed-ups match the trend of the theoretical values.
{"title":"A Parallel Algorithm for Predicting the Secondary Structure of Polycistronic MicroRNAs","authors":"Dianwei Han, G. Tang, Jun Zhang","doi":"10.1109/ICMLA.2010.80","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.80","url":null,"abstract":"MicroRNAs (miRNAs) are newly discovered endogenous small non-coding RNAs (21-25nt) that target their complementary gene transcripts for degradation or translational repression. The biogenesis of a functional miRNA is largely dependent on the secondary structure of the miRNA precursor (pre-miRNA). Recently, it has been shown that miRNAs are present in the genome as the form of polycistronic transcriptional units in plants and animals. It will be important to design methods to predict such structures for miRNA discovery and its applications in gene silencing. In this paper, we propose a parallel algorithm based on the master-slave architecture to predict the secondary structure from an input sequence. First, the master processor partitions the input sequence into subsequences and distributes them to the slave processors. The slave processors will then predict the secondary structure based on their individual task. Afterward, the slave processors will return their results to the master processor. Finally, the master processor will merge the partial structures from the slave processors into a whole candidate secondary structure. The optimal structure is obtained by sorting the candidate structures according to their scores. 
Our experimental results indicate that the actual speed-ups match the trend of theoretic values.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132600453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
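The partition/distribute/merge pattern described above can be sketched as follows. The fragment folder here is a deliberately trivial placeholder (a real implementation would call an MFE folding routine), the merge is plain concatenation rather than base-pair stitching, and a process pool would replace the thread pool for CPU-bound folding; all of this is assumption, only the master-slave workflow is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def fold_fragment(seq):
    """Placeholder for a real secondary-structure predictor: returns the
    fragment plus a dummy score counting complementary end-to-end pairs."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    score = sum(1 for a, b in zip(seq, reversed(seq)) if pairs.get(a) == b)
    return seq, score // 2

def predict_parallel(sequence, n_workers=3):
    """Master: partition the sequence, farm fragments out, merge results."""
    step = max(1, len(sequence) // n_workers)
    chunks = [sequence[i:i + step] for i in range(0, len(sequence), step)]
    with ThreadPoolExecutor(n_workers) as pool:
        partial = list(pool.map(fold_fragment, chunks))  # slaves in parallel
    merged = "".join(frag for frag, _ in partial)        # master merges
    total = sum(s for _, s in partial)
    return merged, total
```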
Quantum systems are promising candidates for future computing and information-processing devices. In a large system, information about the quantum states and processes may be incomplete and scattered. To integrate the distributed information, we propose a quantum version of probabilistic graphical models. Variables in the model (quantum states and measurement outcomes) are linked by several types of operators (unitary, measurement, and merge/split operators). We propose algorithms for three machine learning tasks in quantum probabilistic graphical models: a belief propagation algorithm for inference of unknown states, an iterative algorithm for simultaneous estimation of parameter values and hidden states, and an active learning algorithm to select measurement operators based on observed evidence. We validate these algorithms on simulated data and point out future extensions toward a more comprehensive theory of quantum probabilistic graphical models.
{"title":"A Probabilistic Graphical Model of Quantum Systems","authors":"Chen-Hsiang Yeang","doi":"10.1109/ICMLA.2010.30","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.30","url":null,"abstract":"Quantum systems are promising candidates of future computing and information processing devices. In a large system, information about the quantum states and processes may be incomplete and scattered. To integrate the distributed information we propose a quantum version of probabilistic graphical models. Variables in the model (quantum states and measurement outcomes) are linked by several types of operators (unitary, measurement, and merge/split operators). We propose algorithms for three machine learning tasks in quantum probabilistic graphical models: a belief propagation algorithm for inference of unknown states, an iterative algorithm for simultaneous estimation of parameter values and hidden states, and an active learning algorithm to select measurement operators based on observed evidence. We validate these algorithms on simulated data and point out future extensions toward a more comprehensive theory of quantum probabilistic graphical models.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124517947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
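The two basic operator types linking variables in such a model can be shown concretely: a unitary edge evolves a state vector, and a measurement edge yields outcome probabilities via the Born rule. This is generic quantum mechanics used for illustration, not the paper's inference algorithms.

```python
import numpy as np

# Hadamard unitary: one example of a unitary operator linking two
# state variables in the graphical model.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def evolve(state, U):
    """Apply a unitary operator edge to a state variable."""
    return U @ state

def measure_probs(state):
    """Measurement operator edge: Born-rule outcome probabilities
    in the computational (Z) basis."""
    return np.abs(state) ** 2

psi = np.array([1.0, 0.0])   # |0>
psi = evolve(psi, H)         # (|0> + |1>) / sqrt(2)
p = measure_probs(psi)       # equal outcome probabilities
```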
F. Kurokawa, H. Maruta, J. Sakemi, Akihiro Nakamura, H. Osuga
This paper presents a novel prediction-based digital control dc-dc converter. In this method, in addition to the P-I-D feedback control, prediction-based control is used as the feedforward control. The feedforward control adopts a neural-network-based method. This improves the transient response very effectively when the load changes quickly. As a result, the undershoot of the output voltage and the overshoot of the reactor current are suppressed effectively compared with the conventional method under a step change in load resistance. It is confirmed that the prediction-based control technique is useful for realizing a high-performance digital control method for the dc-dc converter.
{"title":"A New Prediction Based Digital Control DC-DC Converter","authors":"F. Kurokawa, H. Maruta, J. Sakemi, Akihiro Nakamura, H. Osuga","doi":"10.1109/ICMLA.2010.110","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.110","url":null,"abstract":"This paper presents a novel prediction based digital control dc-dc converter. In this method, addition to the P-I-D control as the feedback control, the prediction based control is used as the feedfoward control. In the feedfoward control, the neural network based method is adopted. This works to improve the transient response very effectively when the load is changed quickly. As a result, the undershoot of the output voltage and the overshoot of the reactor current are suppressed effectively as compared with the conventional one in the step change of load resistance. It is confirmed that the prediction based control technique is useful to realize the high performance digital control method for the dc-dc converter.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129161248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
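The control structure — P-I-D feedback plus a predictive feedforward term summed into the duty cycle — can be sketched as below. The gains, time step, and the externally supplied feedforward estimate are illustrative assumptions; in the paper the feedforward value would come from the trained neural network anticipating the load step.

```python
class PID:
    """Textbook discrete P-I-D controller (feedback path)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def duty_cycle(err, feedforward, pid):
    """Control output = P-I-D feedback + predictive feedforward term,
    clamped to a valid duty cycle.  The feedforward argument stands in
    for the neural-network prediction described in the abstract."""
    u = pid.update(err) + feedforward
    return min(max(u, 0.0), 1.0)
```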
A new loss function is introduced for Minimum Classification Error (MCE) training; it approaches the optimal Bayes risk and also improves performance over standard MCE systems when evaluated on the Aurora connected-digits database.
{"title":"Multi-Class Classification Using a New Sigmoid Loss Function for Minimum Classification Error (MCE)","authors":"M. Ratnagiri, L. Rabiner, B. Juang","doi":"10.1109/ICMLA.2010.20","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.20","url":null,"abstract":"A new loss function has been introduced for Minimum Classification Error, that approaches optimal Bayes' risk and also gives an improvement in performance over standard MCE systems when evaluated on the Aurora connected digits database.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129001950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
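For context, the standard MCE embedding that such a loss modifies looks like the sketch below: a misclassification measure contrasting the correct class score against a smooth maximum of the competitors, passed through a sigmoid. The exact shape of the paper's new sigmoid is not reproduced; the smoothing constant and slope here are arbitrary assumptions.

```python
import math

def mce_loss(scores, correct, alpha=1.0, eta=10.0):
    """Classic MCE loss: d = -g_correct + smooth-max of competing class
    scores, squashed by a sigmoid into (0, 1).  eta -> infinity recovers
    the hard max over competitors."""
    g_y = scores[correct]
    others = [s for i, s in enumerate(scores) if i != correct]
    g_comp = math.log(sum(math.exp(eta * s) for s in others)) / eta
    d = -g_y + g_comp                       # misclassification measure
    return 1.0 / (1.0 + math.exp(-alpha * d))
```

A correct, confident classification drives the loss toward 0; a misclassification drives it toward 1, which is what lets the empirical loss approximate classification error.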
The application of learning-based vision techniques to real scenarios usually requires a tuning procedure, which involves the acquisition and labeling of new data and in situ experiments in order to adapt the learning algorithm to each scenario. We address an automatic update procedure for the L2boost algorithm that is able to adapt the initial models learned off-line. Our method, named UAL2Boost, presents three new contributions: (i) an on-line and continuous procedure that recursively updates the current classifier, reducing the storage constraints; (ii) a probabilistic unsupervised update that eliminates the need for labeled data to adapt the classifier; and (iii) a multi-class adaptation method. We show the applicability of the on-line unsupervised adaptation to human action recognition and demonstrate that the system is able to automatically update the parameters of L2boost with linear temporal models, thus improving the output of the models learned off-line on new video sequences, in a recursive and continuous way. The automatic adaptation of UAL2Boost follows the idea of adapting the classifier incrementally: from simple to complex.
{"title":"Unsupervised and Online Update of Boosted Temporal Models: The UAL2Boost","authors":"P. Ribeiro, Plinio Moreno, J. Santos-Victor","doi":"10.1109/ICMLA.2010.143","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.143","url":null,"abstract":"The application of learning-based vision techniques to real scenarios usually requires a tunning procedure, which involves the acquisition and labeling of new data and in situ experiments in order to adapt the learning algorithm to each scenario. We address an automatic update procedure of the L2boost algorithm that is able to adapt the initial models learned off-line. Our method is named UAL2Boost and present three new contributions: (i) an on-line and continuous procedure that updates recursively the current classifier, reducing the storage constraints, (ii) a probabilistic unsupervised update that eliminates the necessity of labeled data in order to adapt the classifier and (iii) a multi-class adaptation method. We show the applicability of the on-line unsupervised adaptation to human action recognition and demonstrate that the system is able to automatically update the parameters of the L2boost with linear temporal models, thus improving the output of the models learned off-line on new video sequences, in a recursive and continuous way. 
The automatic adaptation of UAL2Boost follows the idea of adapting the classifier incrementally: from simple to complex.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129196496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
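The core idea of a probabilistic unsupervised update — letting the model's own confident predictions drive an online parameter adjustment, with no labels — can be illustrated with a toy logistic learner. This mirrors only the spirit of UAL2Boost's update, not its boosting formulation; the confidence threshold, learning rate, and model class are all assumptions.

```python
import math

class SelfTrainingLogistic:
    """Toy online learner: confident self-predictions act as soft
    pseudo-labels for a gradient step; uncertain samples are skipped."""
    def __init__(self, dim, lr=0.1, threshold=0.8):
        self.w = [0.0] * dim
        self.lr, self.threshold = lr, threshold

    def prob(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update_unsupervised(self, x):
        p = self.prob(x)
        if max(p, 1.0 - p) < self.threshold:
            return False                       # not confident: no update
        pseudo = 1.0 if p >= 0.5 else 0.0      # pseudo-label from the model
        grad = p - pseudo
        self.w = [wi - self.lr * grad * xi for wi, xi in zip(self.w, x)]
        return True
```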
This paper presents a simulation-based empirical study of the performance profile of random subsample ensembles with a hybrid mix of base learners in high dimensional feature spaces. The performance of a hybrid random subsample ensemble that uses a combination of C4.5, k-nearest neighbor (kNN) and naïve Bayes base learners is assessed through statistical testing in comparison to those of homogeneous random subsample ensembles that employ only one type of base learner. The simulation study employs five datasets with up to 20K features from the UCI Machine Learning Repository. Random subsampling without replacement is used to map the original high dimensional feature space of the five datasets to a multiplicity of lower dimensional feature subspaces. The simulation study explores the effect of certain design parameters, including the count of base classifiers and the subsampling rate, on the performance of the hybrid random subspace ensemble. The ensemble architecture utilizes the voting combiner in all cases. Simulation results indicate that hybridization of base learners in a random subsample ensemble improves prediction accuracy rates and yields more robust performance.
{"title":"Hybridization of Base Classifiers of Random Subsample Ensembles for Enhanced Performance in High Dimensional Feature Spaces","authors":"Santhosh Pathical, G. Serpen","doi":"10.1109/ICMLA.2010.118","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.118","url":null,"abstract":"This paper presents a simulation-based empirical study of the performance profile of random sub sample ensembles with a hybrid mix of base learner composition in high dimensional feature spaces. The performance of hybrid random sub sample ensemble that uses a combination of C4.5, k-nearest neighbor (kNN) and naïve Bayes base learners is assessed through statistical testing in comparison to those of homogeneous random sub sample ensembles that employ only one type of base learner. Simulation study employs five datasets with up to 20K features from the UCI Machine Learning Repository. Random sub sampling without replacement is used to map the original high dimensional feature space of the five datasets to a multiplicity of lower dimensional feature subspaces. The simulation study explores the effect of certain design parameters that include the count of base classifiers and sub sampling rate on the performance of the hybrid random subspace ensemble. The ensemble architecture utilizes the voting combiner in all cases. 
Simulation results indicate that hybridization of base learners for random sub sample ensemble improves the prediction accuracy rates and projects a more robust performance.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124591146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
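The subspace-plus-voting machinery can be sketched as follows. For brevity every ensemble member here is a 1-nearest-neighbour stand-in; the hybrid variant studied in the paper would rotate the base learner type (C4.5, kNN, naïve Bayes) per member. The member count, subsampling rate, and learner are illustrative assumptions.

```python
import random
from collections import Counter

def one_nn(train, labels, x, dims):
    """1-NN base learner restricted to a feature subset `dims`."""
    dist = lambda a: sum((a[i] - x[i]) ** 2 for i in dims)
    return labels[min(range(len(train)), key=lambda k: dist(train[k]))]

def subspace_ensemble_predict(train, labels, x, n_learners=7, rate=0.5,
                              seed=0):
    """Random subsampling of features without replacement per member,
    then a majority vote across members (the voting combiner)."""
    rng = random.Random(seed)
    n_feat = len(train[0])
    k = max(1, int(rate * n_feat))
    votes = []
    for _ in range(n_learners):
        dims = rng.sample(range(n_feat), k)   # one random feature subspace
        votes.append(one_nn(train, labels, x, dims))
    return Counter(votes).most_common(1)[0][0]
```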
F. Buettner, S. Gulliford, S. Webb, M. Partridge, A. Miah, K. Harrington, C. Nutting
A reduction in salivary flow and xerostomia are common side-effects after radiotherapy of head and neck tumours. Xerostomia can be modeled based on the dose to the parotid glands. To date, all spatial information has been discarded and dose-response models are usually reduced to the mean dose. We present novel morphological dose-response models and use multivariate Bayesian logistic regression to model xerostomia. We use 3D invariant statistical moments as morphometric descriptors to quantify the shape of the 3D dose distribution. As this results in a very high number of potential predictors, we apply a Bayesian variable-selection algorithm to find the best model based on any subset of all potential predictors. To do this, we determine the posterior probabilities of being the best model for all potential models and calculate the marginal probabilities that a variable should be included in a model. This was done using a Reversible Jump Markov Chain Monte Carlo algorithm. The performance of the best model was quantified using the deviance information criterion and a leave-one-out cross-validation (LOOCV). This methodology was applied to 64 head and neck cancer patients treated with either intensity-modulated radiotherapy (IMRT) or conventional radiotherapy. Results show a substantial increase in both model-fit and area under the curve (AUC) when including morphological information compared to conventional mean-dose models. The best mean-dose model for IMRT patients only resulted in an AUC of 0.63 after LOOCV while the best morphological model had an AUC of 0.90. For conventional patients the mean-dose model and the morphological model had AUC of 0.55 and 0.86 respectively. For a joint model with all patients pooled together, the mean dose model had an AUC of 0.75 and the morphological model an AUC of 0.88. 
We have shown that invariant statistical moments are good morphometric descriptors, and by using Bayesian variable selection we were able to identify models with substantially higher predictive power than conventional mean-dose models.
{"title":"Using a Bayesian Feature-selection Algorithm to Identify Dose-response Models Based on the Shape of the 3D Dose-distribution: An Example from a Head-and-neck Cancer Trial","authors":"F. Buettner, S. Gulliford, S. Webb, M. Partridge, A. Miah, K. Harrington, C. Nutting","doi":"10.1109/ICMLA.2010.113","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.113","url":null,"abstract":"A reduction in salivary flow and xerostomia are common side-effects after radiotherapy of head and neck tumours. Xerostomia can be modeled based on the dose to the parotid glands. To date, all spatial information has been discarded and dose-response models are usually reduced to the mean dose. We present novel morphological dose-response models and use multivariate Bayesian logistic regression to model xerostomia. We use 3D invariant statistical moments as morphometric descriptors to quantify the shape of the 3D dose distribution. As this results in a very high number of potential predictors, we apply a Bayesian variable-selection algorithm to find the best model based on any subset of all potential predictors. To do this, we determine the posterior probabilities of being the best model for all potential models and calculate the marginal probabilities that a variable should be included in a model. This was done using a Reversible Jump Markov Chain Monte Carlo algorithm. The performance of the best model was quantified using the deviance information criterion and a leave-one-out cross-validation (LOOCV). This methodology was applied to 64 head and neck cancer patients treated with either intensity-modulated radiotherapy (IMRT) or conventional radiotherapy. Results show a substantial increase in both model-fit and area under the curve (AUC) when including morphological information compared to conventional mean-dose models. The best mean-dose model for IMRT patients only resulted in an AUC of 0.63 after LOOCV while the best morphological model had an AUC of 0.90. 
For conventional patients the mean-dose model and the morphological model had AUC of 0.55 and 0.86 respectively. For a joint model with all patients pooled together, the mean dose model had an AUC of 0.75 and the morphological model an AUC of 0.88. We have shown that invariant statistical moments are a good morphometric descriptor and by using Bayesian variable selection we were able to identify models with a substantially higher predictive power than conventional mean-dose models.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115880211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
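The building block of the morphometric descriptors — a central moment of the 3D dose distribution — can be computed as below. Centering gives translation invariance; the full rotation- and scale-invariant combinations used in the paper are polynomial functions of these moments and are not reproduced here. Function names and the normalisation exponent are standard conventions, not the paper's exact notation.

```python
import numpy as np

def central_moment_3d(dose, p, q, r):
    """Central moment mu_pqr of a 3D dose array: sum over voxels of
    (z-zc)^p (y-yc)^q (x-xc)^r weighted by dose, with (zc, yc, xc)
    the dose-weighted centroid."""
    z, y, x = np.indices(dose.shape).astype(float)
    m000 = dose.sum()
    zc = (z * dose).sum() / m000
    yc = (y * dose).sum() / m000
    xc = (x * dose).sum() / m000
    return (((z - zc) ** p) * ((y - yc) ** q) * ((x - xc) ** r)
            * dose).sum()

def normalised_moment_3d(dose, p, q, r):
    """Scale-normalised central moment, the usual precursor to
    building invariant shape descriptors."""
    return central_moment_3d(dose, p, q, r) \
        / dose.sum() ** (1.0 + (p + q + r) / 3.0)
```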
Spatial time series data can be viewed as a set of time series collected simultaneously at a number of spatial locations over time. For example, mumps data show a tendency to spread to adjacent, broader regions in accordance with spatial location and time. Spatial time series models therefore have many space and time parameters. In this paper, we propose a method for Bayesian inference and prediction in spatial time series models using a Gibbs sampler in order to overcome convergence problems in numerical methods. Our results are illustrated using the data set of mumps cases reported monthly by the Korea Center for Disease Control and Prevention over the years 2001-2009, as well as a simulation study.
{"title":"Bayesian Inferences and Forecasting in Spatial Time Series Models","authors":"Sung Duck Lee, Duck-Ki Kim","doi":"10.1109/ICMLA.2010.170","DOIUrl":"https://doi.org/10.1109/ICMLA.2010.170","url":null,"abstract":"The spatial time series data can be viewed as a set of time series collected simultaneously at a number of spatial locations with time. For example, The Mumps data have a feature to infect adjacent broader regions in accordance with spatial location and time. Therefore, The spatial time series models have many parameters of space and time. In this paper, We propose the method of bayesian inferences and prediction in spatial time series models with a Gibbs Sampler in order to overcome convergence problem in numerical methods. Our results are illustrated by using the data set of mumps cases reported from the Korea Center for Disease Control and Prevention monthly over the years 2001-2009, as well as a simulation study.","PeriodicalId":336514,"journal":{"name":"2010 Ninth International Conference on Machine Learning and Applications","volume":"522 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114623439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
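The alternating conditional draws that define a Gibbs sampler can be shown on a deliberately scaled-down stand-in for the space-time model: a plain AR(1) series with conjugate updates. The priors (flat on the coefficient, Jeffreys-like on the variance) and all parameter names are assumptions for illustration only.

```python
import numpy as np

def gibbs_ar1(y, n_iter=800, burn=200, seed=0):
    """Gibbs sampler for y_t = phi * y_{t-1} + eps, eps ~ N(0, sigma2).
    Alternates the two full conditionals: phi | sigma2 is normal,
    sigma2 | phi is inverse-gamma."""
    rng = np.random.default_rng(seed)
    y0, y1 = y[:-1], y[1:]
    phi, sig2 = 0.0, 1.0
    draws = []
    for it in range(n_iter):
        # phi | sigma2: conjugate normal (flat prior)
        v = sig2 / (y0 @ y0)
        m = (y0 @ y1) / (y0 @ y0)
        phi = rng.normal(m, np.sqrt(v))
        # sigma2 | phi: inverse-gamma via b / Gamma(a) draw
        resid = y1 - phi * y0
        a, b = len(resid) / 2.0, (resid @ resid) / 2.0
        sig2 = b / rng.gamma(a)
        if it >= burn:
            draws.append((phi, sig2))
    return np.array(draws)
```

The full spatial model would add neighbour-coupling coefficients per location, each with its own conditional draw inside the same loop.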