Comparison of Interleaved Polling with Adaptive Cycle Time and Cyclic Demand Proportionality Algorithms
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775646
S. Krijestorac, J. Bagby
Ethernet passive optical network (EPON) is an access network that reliably delivers the essential services of voice, video, and data communication while providing the expected guarantees on their delivery in terms of defined quality of service (QoS) measures. This paper compares performance criteria such as delay, queue size, and packet loss ratio for two dynamic bandwidth allocation (DBA) algorithms: interleaved polling with adaptive cycle time (IPACT) and a new cyclic demand proportionality (CDP) algorithm. CDP gives the best performance in terms of delay vs. offered load, queue size vs. offered load, and packet loss ratio vs. offered load when compared with IPACT using a fixed allocation window size for transmission. Improvement in these three parameters is also seen when CDP is compared with IPACT using a limited allocation window size. A traffic generator that produces traffic with self-similar and long-range-dependence properties is used in the simulations.
{"title":"Comparison of Interleaved Polling with adaptive Cycle Time and Cyclic Demand Proportionality Algorithms","authors":"S. Krijestorac, J. Bagby","doi":"10.1109/ISSPIT.2008.4775646","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775646","url":null,"abstract":"Ethernet passive optical network (EPON) is an access network that delivers essential services of voice, video, and data communications reliably, while at the same time providing expected guarantees of the delivery of those services in terms of defined quality of service measures (QOS). This paper compares performance criteria such as delay, queue size, and packet loss ratio for two dynamic bandwidth allocation (DBA) algorithms: interleaved polling with adaptive cycle time (IPACT), and a new cyclic demand proportionality algorithm (CDP). CDP gives the best performances in terms delay vs. offered load, queue size vs. offered load and packet loss ratio vs. offered load compared to IPACT with the Fixed Allocation Window size for transmission. Improvement is seen for these three parameters when CDP is compared with IPACT with limited allocation window size. A traffic generator that gives traffic with self-similar and long range dependency properties is used in the simulations.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133253576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Improved Method for Tracking a Single Target in Variable Cluttered Environments
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775702
Fatemeh Rahemi, A. Sedigh, Alireza Fatehi, F. Razzazi
Tracking moving objects in variable cluttered environments is an active area of research, and it is common to use simplifying assumptions in such environments to facilitate the design. In this paper a new method for simulating completely non-Gaussian cluttered environments is presented. The method uses a variable process-noise variance to describe the variability of such environments. The key objective is to find an effective algorithm for tracking a single moving object in variable cluttered environments using the presented method. The new methodology is presented in two steps: in the first step we compare the accuracy of estimators in tracking a moving object, and in the second step the goal is to find the best algorithm for tracking a single moving target in variable cluttered environments.
{"title":"An Improved Method for Tracking a Single Target in Variable Cluttered Environments","authors":"Fatemeh Rahemi, A. Sedigh, Alireza Fatehi, F. Razzazi","doi":"10.1109/ISSPIT.2008.4775702","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775702","url":null,"abstract":"Tracking moving objects in variable cluttered environments is an active area of research. It is common to use some simplifying assumption in such environments to facilitate the design. In this paper a new method for simulating the completely non-Gaussian cluttered environments is presented. The method is based on using the variable variance of process noise as a description of variability in such environments. The key objective is to find an effective algorithm for tracking a single moving object in variable cluttered environments, with utilization of the presented method. The new methodology is presented in two steps. In the first step we compare the accuracy of estimators in tracking a moving object, and in the second step, the goal is to find the best algorithm for tracking a single moving target in variable cluttered environments.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129138070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Iterative Mitchell's Algorithm Based Multiplier
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775704
Z. Babic, A. Avramović, P. Bulić
This paper presents a new multiplier that can achieve arbitrary accuracy. The multiplier is based on the same number-representation idea as Mitchell's algorithm, but does not use the logarithm approximation. The proposed iterative algorithm is simple and efficient, achieving an error percentage as small as required, down to the exact result. The hardware solution involves only adders and shifters, so it is neither gate- nor power-intensive. Parallel circuits are used for error correction. The error summary for operands ranging from 8 to 16 bits indicates a very low error percentage with only two parallel correction circuits.
{"title":"An Iterative Mitchell's Algorithm Based Multiplier","authors":"Z. Babic, A. Avramović, P. Bulić","doi":"10.1109/ISSPIT.2008.4775704","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775704","url":null,"abstract":"This paper presents a new multiplier with possibility to achieve an arbitrary accuracy. The multiplier is based upon the same idea of numbers representation as Mitchell's algorithm, but does not use logarithm approximation. The proposed iterative algorithm is simple and efficient, achieving an error percentage as small as required, until the exact result. Hardware solution involves adders and shifters, so it is not gate and power consuming. Parallel circuits are used for error correction. The error summary for operands ranging from 8-bits to 16-bits operands indicates very low error percentage with only two parallel correction circuits.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126729364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precise Voicing Information Extraction in Speech Signals Using the Analytic Signal
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775673
S. Rossignol, O. Pietquin
This paper proposes a voiced-unvoiced measure based on the computation of the analytic signal. This voiced-unvoiced feature can be useful for many speech processing applications. For instance, in speech recognition it could be incorporated into commonly used acoustic feature vectors, such as the Mel-frequency cepstral coefficients (MFCC) and their first two derivatives, to improve the performance of the overall system. The developed measure has been evaluated on the TIMIT database. TIMIT has been manually segmented into phones, so the voicing information can easily be derived from this segmentation. It is shown in this paper that the automatic voiced-unvoiced segmentation obtained with the described method and the manual voiced-unvoiced segmentation provided by TIMIT are very similar.
{"title":"Precise Voicing Information Extraction in Speech Signals Using the Analytic Signal","authors":"S. Rossignol, O. Pietquin","doi":"10.1109/ISSPIT.2008.4775673","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775673","url":null,"abstract":"This paper proposes a voiced - unvoiced measure based on the Analytic Signal computation. This voiced - unvoiced feature can be useful for many speech processing applications. For instance, considering speech recognition, it could be incorporated into commonly used acoustic feature vectors, such as for example the Mel Frequency Cepstral Coefficients (MFCC) and their first two derivatives, in order to improve the performance of the overall system. The evaluation of the developed measure has been performed on the TIMIT database. TIMIT has been manually segmented into phones. The voicing information can easily be derived from this segmentation. It is shown in this paper that the automatic voiced - unvoiced segmentation obtained using the method described in the next sections and the manual voiced - unvoiced segmentation provided by TIMIT are very similar.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123388083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximizing the Zero-Error Density for RTRL
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775679
L. Alexandre
A learning principle called Zero-Error Density Maximization (Z-EDM) was recently introduced in the framework of MLP backpropagation. In this paper we present the adaptation of this principle to online learning in recurrent neural networks, more precisely to the Real-Time Recurrent Learning (RTRL) approach. We show how to modify the RTRL learning algorithm so that it learns using the Z-EDM criterion, by means of a sliding time window of previous error values. We present experiments showing that this new approach improves the convergence rate of the RNNs and improves the prediction performance in time series forecasting.
{"title":"Maximizing the Zero-Error Density for RTRL","authors":"L. Alexandre","doi":"10.1109/ISSPIT.2008.4775679","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775679","url":null,"abstract":"A new learning principle was introduced recently called the Zero-Error Density Maximization (Z-EDM) and was proposed in the framework of MLP backpropagation. In this paper we present the adaptation of this principle to online learning in recurrent neural networks, more precisely, to the Real Time Recurrent Learning (RTRL) approach. We show how to modify the RTRL learning algorithm in order to make it learn using Z-EDM criteria by using a sliding time window of previous error values. We present experiments showing that this new approach improves the convergence rate of the RNNs and improves the prediction performance in time series forecast.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121859993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking Dynamical Transition of Epileptic EEG Using Particle Filter
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775727
Hossein Mamaghanian, M. Shamsollahi, S. Hajipour
In this work we use the Liley EEG model as a dynamical model of the EEG. Two parameters of the model that are candidates for change during an epileptic seizure are defined as new states in the state-space representation of this dynamical model. An SIS particle filter is then applied to estimate the defined states over time, using the recorded epileptic EEG as the observation of the system. A method for fast numerical solution of the model's coupled nonlinear equations is proposed. The model is used to track the dynamical properties of the brain during an epileptic seizure. Tracking the changes of these newly defined states carries good information about the state transitions of the model (interictal/preictal/ictal) and can be used in online monitoring algorithms for predicting seizures in epilepsy.
{"title":"Tracking Dynamical Transition of Epileptic EEG Using Particle Filter","authors":"Hossein Mamaghanian, M. Shamsollahi, S. Hajipour","doi":"10.1109/ISSPIT.2008.4775727","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775727","url":null,"abstract":"In this work we used the Liley EEG model as a dynamical model of EEG. Two parameters of the model which are candidates for change during an epileptic seizure are defined as new states in state space representation of this dynamical model. Then SIS particle filter is applied for estimating the defined states over time using the recorded epileptic EEG as the observation of the system. A method for fast numerical solution of the nonlinear coupled equation of the model is proposed. This model is used for tracking the dynamical properties of brain during epileptic seizure. Tracking the changes of these new defined states of the model have good information about the state transition of the model (interictal/preictal/ictal) and can be used in online monitoring algorithms for predicting seizures in epilepsy.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132798924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Particle Swarm Optimization-Based Approach to Speaker Segmentation Based on Independent Component Analysis on GSM Digital Speech
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775731
S. M. Mirrezaie, K. Faez, Amir Asnaashari, Ali Ziaei
The Adaptive Multi-Rate (AMR) codec was standardized for GSM in 1999. AMR offers a substantial improvement in error robustness over previous GSM speech codecs by adapting speech and channel coding to channel conditions. The AMR speech codec has been adopted as a standard for IMT-2000 by ETSI and 3GPP and consists of eight source codecs with bit rates from 4.75 to 12.2 kbit/s. In this paper, we present an approach based on particle swarm optimization (PSO) that encodes possible segmentations of an audio recording and measures the mutual information between these segments and the audio data. This measure is used as the fitness function for the PSO. A compact encoding of the solution, which decreases the length of the PSO individuals and enhances the PSO convergence properties, is adopted. The algorithm has been tested on two real data sets in AMR format for speaker segmentation, obtaining very good results in all test problems. The results have been compared with a widely used genetic algorithm-based approach in several practical situations. No assumptions have been made about prior knowledge of speech signal characteristics; however, we assume that the speakers do not speak simultaneously and that there are no real-time constraints.
{"title":"A Particle Swarm Optimization-Based Approach to Speaker Segmentation Based on Independent Component Analysis on GSM Digital Speech","authors":"S. M. Mirrezaie, K. Faez, Amir Asnaashari, Ali Ziaei","doi":"10.1109/ISSPIT.2008.4775731","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775731","url":null,"abstract":"Adaptive Multi-Rate (AMR) codec was standardized for GSM in 1999. AMR offers substantial improvement over previous GSM speech codecs in error robustness by adapting speech and channel coding depending on channel conditions. The Adaptive Multi-Rate speech codec is adopted as a standard for IMT-2000 by ETSI and 3GPP and consists of eight source codecs with bit rates from 4.75 to 12.2 kbit/s. In this paper, we present an approach comprising of particle swarm optimization (PSO), which encodes possible segmentations of an audio record, and measures mutual information between these segments and the audio data. This measure is used as the fitness function for the PSO. A compact encoding of the solution for PSO which decreases the length of the PSO individuals and enhances the PSO convergence properties is adopted. The algorithm has been tested on two actual sets of data with AMR format for speaker segmentation, obtaining very good results in all test problems. The results have been compared to the widely used a genetic algorithm-based in several practical situations. No assumptions have been made about prior knowledge of speech signal characteristics. However, we assume that the speakers do not speak simultaneously and that we have no real-time constraints.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132931892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Watermarking Via Bspline Expansion and Natural Preserving Transforms
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775687
M. Fahmy, G. Raheem, O. S. Mohammed, O. Fahmy, G. Fahmy
In this paper, two approaches are proposed for digital image watermarking. The first approach embeds all the watermark information in the approximation coefficients of the host image's wavelet decomposition. This is achieved by combining a weighted least squares B-spline coefficient expansion of the watermark image with the host's approximation coefficients. In order to make the size of the B-spline expansion less than or equal to the size of the host's approximation matrix, the watermark image has to be decimated. The second approach applies natural preserving transforms (NPT) in a symmetrical manner to the host image. In this case, the logo or the secret key replaces some of the host image's bottom lines. After the NPT is applied, the original bottom lines of the host image replace the watermarked ones so that the host image looks natural. A novel fast least squares algorithm is proposed for watermark extraction. Illustrative examples are given to show the effectiveness of these methods. The results show that the proposed B-spline data hiding technique is robust to compression, and demonstrate the ability to extract the watermark from any NPT-watermarked image.
{"title":"Watermarking Via Bspline Expansion and Natural Preserving Transforms","authors":"M. Fahmy, G. Raheem, O. S. Mohammed, O. Fahmy, G. Fahmy","doi":"10.1109/ISSPIT.2008.4775687","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775687","url":null,"abstract":"In this paper, two approaches are proposed for digital image watermarking. In the first approach, we rely on embedding all the watermarking information in the approximation coefficients of the host's image wavelet decomposition. This is achieved by combining a weighted least squares Bspline coefficient expansion of the watermarking image, to the host's approximation coefficients. In order to make the size of Bspline expansion less or equal to the size of the host's approximation matrix, the watermarking image has to be decimated. The second approach relies on applying natural preserving transforms NPT, in a symmetrical manner to the host's image. In this case, the logo or the secret key replaces some of the host's image bottom lines. After applying NPT, the original host image bottom lines, replace the watermarked ones to make the host image looks natural. A novel fast least squares algorithm is proposed for watermark extraction. Illustrative examples are given, to show the effectiveness of these methods. Thes results show that the proposed Bspline data hiding technique is robust to compression, as well as the abilities of watermark extraction of any NPT watermarked images.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133834277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bidirectional Motion Estimation Approach Using Warping Mesh Combined to Frame Interpolation
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775707
V. Muñoz-Jiménez, A. Mokraoui-Zergainoh, J. Astruc
This paper concentrates on the bidirectional motion estimation problem for applications at very low bit rates. The missing frames in the original video sequence are predicted by the decoder using only the received frames, with no additional information. The selected moving objects in each decoded frame are initially meshed with quadrilateral blocks, which are then deformed using specific warping functions. The positions of the mesh nodes are adapted to the object's edges so that the reconstruction error is as small as possible. Afterwards, the displacements of these nodes are used to predict those of the moving objects in the missing frames. Finally, the meshed objects are reconstructed from the predicted nodes. The proposed approach is integrated into the H.264/AVC video coding standard. Simulation results show the performance of the proposed bidirectional motion estimation.
{"title":"Bidirectional Motion Estimation Approach Using Warping Mesh Combined to Frame Interpolation","authors":"V. Muñoz-Jiménez, A. Mokraoui-Zergainoh, J. Astruc","doi":"10.1109/ISSPIT.2008.4775707","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775707","url":null,"abstract":"This paper concentrates on the bidirectional motion estimation problem for applications at very low bit rate. The missing frames in the original video sequence are predicted by the decoder using only the received frames with not additional information. The same selected moving objects on each decoded frames, are initially meshed from quadrilateral blocks which are then deformed using specific warping functions. The positions of the mesh nodes are adapted to the object's edges in such a way that the reconstruction error is as small as possible. Afterwards, the position displacements of these nodes are used to predict those of the moving objects in the missing frames. Finally, the meshed objects are reconstructed thanks to the predicted nodes. The proposed approach is integrated in the H.264/AVC video coding standard. Simulation results present the performance of the proposed bidirectional motion estimation.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131974623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glottal Area Segmentation without Initialization using Gabor Filters
Pub Date: 2008-12-01. DOI: 10.1109/ISSPIT.2008.4775678
A. Méndez, B. García, I. Ruiz, I. Iturricha
This paper describes a method to automatically obtain the glottal space segmentation, without user initialization, from video sequences of healthy and pathological vocal folds captured with a laryngoscope. The segmentation is mainly based on a Gabor filter bank that analyzes the texture differences inside the vocal fold images, combined with other advanced image processing techniques to achieve the expected results. The authors emphasize that the proposed algorithm is independent of image resolution and zoom, although image quality depends on the specialist's experience with the instrumentation. Our proposal has worked correctly on all test videos in the database; it represents a great advance in design and, in the near future, a complete method for diagnosing vocal fold pathologies.
{"title":"Glottal Area Segmentation without Initialization using Gabor Filters","authors":"A. Méndez, B. García, I. Ruiz, I. Iturricha","doi":"10.1109/ISSPIT.2008.4775678","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775678","url":null,"abstract":"This paper describes a method to automatically obtain the glottal space segmentation without user initialization from healthy and pathological vocal folds video sequences captured by the laryngoscope. The segmentation is mainly based on a Gabor filter bank, studying the texture differences inside vocal folds images, and combining it with others advanced image processing techniques to achieve the expected results. The authors want to emphasize that the proposed algorithm is independent of images' resolution and zoom, but the quality of them depends on specialist experience with the instrumentation. Our proposal has worked correctly in all database test videos and it shows a great advance in design, and in the nearby future, a complete method to diagnose vocal folds pathologies.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123712636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}