NARX structures for non-invasive temperature estimation in non-homogeneous media
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447544
C. Teixeira, W. Pereira, A. Ruano, M. Ruano
The safe and effective application of thermal therapies is limited by the lack of precise non-invasive temperature estimators. Such estimators would enable correct power deposition in the region of interest through appropriate instrumentation control. In multi-layered media, the temperature should be estimated at each layer and especially at the interfaces, where significant temperature changes are expected during therapy. In this work, a non-linear autoregressive structure with exogenous inputs (NARX) was applied to non-invasively estimate temperature in a multi-layered (non-homogeneous) medium subjected to physiotherapeutic ultrasound. The NARX structure is composed of a static feed-forward radial basis function neural network (RBFNN), with external dynamics induced by its inputs. The NARX structure parameters were optimized by means of a multi-objective genetic algorithm. The best models attained a maximum absolute error below 0.5 °C (the proposed threshold in hyperthermia/diathermia) at both the interface and inner-layer points, at four radiation intensities. These models also have the low computational complexity desired for real-time applications. To the best of our knowledge, this is the first non-invasive estimation approach in multi-layered media using ultrasound for both heating and estimation.
{"title":"NARX structures for non-invasive temperature estimation in non-homogeneous media","authors":"C. Teixeira, W. Pereira, A. Ruano, M. Ruano","doi":"10.1109/WISP.2007.4447544","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447544","url":null,"abstract":"The safe and effective application of thermal therapies are limited by the existence of precise non-invasive temperature estimators. Such estimators would enable a correct power deposition on the region of interest by means of a correct instrumentation control. In multi-layered media, the temperature should be estimated at each layer and especially at the interfaces, where significant temperature changes should occur during therapy. In this work, a non-linear autoregressive structure with exogenous inputs (NARX) was applied to non-invasively estimate temperature in a multi-layered (non-homogeneous) medium, while submitted to physiotherapeutic ultrasound. The NARX structure is composed by a static feed-forward radial basis functions neural network (RBFNN), with external dynamics induced by its inputs. The NARX structure parameters were optimized by means of a multi-objective genetic algorithm. The best attained models reached a maximum absolute error inferior to 0.5degC (proposed threshold in hyperthermia/diathermia) at both the interface and inner layer points, at four radiation intensities. These models present also a small computational complexity as desired for real-time applications. To the best of ours knowledge this is the first non-invasive estimation approach in multi-layered media using ultrasound for both heating and estimation.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126012501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Improved MRI Segmentation Using Hierarchical Computational Intelligence Techniques and Textural Analysis of the Discrete Wavelet Transform Domain
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447513
Dimitrios Alexios Karras
This paper investigates a novel feature extraction approach to MRI segmentation based on identifying critical image edges through textural (co-occurrence matrix) analysis of the discrete wavelet transform (DWT) domain. The approach formulates the problem as a two-stage unsupervised classification task using a modified Kohonen self-organizing feature map (SOFM) together with independent component analysis (ICA). The main goal is to better identify abrupt textural image changes without increasing the noise in the resulting image. The methodology is based on novel discrete wavelet descriptors, combining the k-level 2-D discrete wavelet transform with co-occurrence matrix analysis applied to sliding windows that raster-scan the original image. The proposed two-stage classification scheme, which applies a modified vector-quantizing SOFM and ICA to these textural wavelet descriptors, is compared with a corresponding two-stage scheme involving PCA and the widely used SOFM trained with Kohonen's algorithm. The feasibility of the proposed approach is studied by applying it to the edge-structure segmentation problem of brain-slice MRI images. The promising experimental results show performance that also compares favourably with traditional Sobel edge detectors supported by standard contour-tracing methods.
{"title":"On Improved MRI Segmentation Using Hierarchical Computational Intelligence Techniques and Textural Analysis of the Discrete Wavelet Transform Domain","authors":"Dimitrios Alexios Karras","doi":"10.1109/WISP.2007.4447513","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447513","url":null,"abstract":"This paper investigates a novel feature extraction approach to MRI segmentation based on identifying the critical image edges by using textural (cooccurrence matrices) analysis of the discrete wavelet transform (DWT) domain. Furthermore, the presented approach is based on formulating the problem as a two-stage unsupervised classification task using a modified Kohonen's self organizing feature map (SOFM) along with independent component analysis (ICA). The main goal of such a research effort is to better identify abrupt textural image changes without increasing the presence of noise in the resulting image. The suggested methodology is based on novel discrete wavelet descriptors involving the discrete k-level 2-D wavelet transform and cooccurrence matrices analysis applied to sliding windows raster scanning the original image. The proposed two-stage classification scheme applied to such textural wavelet descriptors and using a modified vector quantizing self-organizing feature map (SOFM) and ICA analysis is compared with a corresponding two-stage scheme involving PCA analysis and the widely used SOFM, trained with Kohonen's algorithm. The feasibility of this novel two-stage proposed approach is studied by applying it to the edge structure segmentation problem of brain slice MRI images. The promising results presented in the experimental study illustrate a performance favourably compared, also, to that of traditional Sobel edge detectors supported by usual contour tracing methods.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130099454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust fault detection using consistency techniques for uncertainty handling
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447611
E. Gelso, S. M. Castillo, J. Armengol
The practical performance of analytical redundancy for fault detection and diagnosis is often degraded by uncertainties not only in the system model but also in the measurements. In this paper, the fault detection problem is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs) while dealing with uncertain measurements and parameters. The work presented here shows that consistency techniques can increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
{"title":"Robust fault detection using consistency techniques for uncertainty handling","authors":"E. Gelso, S. M. Castillo, J. Armengol","doi":"10.1109/WISP.2007.4447611","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447611","url":null,"abstract":"Often practical performance of analytical redundancy for fault detection and diagnosis is decreased by uncertainties prevailing not only in the system model, but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a big number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are then shown to be particularly efficient to check the consistency of the analytical redundancy relations (ARRs), dealing with uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can be used to increase the performance of a robust fault detection tool, which is based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"68 14","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134412104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Computation Intelligence Techniques for Energy Load and Price Forecast in some States of USA
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447559
J. C. Mourão, A. Ruano
The purpose of this paper is to forecast the load and the price of electricity 49 hours ahead. To accomplish this, computational intelligence techniques were used, specifically artificial neural networks and genetic algorithms. The neural networks employed are RBFs (radial basis function networks), fully connected and with a single hidden layer. The genetic algorithm used was MOGA (multi-objective genetic algorithm), which, as the name indicates, minimizes not a single objective but several. The neural networks are trained for one-step-ahead prediction, and their output is fed back until the 49-hour horizon is reached. MOGA is used for input selection and topology determination. The data, kindly provided by Auburn University, USA, consist of real data from several North American states.
{"title":"Application of Computation Intelligence Techniques for Energy Load and Price Forecast in some States of USA","authors":"J. C. Mourão, A. Ruano","doi":"10.1109/WISP.2007.4447559","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447559","url":null,"abstract":"The purpose of this paper is to forecast the load and the price of electricity, 49 hours ahead. To accomplish these goals, computational intelligence techniques were used, specifically artificial neural networks and genetic algorithms. The neural networks employed are RBFs (radial basis functions), fully connected and with just one hidden layer. The genetic algorithm used was MOGA (multiple objective genetic algorithm), which, as the name indicates, minimizes not a single objective but several. The neural networks are trained for one step ahead, and its output is feedback until 49 hours are calculated. MOGA is used for the input selection and for topology determination. The data used was kindly given by the University of Auburn, USA, and refers to real data from some North-American states.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"33 7-8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131750113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Evolution of 3D Curves for Quality Control
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447556
H. Martinsson, F. Gaspard, A. Bartoli, J. Lavest
In the area of quality control by vision, the reconstruction of 3D curves is a convenient tool to detect and quantify possible anomalies. Whereas other methods exist to describe surface elements, the contour approach proves useful for reconstructing the object close to discontinuities such as holes or edges. We present an algorithm for the reconstruction of 3D parametric curves, based on a fixed-complexity model embedded in an iterative framework of control point insertion. The successive increase of degrees of freedom provides good precision while avoiding over-parameterization of the model. The curve is reconstructed by adapting the projections of a 3D NURBS snake to the observed curves in a multi-view setting. The sampling of the curve is adjusted as a function of the local visibility in the different views. The optimization of the curve is performed with respect to the control points using a gradient-based energy minimization method, whereas the insertion procedure relies on computing the distance from the curve to the image edges.
{"title":"Adaptive Evolution of 3D Curves for Quality Control","authors":"H. Martinsson, F. Gaspard, A. Bartoli, J. Lavest","doi":"10.1109/WISP.2007.4447556","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447556","url":null,"abstract":"In the area of quality control by vision, the reconstruction of 3D curves is a convenient tool to detect and quantify possible anomalies. Whereas other methods exist that allow us to describe surface elements, the contour approach will prove to be useful to reconstruct the object close to discontinuities, such as holes or edges. We present an algorithm for the reconstruction of 3D parametric curves, based on a fixed complexity model, embedded in an iterative framework of control point insertion. The successive increase of degrees of freedom provides for a good precision while avoiding to over-parameterize the model. The curve is reconstructed by adapting the projections of a 3D NURBS snake to the observed curves in a multi-view setting. The sampling of the curve is adjusted as a function of the local visibility in the different views. The optimization of the curve is performed with respect to the control points using an gradient-based energy minimization method, whereas the insertion procedure relies on the computation of the distance from the curve to the image edges.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130713761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Calibration Procedure for Laser Doppler Velocimetry in Acoustics
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447618
A. Le Duff, G. Plantier, J. Valière, B. Gazengel
This paper proposes a digital calibration procedure for compensating errors in the output signals of analog quadrature demodulation (QD) hardware. This kind of device is used for laser Doppler velocimetry (LDV) measurements in acoustics. The method is based on a maximum likelihood estimator (MLE) of the amplitudes, the voltage offsets, and the phase shift of the two quadrature signals. The technique provides a simple and effective way to calibrate the QD stage.
{"title":"Digital Calibration Procedure for Laser Doppler Velocimetry in Acoustics","authors":"A. Le Duff, G. Plantier, J. Valière, B. Gazengel","doi":"10.1109/WISP.2007.4447618","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447618","url":null,"abstract":"This paper propose a digital calibration procedure for errors compensation of the output signals of an analogical quadrature demodulation (QD) hardware. This kind of device is used for laser Doppler velocimetry (LDV) measurements in acoustics. The method developed is based on the use of a maximum likelihood estimator (MLE) in order to estimate the amplitudes, the tension offsets, and the phase shift of two quadrature signals. Such a technique provides a good and a simple way for QD calibration.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132693031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Approach Technique for Dynamical Infrared/Visible Images Fusion
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447632
O. Demuynck, J. Lázaro
In this paper, we present a novel approach to real-time dynamic infrared/visible image fusion. The advantages of the proposed system over previous studies are its great flexibility and probably its lower cost, since it employs only an infrared camera and a conventional visible camera. Many outdoor applications, such as security and surveillance, need this kind of alternative to complement the visible information in situations where the usual computer vision algorithms are no longer robust, such as a sudden illumination change from a bright scene to near darkness, or a foggy or smoke-filled environment. The further image processing steps, which depend on the application, are not detailed in this paper; we focus on how to register both images into a single fused image and render it so that a user can easily observe a warm area. We describe each step of this processing in the following sections and show the results of the image fusion.
{"title":"An Efficient Approach Technique for Dynamical Infrared/Visible Images Fusion","authors":"O. Demuynck, J. Lázaro","doi":"10.1109/WISP.2007.4447632","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447632","url":null,"abstract":"In this paper, we present a novel approach for a real time dynamical infrared/visible images fusion. The advantage of the proposed system compared with previous studies are its huge flexibility and probably lowest cost, since that it only employ both an infrared and a conventional visible cameras. A lot of external applications like security, surveillance..., etc. need this kind of alternative to complete the visible information in some cases like a sudden illumination variation from a bright image to an almost dark situation, or a foggy or smoked environment, where usual computer vision algorithm are not robust anymore. In all those cases, the further image treatments depending on the application are not detailed in this paper, since we just focus this study on the way to make both images fitting in a resulting image and to render the image for a user to easily observe a warm area. We describe in the following paragraphs each step of this treatment, and show the results of this image fusion.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133787061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft Sensor design for a Sulfur Recovery Unit using Genetic Algorithms
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447583
A. Di Bella, L. Fortuna, S. Graziani, G. Napoli, M. Xibilia
This paper describes a Soft Sensor design strategy for an industrial process, via a neural NMA (nonlinear moving average) model. In detail, the hydrogen sulphide (H2S) percentage in the tail stream of a Sulfur Recovery Unit (SRU) of a refinery located in Sicily, Italy, is estimated by a Soft Sensor designed to replace the online analyzer during maintenance operations. A general design strategy based on the automatic selection of the regressors of an NMA model is proposed. It relies on the minimization of the Lipschitz numbers by a genetic algorithm (GA) approach. A comparative analysis with an empirical model, developed on the basis of suggestions from plant experts, is included to show the validity of the proposed procedure.
{"title":"Soft Sensor design for a Sulfur Recovery Unit using Genetic Algorithms","authors":"A. Di Bella, L. Fortuna, S. Graziani, G. Napoli, M. Xibilia","doi":"10.1109/WISP.2007.4447583","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447583","url":null,"abstract":"In the paper the Soft Sensor design strategy for an industrial process, via neural NMA model, is described. In details, the hydrogen sulphide (H2S percentage) in the tail stream of a Sulfur Recovery Unit (SRU) of a refinery located in Sicily, Italy, is estimated by a Soft Sensor, that was designed to replace the online analyzer during maintenance operations. A general design strategy, based on the automatic selection of regressors of a NMA model is proposed. It is based on the minimization of the Lipschitz numbers by a Genetic Algorithms (GA) approach. A comparative analysis with an empirical model, developed on the basis of suggestions given by plant experts, is included to show the validity of the proposed procedure.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114623026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metrological Characterization of Traffic Monitoring Systems
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447572
G. D. Leo, A. Pietrosanto, P. Sommella
The most advanced traffic management techniques base their efficiency on reliable traffic monitoring systems. In such an important and emerging field, where many technical solutions have been proposed to measure and collect primary traffic data, a common methodology to evaluate and compare their performance is still missing. In this paper, the problem of the metrological characterization of traffic monitoring systems is treated extensively, drawing inspiration from the ISO Guide to the Expression of Uncertainty in Measurement. Experimental results are reported to demonstrate the applicability of the suggested procedure to several commercial instruments.
{"title":"Metrological Characterization of Traffic Monitoring Systems","authors":"G. D. Leo, A. Pietrosanto, P. Sommella","doi":"10.1109/WISP.2007.4447572","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447572","url":null,"abstract":"All the more advanced traffic management techniques base their efficiency on reliable traffic monitoring systems. In a so important and emerging field, where the technical solutions proposed to measure and collect traffic primary data are not few, a unique methodology to evaluate and compare their performance still misses. In the paper the problem of the metrological characterization of traffic monitoring systems is widely treated, founding inspiration on the ISO Guide to Uncertainty in Measurement. Experimental results are reported to demonstrate the applicability of the suggested procedure to some commercial instruments.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117170454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing a Kalman Filter and a Particle Filter in a Multiple Objects Tracking Application
Pub Date: 2007-10-01 | DOI: 10.1109/WISP.2007.4447520
M. Marrón, J.C. Garcia, M. Sotelo, M. Cabello, D. Pizarro, F. Huerta, J. Cerro
Two of the most important solutions for position estimation are compared in this paper, in order to test their efficiency in a multi-tracking application in an unstructured, complex environment. A particle filter is extended with a clustering process in order to track a variable number of objects. The other approach uses a Kalman filter with an association algorithm for each object to be tracked. Both algorithms are described in the paper, together with the results obtained from their real-time execution in the mentioned application. Finally, conclusions drawn from this comparison are highlighted.
{"title":"Comparing a Kalman Filter and a Particle Filter in a Multiple Objects Tracking Application","authors":"M. Marrón, J.C. Garcia, M. Sotelo, M. Cabello, D. Pizarro, F. Huerta, J. Cerro","doi":"10.1109/WISP.2007.4447520","DOIUrl":"https://doi.org/10.1109/WISP.2007.4447520","url":null,"abstract":"Two of the most important solutions in position estimation are compared, in this paper, in order to test their efficiency in a multi-tracking application in an unstructured and complex environment. A particle filter is extended and adapted with a clustering process in order to track a variable number of objects. The other approach is to use a Kalman filter with an association algorithm for each of the objects to track. Both algorithms are described in the paper and the results obtained with their real-time execution in the mentioned application are shown. Finally interesting conclusions extracted from this comparison are remarked at the end.","PeriodicalId":164902,"journal":{"name":"2007 IEEE International Symposium on Intelligent Signal Processing","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121902766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}