Singular Lorenz Measures Method for seizure detection using KNN-Scatter Search optimization algorithm
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422314
Morteza Behnam, H. Pourghassem
Offline algorithms for detecting intractable epileptic seizures in children play a vital role in planning surgical intervention. In this paper, after a preprocessing and windowing procedure, the EEG signal is decomposed into five brain rhythms by the Discrete Wavelet Transform (DWT). These rhythms are formed into 2D patterns using an upsampling scheme. We propose a novel feature extraction scenario called the Singular Lorenz Measures Method (SLMM). In our method, Chan's Singular Value Decomposition (Chan's SVD), carried out in two phases comprising QR factorization and the Golub-Kahan-Reinsch algorithm, yields the singular values, i.e., the energies of the rhythm patterns on an orthogonal space, for all windows. The Lorenz curve, a depiction of the Cumulative Distribution Function (CDF) of the singular value set, is then computed. Based on relative inequality measures, Lorenz-inconsistent and Lorenz-consistent features are extracted. Moreover, a hybrid of K-Nearest Neighbor (KNN) and Scatter Search (SS) is applied as the optimization algorithm. The Multi-Layer Perceptron (MLP) neural network is also optimized with respect to its hidden layer and learning algorithm. The attributes selected using the optimized MLP classifier are employed to recognize seizure attacks. Ultimately, seizure and non-seizure signals are classified in offline mode with an accuracy rate of 90.0% and an MSE variance of 1.47×10⁻⁴.
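As a rough illustration of the SLMM feature idea, the sketch below computes the singular values of one 2D rhythm pattern and derives a Lorenz-curve inequality measure from them. NumPy's general-purpose SVD stands in for the two-phase Chan's SVD, and the Gini coefficient is used as one representative relative inequality measure; the paper's exact feature set is not specified in the abstract.

```python
import numpy as np

def lorenz_features(pattern_2d):
    """Singular values of a 2D rhythm pattern and a Lorenz-curve measure.
    np.linalg.svd stands in for Chan's two-phase SVD (QR factorization
    followed by the Golub-Kahan-Reinsch algorithm)."""
    s = np.linalg.svd(pattern_2d, compute_uv=False)  # energies on the orthogonal space
    s = np.sort(s)                                   # ascending order for the Lorenz curve
    lorenz = np.cumsum(s) / s.sum()                  # cumulative share of total energy
    p = np.arange(1, s.size + 1) / s.size            # cumulative share of components
    gini = 1.0 - 2.0 * np.trapz(lorenz, p)           # inequality of the spectrum
    return lorenz, gini

rng = np.random.default_rng(0)
pattern = rng.standard_normal((32, 32))              # stand-in for an upsampled rhythm pattern
curve, gini = lorenz_features(pattern)
print(f"Gini of singular-value spectrum: {gini:.3f}")
```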
{"title":"Singular Lorenz Measures Method for seizure detection using KNN-Scatter Search optimization algorithm","authors":"Morteza Behnam, H. Pourghassem","doi":"10.1109/SPIS.2015.7422314","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422314","url":null,"abstract":"Offline algorithm to detect the intractable epileptic seizure of children has vital role for surgical intervention. In this paper, after preprocessing and windowing procedure by Discrete Wavelet Transform (DWT), EEG signal is decomposed to five brain rhythms. These rhythms are formed to 2D pattern by upsampling idea. We have proposed a novel scenario for feature extraction that is called Singular Lorenz Measures Method (SLMM). In our method, by Chan's Singular Value Decomposition (Chan's SVD) in two phases including of QR factorization and Golub-Kahan-Reinsch algorithm, the singular values as energies of the signal on orthogonal space for pattern of rhythms in all windows are obtained. The Lorenz curve as a depiction of Cumulative Distribution Function (CDF) of singular values set is computed. With regard to the relative inequality measures, the Lorenz inconsistent and consistent features are extracted. Moreover, the hybrid approach of K-Nearest Neighbor (KNN) and Scatter Search (SS) is applied as optimization algorithm. The Multi-Layer Perceptron (MLP) neural network is also optimized on the hidden layer and learning algorithm. The optimal selected attributes using the optimized MLP classifier are employed to recognize the seizure attack. Ultimately, the seizure and non-seizure signals are classified in offline mode with accuracy rate of 90.0% and variance of MSE 1.47×10-4.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126088544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entropy-based fuzzy C-means with weighted hue and intensity for color image segmentation
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422320
E. Rajaby, S. Ahadi, H. Aghaeinia
Image segmentation is the task of grouping pixels based on similarity. In this paper, the problem of segmenting color images, especially noisy ones, is studied. To improve the speed of segmentation and avoid redundant calculations, our method uses only two color components, hue and intensity, which are chosen rationally. These two components are combined in a specially defined cost function, in which the impact of each component is controlled by a weight (the hue weight and the intensity weight). These weights focus the clustering on the more informative color component, improving both the speed and the accuracy of segmentation. We also use entropy maximization in the core of the cost function to improve segmentation performance. Furthermore, we suggest a fast initialization scheme based on peak finding in the two-dimensional histogram that prevents Fuzzy C-means from converging to a poor local minimum. Our experiments indicate that the proposed method outperforms several related state-of-the-art methods.
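The core of the cost function can be pictured with a small sketch: an entropy-regularized fuzzy clustering on (hue, intensity) features in which a hue weight and an intensity weight scale each component's contribution to the distance. The weight values, the entropy weight `lam`, and the random initialization below are illustrative assumptions; the paper uses a histogram-peak initialization and its own cost function.

```python
import numpy as np

def entropy_weighted_fcm(hue, inten, k=3, w_h=0.6, w_i=0.4, lam=0.1, iters=50, seed=0):
    """Entropy-regularized fuzzy clustering on (hue, intensity) features.
    Circular hue distance and the paper's exact cost are omitted for brevity."""
    x = np.stack([hue, inten], axis=1)                     # N x 2 feature matrix
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), k, replace=False)]            # illustrative random init
    w = np.array([w_h, w_i])                               # per-component weights
    for _ in range(iters):
        d = (w * (x[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
        u = np.exp(-(d - d.min(axis=1, keepdims=True)) / lam)  # entropy-based memberships
        u /= u.sum(axis=1, keepdims=True)
        c = (u.T @ x) / u.sum(axis=0)[:, None]             # membership-weighted centers
    return u.argmax(axis=1), c
```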
{"title":"Entropy-based fuzzy C-means with weighted hue and intensity for color image segmentation","authors":"E. Rajaby, S. Ahadi, H. Aghaeinia","doi":"10.1109/SPIS.2015.7422320","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422320","url":null,"abstract":"Image segmentation is a task of grouping pixels based on similarity. In this paper the problem of segmentation of color images, especially noisy images, is studied. In order to improve the speed of segmentation and avoid redundant calculations, our method only uses two color components, hue and intensity, which are chosen rationally. These two color components are combined in a specially defined cost function. The impact of each color component (hue and intensity) is controlled by weights (called hue weight and intensity weight). These weights lead to focusing on the color component that is more informative and consequently the speed and accuracy of segmentation is improved. We have also used entropy maximization in the core of the cost function to improve the performance of segmentation. Furthermore we have suggested a fast initialization scheme based on peak finding of two dimensional histogram that prevents Fuzzy C-means from converging to a local minimum. Our experiments indicate that the proposed method performs superior to some related state-of-the-art methods.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128918108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wavelet image denoising based spatial noise estimation
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422317
Souad Benabdelkader, Ouarda Soltani
The classical wavelet denoising scheme estimates the noise level in the wavelet domain using only the finest detail subband. In this paper, we present a hybrid method for wavelet image denoising in which the standard deviation of the noise is estimated over all image pixels in the spatial domain, within an adaptive edge-preservation scheme. This estimate is then used to calculate the threshold for wavelet coefficient shrinkage.
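A minimal sketch of the pipeline follows, with Immerkaer's fast spatial-domain noise estimator standing in for the paper's adaptive edge-preserving estimation (which additionally protects edge pixels), and the universal threshold used for shrinkage; the wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import convolve2d

def sigma_spatial(img):
    """Immerkaer-style noise estimate over all pixels in the spatial domain;
    the paper's adaptive scheme additionally excludes edge regions."""
    k = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], float)
    r = convolve2d(img, k, mode='valid')
    return np.sqrt(np.pi / 2.0) * np.abs(r).sum() / (6.0 * r.shape[0] * r.shape[1])

def denoise(img, wavelet='db4', level=2):
    thr = sigma_spatial(img) * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [tuple(pywt.threshold(d, thr, 'soft') for d in lvl)
                            for lvl in coeffs[1:]]
    return pywt.waverec2(shrunk, wavelet)
```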
{"title":"Wavelet image denoising based spatial noise estimation","authors":"Souad Benabdelkader, Ouarda Soltani","doi":"10.1109/SPIS.2015.7422317","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422317","url":null,"abstract":"The classical wavelet denoising scheme estimates the noise level in the wavelet domain using only the upper detail subband. In this paper, we present a hybrid method for wavelet image denoising in which the standard deviation of the noise is estimated on the entire image pixels in the spatial domain within an adaptive edge preservation scheme. Thereafter, that estimation is used to calculate the threshold for wavelet coefficients shrinkage.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133397633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive single image method for super resolution
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422334
A. Mokari, A. Ahmadyfard
In this paper, we propose an adaptive method for single-image super resolution that exploits self-similarity. Using the similarity between patches of the input image and a downsampled version of it, we create the super-resolution image. In the proposed method, we first segment the input image. For each segment with significant intensity variance, we increase the overlap between patches and reduce the patch size; conversely, for segments with low detail, we decrease the overlap and increase the patch size. Experimental results show that the proposed method is significantly faster than existing methods while its performance in terms of PSNR remains comparable.
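The adaptation rule reduces to a small decision per segment; the sketch below shows one plausible parameterization. The variance threshold and the concrete patch/overlap values are assumptions, since the abstract does not fix them.

```python
import numpy as np

def patch_params(segment_pixels, var_thresh=300.0):
    """Variance-driven patch size and overlap for one image segment.
    Illustrative numbers; the paper adapts both quantities but does not
    publish these values in the abstract."""
    if np.var(segment_pixels) > var_thresh:     # detailed segment
        return {"patch_size": 5, "overlap": 3}  # smaller patches, heavier overlap
    return {"patch_size": 9, "overlap": 1}      # smooth segment: larger patches
```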
{"title":"An adaptive single image method for super resolution","authors":"A. Mokari, A. Ahmadyfard","doi":"10.1109/SPIS.2015.7422334","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422334","url":null,"abstract":"In this paper we propose an adaptive method for single image super resolution by exploiting the self-similarity. By using similarity between patches of input image and a down sampled version of the input image, we create super-resolution image. In the proposed method, first we segment input image. For each segment if variance of intensity is significant, we increase overlap between patches and reduce the patch size. On the contrary, for image segments with low detail we decrease the overlap between patches and increase the patch size. The experimental result showed, the proposed method is significantly faster than the existing methods whereas the performance in terms of PSNR criterions is comparable with the existing methods.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115364087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A heuristic method to bias protein's primary sequence in protein structure prediction
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422308
N. Mozayani, Hossein Parineh
Protein Structure Prediction (PSP) is one of the most studied topics in bioinformatics. Given the intrinsic hardness of the problem, several computational methods, mainly based on artificial intelligence, have been proposed over the last decades. In this paper, we break the main PSP process into two steps. The first step is biasing the sequence, i.e., very quickly producing a conformation whose energy is considerably better than that of the primary sequence, which has zero energy. The second step, studied in a companion paper, feeds this biased sequence to another algorithm to find the best possible conformation. For the first step, we developed a new heuristic method to find a low-energy structure of a protein. The main concept of this method is rule extraction from previously determined conformations. We call this method the Fast Bias Algorithm (FBA) because it provides a modified structure with better energy than the primary (linear) structure in a remarkably short time compared to the time needed for the whole process. The method was implemented in NetLogo. We have tested this algorithm on several benchmark sequences ranging from 20- to 50-mers in the two-dimensional Hydrophobic-Hydrophilic (HP) lattice model. Compared with other algorithms, our method reaches up to 62% of the energy of their best conformations in less than 2% of their runtime.
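The quantity FBA improves is the standard 2D HP-model energy: -1 for every pair of hydrophobic (H) residues that are lattice neighbors but not adjacent in the sequence. The sketch below computes it for a self-avoiding conformation; it defines the score being optimized, not the FBA rule-extraction step itself.

```python
def hp_energy(seq, coords):
    """2D HP-lattice energy: -1 per H-H pair that are lattice neighbors
    but not consecutive along the chain."""
    pos = {c: i for i, c in enumerate(coords)}     # lattice site -> residue index
    e = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):        # count each undirected contact once
            j = pos.get(nb)
            if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                e -= 1
    return e

# "HHHH" folded into a unit square: one non-bonded H-H contact, energy -1
print(hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))
```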
{"title":"A heuristic method to bias protein's primary sequence in protein structure prediction","authors":"N. Mozayani, Hossein Parineh","doi":"10.1109/SPIS.2015.7422308","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422308","url":null,"abstract":"Protein Structure Prediction (PSP) is one of the most studied topics in the field of bioinformatics. Regarding the intrinsic hardness of the problem, during last decades several computational methods mainly based on artificial intelligence have been proposed to approach the problem. In this paper we broke the main process of PSP into two steps. The first step is making a bias in the sequence, i.e. providing a very fast yet considerably better energy of conformation compared to the primary sequence with zero energy. The second step, which is studied in the other essay, is feeding this biased sequence to another algorithm to find the best possible conformation. For the first step, we developed a new heuristic method to find a low-energy structure of a protein. The main concept of this method is based on rule extraction from previously determined conformations. We'll call this method Fast-Bias-Algorithm (FBA) mainly because it provides a modified structure with better energy from a primary (linear) structure of a protein in a remarkably short time, comparing to the time needed for the whole process. This method was implemented in Netlogo. We have tested this algorithm on several benchmark sequences ranging from 20 to 50-mers in two dimensional Hydrophobic Hydrophilic lattice models. Comparing with the result of the other algorithms, our method in less than 2% of their time reached up to 62% of the energy of their best conformation.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123696539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS parameters analysis in VoIP network using adaptive quality improvement
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422315
M. Behdadfar, Ehsan Faghihi, M. E. Sadeghi
Managing VoIP service using QoS analysis is vital for obtaining the desired voice quality. Quality improvement and rational resource utilization are two factors that determine VoIP service functionality, and which one outweighs the other is a permanent challenge affecting developers' decisions. Hence, different approaches are regularly introduced that affect VoIP service quality and resource utilization. This paper proposes a new approach that improves VoIP quality while utilizing the fewest possible resources: adaptively changing packet sizes and codecs on the sender side to achieve acceptable quality on the receiver side. The results show the success of the proposed algorithm, and its positive impact on QoS parameters is evaluated.
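One way to picture the sender-side adaptation is a feedback loop over a codec ladder: under high loss, step down in bitrate; when conditions are clean, step back up. The codec table, loss thresholds, and packetization bounds below are illustrative assumptions, not the paper's controller.

```python
# Illustrative codec ladder: (name, bitrate in kbit/s), best quality first.
CODECS = [("G.711", 64), ("G.726", 32), ("G.729", 8)]

def adapt(codec_idx, ptime_ms, loss_rate):
    """Toy sender-side step: trade bitrate and packet size against measured loss."""
    if loss_rate > 0.05 and codec_idx < len(CODECS) - 1:
        return codec_idx + 1, min(ptime_ms * 2, 60)   # lower bitrate, less overhead
    if loss_rate < 0.01 and codec_idx > 0:
        return codec_idx - 1, max(ptime_ms // 2, 10)  # spend headroom on quality
    return codec_idx, ptime_ms                        # hold steady in between
```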
{"title":"QoS parameters analysis in VoIP network using adaptive quality improvement","authors":"M. Behdadfar, Ehsan Faghihi, M. E. Sadeghi","doi":"10.1109/SPIS.2015.7422315","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422315","url":null,"abstract":"Managing VoIP service using QoS analysis is a vital issue for obtaining voice desired quality. Quality improvement and rational resource utilization are two parameters involved in determining VoIP service functionality. Which parameter outweighs the other one is a permanent challenge affecting developers' decisions. Hence every now and then different approaches are introduced impacting on VoIP service quality and resource utilization. This paper proposes a new approach to improve VoIP quality utilizing the least possible resource. Changing packet sizes and codecs adaptively from sender side lead to acquire the acceptable quality in receiver side. The results show how successful the proposed algorithm is and its positive impacts on QoS parameters are evaluated.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125656801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved subspace-based speech enhancement using a novel updating approach for noise correlation matrix
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422318
N. Faraji, S. Ahadi
In this paper, a new approach is presented that extends subspace-based speech enhancement to non-stationary noise. The new method updates the noise correlation matrix segment by segment, assuming that only the eigenvalues of the matrix vary with time. In other words, only the varying loudness of the noise signal is considered, as observed in the modulated-white-noise case, where the eigenvectors are invariant over time. The proposed scheme for updating the noise correlation matrix is embedded in the framework of a soft-model-order-based subspace approach to speech enhancement. Experiments show significant improvement for different types of non-stationary noise.
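The updating idea admits a compact sketch: fix the eigenvectors from an initial noise estimate, then, for each new segment, re-estimate only the eigenvalues as the energies of that segment's correlation along the fixed eigen-directions. Frame matrices of shape (num_frames, dim) are assumed; the paper's soft-model-order machinery is not reproduced here.

```python
import numpy as np

def init_eigenvectors(noise_frames):
    """Eigenvectors of the initial noise correlation matrix (kept fixed)."""
    R0 = noise_frames.T @ noise_frames / len(noise_frames)
    _, V = np.linalg.eigh(R0)
    return V

def update_noise_correlation(V, segment_frames):
    """Per-segment update with frozen eigenvectors: only the eigenvalues
    (the diagonal of V^T R_seg V) track the noise's changing loudness."""
    R_seg = segment_frames.T @ segment_frames / len(segment_frames)
    lam = np.diag(V.T @ R_seg @ V)          # energies along fixed eigen-directions
    return V @ np.diag(lam) @ V.T           # rebuilt noise correlation matrix
```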
{"title":"Improved subspace-based speech enhancement using a novel updating approach for noise correlation matrix","authors":"N. Faraji, S. Ahadi","doi":"10.1109/SPIS.2015.7422318","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422318","url":null,"abstract":"In this paper a new approach is presented to develop the subspace-based speech enhancement for non-stationary noise cases. The new method updates the noise correlation matrix segment-by-segment assuming that only the eigenvalues of the matrix are varying with time. In other words, the characteristic of varying loudness of noise signals is just considered, as it is observed in the modulated white noise case where the eigenvectors are invariant over time. The proposed scheme for updating noise correlation matrix is embedded in the framework of a soft model order based subspace approach for speech enhancement. The experiments show significant improvement in different non-stationary noise types.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132306665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Salient object detection via global contrast graph
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422332
F. Nouri, K. Kazemi, H. Danyali
In this paper, we propose an unsupervised bottom-up method that formulates the salient object detection problem as finding the salient vertices of a graph. Global contrast is extracted in a novel graph-based framework to localize salient objects, and saliency values are assigned to regions according to their node degrees on the graph. The proposed method has been applied to the SED2 dataset. Qualitative and quantitative evaluations show that it detects salient objects appropriately in comparison with five state-of-the-art saliency models.
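A minimal reading of "saliency as node degree on a global contrast graph": regions are nodes of a complete graph, each edge is weighted by the color contrast between its endpoints scaled by region size, and a region's saliency is its weighted degree. The size weighting and normalization below are assumptions.

```python
import numpy as np

def region_saliency(mean_colors, sizes):
    """Saliency of each region as its weighted degree on a fully connected
    contrast graph: sum over j of size_j * ||c_i - c_j||."""
    c = np.asarray(mean_colors, float)      # R x 3 mean color per region
    w = np.asarray(sizes, float)            # region sizes as edge weights
    contrast = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)  # R x R
    degree = contrast @ w                   # weighted node degree per region
    return degree / degree.max()            # normalize saliency to [0, 1]
```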
{"title":"Salient object detection via global contrast graph","authors":"F. Nouri, K. Kazemi, H. Danyali","doi":"10.1109/SPIS.2015.7422332","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422332","url":null,"abstract":"In this paper, we propose an unsupervised bottom-up method which formulates salient object detection problem as finding salient vertices of a graph. Global contrast is extracted in a novel graph-based framework to determine localization of salient objects. Saliency values are assigned to regions in terms of nodes degrees on graph. The proposed method has been applied on SED2 dataset. The qualitative and quantitative evaluation of the proposed method show that it can detect the salient objects appropriately in comparison with 5 state-of-art saliency models.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133456085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the performance of intelligent stock trading systems by using a high level representation for the inputs
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422304
Mojtaba Azimifar, Babak Nadjar Araabi, Hadi Moradi
Intelligent stock trading systems use soft computing techniques to forecast the trend of a stock's price, but the so-called noise in the market usually results in overtrading and loss of profit. To reduce the effect of noise on trading decisions, high-level representations can be used for the outputs of the trading system. However, the technical indicators that act as the system's inputs suffer from these short-term irregularities as well. This paper suggests a high-level representation for the technical indicators to match the level of information in the outputs. Digital low-pass filters are carefully designed to remove the transient fluctuations of the technical indicators without losing too much information. Several experiments on different stocks in the Tehran Stock Exchange show a major improvement in the performance of intelligent stock trading systems.
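As a sketch of the filtering step, the snippet below applies a zero-phase Butterworth low-pass to an indicator series with SciPy; the cutoff and order are illustrative, since the paper designs its filters per indicator. Note that filtfilt is non-causal, which suits offline experiments; a live trading system would need a causal design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_indicator(series, cutoff=0.05, order=4):
    """Zero-phase Butterworth low-pass for a technical-indicator series.
    cutoff is a fraction of the Nyquist rate; both values are illustrative."""
    b, a = butter(order, cutoff)        # digital low-pass filter design
    return filtfilt(b, a, series)       # forward-backward filtering: no phase lag

rsi = np.random.default_rng(1).uniform(30, 70, 500)  # stand-in RSI series
trend = smooth_indicator(rsi)
```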
{"title":"Improving the performance of intelligent stock trading systems by using a high level representation for the inputs","authors":"Mojtaba Azimifar, Babak Nadjar Araabi, Hadi Moradi","doi":"10.1109/SPIS.2015.7422304","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422304","url":null,"abstract":"Intelligent stock trading systems use soft computing techniques for forecasting the trend of the stock price. But the so-called noise in the market usually results in overtrading and loss of profit. In order to reduce the effect of noise on the trading decisions, high level representations can be used for the output of the trading systems. But the technical indicators which act as the inputs of the trading system, suffer from these short term irregularities as well. This paper suggests a high level representation for the technical indicators to match the level of information in the outputs. Digital low pass filters are carefully designed to remove the transient fluctuations of the technical indicators without losing too much information. Several experiments on different stocks in Tehran Stock Exchange shows a major improvement in the performance of the intelligent stock trading systems.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122491341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel secure turbo code for security enhancement in physical layer
Pub Date: 2015-12-01 | DOI: 10.1109/SPIS.2015.7422336
A. Motamedi, Mohsen Najafi, N. Erami
Turbo codes have been an important subject in coding theory since 1993. They achieve a low Bit Error Rate (BER), but decoding complexity and delay are major challenges. Moreover, given the complexity and delay of separate coding and encryption blocks, combining the two processes can guarantee both the security and the reliability of the communication system. In this paper, a secure decoding algorithm running in parallel on General-Purpose Graphics Processing Units (GPGPUs) is proposed. This is the first prototype of a fast, parallel Joint Channel-Security Coding (JCSC) system. Despite the encryption process, the algorithm maintains the desired BER and increases decoding speed. We consider several techniques for parallelism: (1) distributing the decoding load of a codeword among multiple cores, (2) decoding several codewords simultaneously, and (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimization to increase decoding speed: (1) improved memory access, and (2) the use of new GPU features such as concurrent kernel execution and advanced atomics to compensate for buffering latency.
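Technique (2), decoding several codewords simultaneously, can be pictured with a CPU analogy: the paper maps this to CUDA thread blocks rather than Python processes, and decode_codeword below is a placeholder for a real iterated max-log-MAP turbo decode.

```python
from concurrent.futures import ProcessPoolExecutor

def decode_codeword(llrs):
    """Placeholder for one turbo decode (e.g., iterated max-log-MAP);
    here just a hard decision on the channel LLRs."""
    return [1 if llr < 0 else 0 for llr in llrs]

def decode_batch(batch):
    """Several codewords decoded simultaneously -- on the GPGPU each
    codeword maps to a thread block instead of a process."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(decode_codeword, batch))

if __name__ == "__main__":
    print(decode_batch([[0.4, -1.2, 0.3], [-0.5, 0.9, -2.0]]))
```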
{"title":"Parallel secure turbo code for security enhancement in physical layer","authors":"A. Motamedi, Mohsen Najafi, N. Erami","doi":"10.1109/SPIS.2015.7422336","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422336","url":null,"abstract":"Turbo code has been one of the important subjects in coding theory since 1993. This code has low Bit Error Rate (BER) but decoding complexity and delay are big challenges. On the other hand, considering the complexity and delay of separate blocks for coding and encryption, if these processes are combined, the security and reliability of communication system are guaranteed. In this paper a secure decoding algorithm in parallel on General-Purpose Graphics Processing Units (GPGPU) is proposed. This is the first prototype of a fast and parallel Joint Channel-Security Coding (JCSC) system. Despite of encryption process, this algorithm maintains desired BER and increases decoding speed. We considered several techniques for parallelism: (1) distribute decoding load of a code word between multiple cores, (2) simultaneous decoding of several code words, (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimizations to increase the decoding speed: (1) memory access improvement, (2) the use of new GPU properties such as concurrent kernel execution and advanced atomics to compensate buffering latency.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122526392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}