
Latest publications: 2015 Signal Processing and Intelligent Systems Conference (SPIS)

Singular Lorenz Measures Method for seizure detection using KNN-Scatter Search optimization algorithm
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422314
Morteza Behnam, H. Pourghassem
An offline algorithm for detecting intractable epileptic seizures in children plays a vital role in surgical intervention. In this paper, after preprocessing and windowing using the Discrete Wavelet Transform (DWT), the EEG signal is decomposed into five brain rhythms. These rhythms are formed into a 2D pattern by upsampling. We propose a novel feature-extraction scenario called the Singular Lorenz Measures Method (SLMM). In our method, Chan's Singular Value Decomposition (Chan's SVD), computed in two phases (QR factorization followed by the Golub-Kahan-Reinsch algorithm), yields the singular values as the energies of the signal on an orthogonal space for the rhythm patterns in all windows. The Lorenz curve, a depiction of the Cumulative Distribution Function (CDF) of the set of singular values, is then computed. From the relative inequality measures, Lorenz-inconsistent and Lorenz-consistent features are extracted. Moreover, a hybrid of K-Nearest Neighbor (KNN) and Scatter Search (SS) is applied as the optimization algorithm. The Multi-Layer Perceptron (MLP) neural network is also optimized with respect to its hidden layer and learning algorithm. The optimal attributes selected using the optimized MLP classifier are employed to recognize seizure attacks. Ultimately, seizure and non-seizure signals are classified in offline mode with an accuracy rate of 90.0% and an MSE variance of 1.47×10⁻⁴.
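The core of SLMM as described — ranking a window's singular values and summarizing their inequality via the Lorenz curve — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `lorenz_curve` and `gini` helpers and the random stand-in pattern are assumptions, with the Gini coefficient standing in for one of the paper's relative inequality measures.

```python
import numpy as np

def lorenz_curve(singular_values):
    """Lorenz curve of a set of singular values: cumulative share of the
    total spectral 'energy' held by the smallest k values, from (0,0) to (1,1)."""
    s = np.sort(np.asarray(singular_values, dtype=float))
    cum = np.cumsum(s) / np.sum(s)
    return np.concatenate(([0.0], cum))

def gini(singular_values):
    """Gini coefficient of the singular-value distribution, a standard
    relative inequality measure derived from the Lorenz curve."""
    s = np.sort(np.asarray(singular_values, dtype=float))
    n = s.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * s) / (n * np.sum(s)) - (n + 1) / n

# 2D pattern of one windowed rhythm (random stand-in for real EEG data)
rng = np.random.default_rng(0)
pattern = rng.standard_normal((32, 32))
sv = np.linalg.svd(pattern, compute_uv=False)   # singular values only
curve = lorenz_curve(sv)
inequality = gini(sv)
```

A perfectly equal spectrum gives a Gini of 0 (the Lorenz curve hugs the diagonal); concentration of energy in a few singular values pushes it toward 1.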
Citations: 15
Entropy-based fuzzy C-means with weighted hue and intensity for color image segmentation
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422320
E. Rajaby, S. Ahadi, H. Aghaeinia
Image segmentation is the task of grouping pixels based on similarity. In this paper, the problem of segmenting color images, especially noisy ones, is studied. To improve segmentation speed and avoid redundant calculations, our method uses only two rationally chosen color components, hue and intensity. These two components are combined in a specially defined cost function, in which the impact of each is controlled by a weight (the hue weight and the intensity weight). These weights focus the segmentation on the more informative color component, improving both speed and accuracy. We also use entropy maximization in the core of the cost function to improve segmentation performance. Furthermore, we suggest a fast initialization scheme based on peak finding in a two-dimensional histogram that prevents fuzzy C-means from converging to a local minimum. Our experiments indicate that the proposed method performs better than several related state-of-the-art methods.
Citations: 1
Wavelet image denoising based spatial noise estimation
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422317
Souad Benabdelkader, Ouarda Soltani
The classical wavelet denoising scheme estimates the noise level in the wavelet domain using only the upper detail subband. In this paper, we present a hybrid method for wavelet image denoising in which the standard deviation of the noise is estimated over all image pixels in the spatial domain within an adaptive edge-preservation scheme. That estimate is then used to calculate the threshold for wavelet-coefficient shrinkage.
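For contrast with the paper's spatial-domain estimate, the classical wavelet-domain pipeline it replaces can be sketched as follows: a robust noise sigma from the detail coefficients (the Donoho-Johnstone MAD estimator) feeding the universal shrinkage threshold. The function names and the synthetic subband are assumptions; a real implementation would obtain detail subbands from an actual DWT (e.g. via PyWavelets).

```python
import numpy as np

def noise_sigma_mad(detail_coeffs):
    """Classical robust noise estimate from the finest detail subband:
    sigma = median(|d|) / 0.6745 (Donoho-Johnstone MAD estimator)."""
    return np.median(np.abs(np.ravel(detail_coeffs))) / 0.6745

def universal_threshold(sigma, n):
    """VisuShrink universal threshold T = sigma * sqrt(2 ln n)."""
    return sigma * np.sqrt(2.0 * np.log(n))

def soft_shrink(coeffs, t):
    """Soft-threshold coefficients: shrink magnitudes toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Synthetic "detail subband": pure Gaussian noise with known sigma = 2,
# so the MAD estimate should land close to 2
rng = np.random.default_rng(1)
subband = rng.normal(0.0, 2.0, size=(256, 256))
sigma_hat = noise_sigma_mad(subband)
t = universal_threshold(sigma_hat, subband.size)
shrunk = soft_shrink(subband, t)
```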
Citations: 4
An adaptive single image method for super resolution
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422334
A. Mokari, A. Ahmadyfard
In this paper we propose an adaptive method for single-image super resolution that exploits self-similarity. Using the similarity between patches of the input image and a downsampled version of it, we create the super-resolution image. In the proposed method, we first segment the input image. For each segment with significant intensity variance, we increase the overlap between patches and reduce the patch size. Conversely, for image segments with low detail, we decrease the overlap between patches and increase the patch size. Experimental results show that the proposed method is significantly faster than existing methods, while its performance in terms of the PSNR criterion is comparable.
Citations: 1
A heuristic method to bias protein's primary sequence in protein structure prediction
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422308
N. Mozayani, Hossein Parineh
Protein Structure Prediction (PSP) is one of the most studied topics in bioinformatics. Given the intrinsic hardness of the problem, several computational methods, mainly based on artificial intelligence, have been proposed over the last decades. In this paper we break the main PSP process into two steps. The first step biases the sequence, i.e., it very quickly provides a considerably better conformation energy than the primary sequence with zero energy. The second step, studied in a companion paper, feeds this biased sequence to another algorithm to find the best possible conformation. For the first step, we developed a new heuristic method to find a low-energy structure of a protein. The main concept of this method is rule extraction from previously determined conformations. We call this method the Fast-Bias-Algorithm (FBA), mainly because it produces a modified structure with better energy from the primary (linear) structure of a protein in a remarkably short time compared to the whole process. The method was implemented in NetLogo. We tested the algorithm on several benchmark sequences, ranging from 20- to 50-mers, in the two-dimensional Hydrophobic-Hydrophilic lattice model. Compared with other algorithms, our method reaches up to 62% of the energy of their best conformations in less than 2% of their time.
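The 2D Hydrophobic-Hydrophilic lattice objective the abstract refers to can be made concrete with a small energy function: each pair of non-consecutive hydrophobic (H) residues occupying adjacent lattice sites contributes -1. This is a standard HP-model sketch, not the authors' FBA code; the `hp_energy` helper and the example folds are illustrative.

```python
def hp_energy(sequence, path):
    """Energy of a 2D HP-lattice conformation: -1 for each pair of
    non-consecutive hydrophobic (H) residues on adjacent lattice sites."""
    pos = {p: i for i, p in enumerate(path)}   # lattice site -> residue index
    energy = 0
    for (x, y), i in pos.items():
        if sequence[i] != 'H':
            continue
        # check only right and up neighbors so each undirected contact counts once
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# "HHHH" folded into a unit square forms one topological H-H contact,
# while the fully stretched chain forms none
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
folded = hp_energy("HHHH", square)                                # -1
stretched = hp_energy("HHHH", [(0, 0), (1, 0), (2, 0), (3, 0)])   # 0
```

A search procedure such as the paper's bias step would try to drive this energy as low as possible.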
Citations: 1
QoS parameters analysis in VoIP network using adaptive quality improvement
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422315
M. Behdadfar, Ehsan Faghihi, M. E. Sadeghi
Managing a VoIP service using QoS analysis is vital for obtaining the desired voice quality. Quality improvement and rational resource utilization are two parameters involved in determining VoIP service functionality, and which one outweighs the other is a permanent challenge affecting developers' decisions. Hence, different approaches are introduced from time to time that impact VoIP service quality and resource utilization. This paper proposes a new approach to improving VoIP quality using the least possible resources: changing packet sizes and codecs adaptively on the sender side leads to acceptable quality on the receiver side. The results show how successful the proposed algorithm is, and its positive impact on QoS parameters is evaluated.
Citations: 10
Improved subspace-based speech enhancement using a novel updating approach for noise correlation matrix
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422318
N. Faraji, S. Ahadi
In this paper a new approach is presented to extend subspace-based speech enhancement to non-stationary noise. The new method updates the noise correlation matrix segment by segment, assuming that only the eigenvalues of the matrix vary with time. In other words, only the varying loudness of the noise signal is considered, as observed in the modulated white-noise case, where the eigenvectors are invariant over time. The proposed scheme for updating the noise correlation matrix is embedded in the framework of a soft-model-order-based subspace approach for speech enhancement. Experiments show significant improvement for different non-stationary noise types.
Citations: 2
Salient object detection via global contrast graph
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422332
F. Nouri, K. Kazemi, H. Danyali
In this paper, we propose an unsupervised bottom-up method that formulates salient object detection as finding the salient vertices of a graph. Global contrast is extracted in a novel graph-based framework to localize salient objects, and saliency values are assigned to regions according to node degrees on the graph. The proposed method has been applied to the SED2 dataset. Qualitative and quantitative evaluation shows that it detects salient objects appropriately in comparison with five state-of-the-art saliency models.
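The degree-based idea can be sketched with a toy region graph: edge weights are pairwise feature distances (global contrast), and a region's saliency is its weighted node degree. This is a simplified reconstruction under assumed inputs, not the authors' pipeline; `global_contrast_saliency`, the feature vectors, and the size weighting are all illustrative.

```python
import numpy as np

def global_contrast_saliency(region_features, region_sizes):
    """Toy degree-based saliency on a fully connected region graph:
    edge weights are pairwise feature distances, and each region's
    saliency is its size-weighted node degree, normalized to [0, 1]."""
    f = np.asarray(region_features, dtype=float)
    w = np.asarray(region_sizes, dtype=float)
    # pairwise feature distances = edge weights of the global-contrast graph
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    degree = dist @ w              # weighted degree of each vertex
    return degree / degree.max()

# Three regions in a color-feature space; the outlier region should pop out
features = [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0]]
sizes = [10.0, 12.0, 3.0]
sal = global_contrast_saliency(features, sizes)
```

The region farthest from the others in feature space accumulates the largest degree and hence the highest saliency.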
Citations: 4
Improving the performance of intelligent stock trading systems by using a high level representation for the inputs
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422304
Mojtaba Azimifar, Babak Nadjar Araabi, Hadi Moradi
Intelligent stock trading systems use soft-computing techniques to forecast the trend of a stock's price, but so-called market noise usually results in overtrading and loss of profit. To reduce the effect of noise on trading decisions, high-level representations can be used for the output of the trading system. The technical indicators that act as the system's inputs, however, suffer from these short-term irregularities as well. This paper suggests a high-level representation for the technical indicators to match the level of information in the outputs: digital low-pass filters are carefully designed to remove the transient fluctuations of the technical indicators without losing too much information. Several experiments on different stocks on the Tehran Stock Exchange show a major improvement in the performance of the intelligent stock trading systems.
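A minimal example of the kind of digital low-pass filtering described, here a first-order IIR filter (an exponential moving average) — an assumption, since the abstract does not specify the filter design — smoothing an alternating-noise indicator series:

```python
def low_pass(signal, alpha=0.1):
    """First-order IIR low-pass filter (exponential moving average):
    y[n] = alpha * x[n] + (1 - alpha) * y[n-1]."""
    out = []
    y = signal[0]                       # initialize at the first sample
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

# Alternating +/-1 "noise" around a flat trend is strongly attenuated,
# while a constant (trend-only) input passes through unchanged
smoothed = low_pass([(-1.0) ** i for i in range(200)], alpha=0.2)
```

Smaller `alpha` removes more of the short-term fluctuation at the cost of more lag, which mirrors the information-vs-smoothness trade-off the paper discusses.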
Citations: 2
Parallel secure turbo code for security enhancement in physical layer
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422336
A. Motamedi, Mohsen Najafi, N. Erami
Turbo codes have been an important subject in coding theory since 1993. They achieve a low Bit Error Rate (BER), but decoding complexity and delay are big challenges. On the other hand, considering the complexity and delay of separate blocks for coding and encryption, combining these processes guarantees the security and reliability of the communication system. In this paper a secure decoding algorithm running in parallel on General-Purpose Graphics Processing Units (GPGPU) is proposed. This is the first prototype of a fast, parallel Joint Channel-Security Coding (JCSC) system. Despite the encryption process, the algorithm maintains the desired BER and increases decoding speed. We considered several techniques for parallelism: (1) distributing the decoding load of a code word between multiple cores, (2) decoding several code words simultaneously, and (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimizations to increase decoding speed: (1) improved memory access, and (2) the use of new GPU features such as concurrent kernel execution and advanced atomics to compensate for buffering latency.
Citations: 3