
Latest publications from the 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)

Sparse signal recovery from compressed measurements using hybrid particle swarm optimization
Hassaan Haider, J. Shah, Shahid Ikram, Idris Abd Latif
The computationally intensive part of compressed sensing (CS) is the reconstruction of a sparse signal from a small number of random projections. Finding a sparse solution to such an underdetermined system is highly ill-conditioned and therefore requires additional regularization constraints. This paper introduces a new approach for recovering a K-sparse signal from compressed samples using particle swarm optimization (PSO) together with the separable surrogate functionals (SSF) algorithm. The proposed hybrid mechanism, applied with appropriate regularization constraints, speeds up the convergence of PSO, and the original sparse signal is recovered with great precision. Simulation results show that the signal estimated with the PSO-SSF combination outperforms signal recovery using the PSO, SSF, and parallel coordinate descent (PCD) methods in terms of reconstruction accuracy. Finally, the efficiency of the proposed algorithm is validated experimentally by exactly recovering a one-dimensional K-sparse signal from only a few non-adaptive random measurements.
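The SSF step in such a hybrid scheme is an iterative-shrinkage update. A minimal illustrative sketch of iterative soft-thresholding for recovering a sparse x from y = Ax (not the authors' implementation; the problem size, step size, and regularization weight below are arbitrary choices):

```python
import random

random.seed(0)

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft_threshold(v, t):
    # l1 proximal step: shrink each entry toward zero by t.
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

# Toy problem: m = 8 random measurements of an n = 16, K = 2 sparse signal.
n, m = 16, 8
x_true = [0.0] * n
x_true[3], x_true[11] = 1.0, -0.7
A = [[random.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)] for _ in range(m)]
y = matvec(A, x_true)

# Iterative shrinkage: gradient step on ||y - Ax||^2, then soft-threshold.
x = [0.0] * n
step, lam = 0.1, 0.02
for _ in range(500):
    r = [yi - pi for yi, pi in zip(y, matvec(A, x))]               # residual
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
    x = soft_threshold([xj + step * gj for xj, gj in zip(x, g)], step * lam)

err = max(abs(a - b) for a, b in zip(x, x_true))
print("max recovery error:", round(err, 3))
```

The PSO stage of the hybrid method would supply candidate solutions around which such shrinkage iterations refine the estimate.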
Citations: 1
Automatic classification of diabetic macular edema using a modified completed Local Binary Pattern (CLBP)
S. T. Lim, M. K. Ahmed, Sungbin Lim
Diabetic macular edema is the leading cause of visual loss for patients with diabetic retinopathy, a complication of diabetes. Early screening and treatment have been shown to prevent blindness in diabetic retinopathy and diabetic macular edema. The Early Treatment Diabetic Retinopathy Study (ETDRS) and the Diabetic Macular Edema Disease Severity Scale are the common screening standards based on the distance of exudates from the fovea. Instead of focusing on the macula region, this research adopts a global approach using texture classification to grade fundus images into three stages: normal, moderate diabetic macular edema, and severe diabetic macular edema. The proposed algorithm starts with a modified completed Local Binary Pattern (CLBP) to extract local gray levels from all RGB channels. The obtained feature vector is then fed into a multiclass Support Vector Machine (SVM) for classification. The 100 fundus images selected for the training and testing sets were taken from MESSIDOR, and these images were reviewed by an ophthalmologist for cross-validation. The algorithm using the standard CLBP demonstrates a sensitivity of 67% with a specificity of 30%, while the proposed modified CLBP yields a higher sensitivity and specificity of 80% and 70%, respectively.
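The CLBP descriptor builds on the basic LBP operator. As a point of reference, a minimal sketch of the basic LBP code and its histogram on a grey-level image (this is only the sign component; the authors' modified CLBP adds further components not shown here):

```python
def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the sign bits clockwise into one byte (0..255)."""
    c = patch[1][1]
    neigh = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
             patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(neigh) if p >= c)

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels of a grey image."""
    h = [0] * 256
    for r in range(1, len(img) - 1):
        for col in range(1, len(img[0]) - 1):
            patch = [row[col - 1:col + 2] for row in img[r - 1:r + 2]]
            h[lbp_code(patch)] += 1
    return h

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
print(lbp_code(img))  # all neighbours below the centre -> code 0
```

In the paper's pipeline such a histogram, computed per RGB channel, forms the feature vector passed to the multiclass SVM.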
Citations: 8
Multiple trials in event related fMRI for different conditions
R. Zafar, A. Malik, Aliyu Nuhu Shuaibu, M. J. U. Rehman, S. Dass
Experimental design plays a key role in functional magnetic resonance imaging (fMRI) data analysis. Block designs are suitable for localizing functional areas but cannot measure transient changes in brain activity. Event-related design is a better approach and, like single-trial analyses, saves time and resources. In this study, we explored event-related designs with single trials and with multiple trials in different orders. In the multi-trial case, instead of using many trials, we performed analyses with two trials per image. The results suggest that the combination of multiple trials, trial order, and selection of significant voxels can give better results in terms of classification accuracy. Moreover, one or two trials per image saves resources compared to many trials.
Citations: 0
Cooperative non-orthogonal multiple access using two-way relay
Chun Yeen Ho, C. Leow
Existing work on cooperative non-orthogonal multiple access (NOMA) considers one-way relaying, which consumes extra channel resources for the relaying operation. The use of extra channel resources degrades spectral efficiency. This paper proposes two-way relaying in cooperative NOMA to enhance spectral efficiency. The proposed scheme enables two-way information exchange between the base station and users without consuming extra channel resources. In addition, a NOMA power allocation region is proposed to achieve a better rate than the orthogonal multiple access (OMA) scheme. Monte Carlo simulations show that the proposed scheme achieves a better sum rate than the OMA scheme and conventional cooperative NOMA.
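The sum-rate gain NOMA draws from power-domain superposition can be illustrated without the relay. A hedged one-hop downlink sketch for two users with hypothetical channel gains and power split (not the paper's two-way relay scheme):

```python
from math import log2

def noma_rates(snr, a_near, g_near, g_far):
    """Two-user downlink NOMA rates with power fraction a_near for the
    near user and 1 - a_near for the far user. The near user removes the
    far user's signal via successive interference cancellation (SIC);
    the far user decodes while treating the near signal as noise."""
    a_far = 1.0 - a_near
    r_far = log2(1 + a_far * snr * g_far / (a_near * snr * g_far + 1))
    r_near = log2(1 + a_near * snr * g_near)  # interference-free after SIC
    return r_near, r_far

def oma_rates(snr, g_near, g_far):
    """Orthogonal access baseline: each user gets half the channel uses."""
    return 0.5 * log2(1 + snr * g_near), 0.5 * log2(1 + snr * g_far)

snr = 100.0                  # transmit SNR (linear scale), assumed value
g_near, g_far = 1.0, 0.1     # hypothetical channel gains
rn, rf = noma_rates(snr, 0.2, g_near, g_far)
on, of = oma_rates(snr, g_near, g_far)
print(rn + rf > on + of)     # NOMA sum rate exceeds OMA in this setting
```

Sweeping `a_near` traces out the kind of power allocation region over which NOMA outperforms the OMA baseline.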
Citations: 17
Tumor detection and whole slide classification of H&E lymph node images using convolutional neural network
Mohammad F. Jamaluddin, M. F. A. Fauzi, F. S. Abas
Histopathological analysis of tissues has been gaining a lot of interest recently, from developing computer algorithms that assist pathologists with cell detection and counting to tissue classification and cancer grading. With the advent of whole slide imaging, the field of digital pathology has gained enormous popularity and is currently regarded as one of the most promising avenues of diagnostic medicine. Deep learning on image sets has evolved rapidly, with many proposed models producing state-of-the-art object classification results. This is not limited to large databases such as ImageNet but has also seen applications in other areas of medical image analysis. In this paper we carefully construct and expand a deep network to classify normal and tumor slides in histology images of lymph node tissue. We propose our own convolutional neural network model with modest requirements, using 64×64×3 input images and 12 convolutional layers with max pooling and ReLU activation. Our method achieves a better AUC of 0.94 than the winner of the Camelyon16 Challenge, whose AUC was 0.925.
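The slide-level comparison above is stated in terms of AUC. A small self-contained way to compute AUC from labels and classifier scores via the pair-ordering (Mann-Whitney) identity, using hypothetical scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of (positive, negative) pairs that the scores order correctly,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical slide-level tumor scores for 3 tumor and 4 normal slides.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 0.9166666666666666
```

One mis-ordered pair (the 0.4 tumor slide below the 0.5 normal slide) out of twelve gives 11/12.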
Citations: 13
A modified direct data domain STAP approach based on cost function reconstruction
Jie He, Da-Zheng Feng, Xiao-Jun Yang
In this paper, a hybrid space-time adaptive processing (STAP) algorithm combining the direct data domain (DDD) approach with cost function reconstruction is presented to address the sample support problem at a low cost in space-time aperture loss. The correlation matrix estimated in the DDD approach is partitioned into sub-matrices, and two equivalent cost functions are reconstructed. By iteratively solving the cost functions, the sample support requirements and computational burden can be mitigated. Experimental results on real data show that the proposed algorithm outperforms the conventional DDD method and DDD-JDL while incurring low aperture loss.
Citations: 0
An optimized low computational algorithm for human fall detection from depth images based on Support Vector Machine classification
M. N. Mohd, Yoosuf Nizam, S. Suhaila, M. M. Jamil
Systems that classify human activities to identify unintentional falls are in high demand and play an important role in daily life. Falls are the main obstacle preventing elderly people from living independently and are a major health concern for an aging population. Different approaches are used to develop fall detection systems for the elderly and people with special needs. The three basic approaches are wearable devices, ambient-based devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which users often reject because of high false-alarm rates and the difficulty of carrying the devices during daily activities. This paper proposes a fall detection system based on an algorithm that combines machine learning with human activity measurements, such as changes in the subject's height and the rate of change during an activity. Falls are distinguished from other activities of daily life using the subject's height and the changes in velocity and acceleration extracted from depth information. Finally, the position of the subject and SVM classification are used for fall confirmation. In the experiments, the proposed system achieved an average accuracy of 97.39%, with a sensitivity of 100% and a specificity of 96.61%.
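The height, velocity, and acceleration features described above can be derived from a depth-tracked height series by finite differences. A hedged sketch with a hypothetical height track, sampling rate, and thresholds (the SVM confirmation stage is omitted):

```python
def motion_features(heights, dt):
    """Velocity and acceleration of the subject's tracked height
    (e.g. head height from a depth camera) by finite differences."""
    v = [(b - a) / dt for a, b in zip(heights, heights[1:])]
    acc = [(b - c) / dt for c, b in zip(v, v[1:])]
    return v, acc

# Hypothetical height track (metres) sampled at 10 Hz: standing, then a fall.
heights = [1.7, 1.7, 1.69, 1.5, 1.1, 0.6, 0.4, 0.4]
v, acc = motion_features(heights, dt=0.1)

# A crude pre-screen before any SVM stage: large height drop plus high
# downward speed (thresholds here are illustrative, not from the paper).
fall_candidate = (heights[0] - min(heights) > 0.9) and (min(v) < -3.0)
print(fall_candidate)
```

In a full system such features, together with the subject's position, would be fed to the SVM for the final fall/no-fall decision.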
Citations: 7
Software profiling analysis for DNA microarray image processing algorithm
Omar Salem Baans, A. B. Jambek
Microarray analysis is one of the most suitable tools available to scientists studying gene expression from DNA sequences. Through microarray analysis, gene expression sequences can be obtained and biological information on many diseases can be acquired. The gene expression information contained in a microarray can be extracted using image-processing techniques. Microarray image processing consists of three main steps: gridding, segmentation, and intensity extraction. This paper analyses the computational time of this microarray image processing. The results show that intensity extraction consumes the majority of the overall computational time. More detailed analysis reveals that this high computational time is due to the background correction part of the process, as discussed in the second part of this paper.
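A stage-by-stage timing breakdown of the kind used here can be obtained with a simple profiling wrapper. A sketch with trivial stand-in stages (the real gridding, segmentation, and intensity-extraction routines are not reproduced):

```python
import time

def profile(stages, data):
    """Run a pipeline of (name, fn) stages, timing each one, and report
    each stage's share of the total -- the breakdown used to locate a
    hot spot such as intensity extraction."""
    timings = []
    for name, fn in stages:
        t0 = time.perf_counter()
        data = fn(data)
        timings.append((name, time.perf_counter() - t0))
    total = sum(t for _, t in timings) or 1e-12  # guard against zero total
    return [(name, t, 100.0 * t / total) for name, t in timings]

# Hypothetical stand-ins for the three microarray steps.
stages = [
    ("gridding",             lambda img: img),
    ("segmentation",         lambda img: img),
    ("intensity extraction", lambda img: sum(sum(row) for row in img)),
]
report = profile(stages, [[1, 2], [3, 4]])
for name, secs, pct in report:
    print(f"{name}: {pct:.1f}%")
```

Replacing the lambdas with the real routines would reproduce the kind of profile the paper reports, in which intensity extraction dominates.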
Citations: 2
Intra block prediction using first-second row non-directional samples in HEVC video coding
E. Jaja, A. Rahman, Z. Omar, M. Zabidi, U. U. Sheikh
This paper presents two intra prediction algorithms for the High Efficiency Video Coding (HEVC) encoder that reduce computational complexity and increase encoding speed. The first algorithm exploits the high spatial correlation among neighboring pixels by substituting the reference samples with the first row, or the first and second rows, of the current block to be predicted, while the pixel intensities in the remaining rows (or columns, in the case of horizontal predictions) are extrapolated as usual. Second, owing to spatial correlations among adjacent blocks, the intra prediction mode of the current block has a high probability of belonging to the most probable mode set or of being a slight variation of one of the most probable modes. These algorithms are combined and implemented on the HM16 reference software, and show speedups of 23.4% and 22.7% in encoding time using the all-intra-main configuration, with minimal bitrate reductions of 0.21% and 0.22%, respectively.
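For context, standard HEVC intra prediction fills a block by extrapolating from its reference row and column. A minimal sketch of the conventional horizontal and DC modes that the proposed row-substitution scheme builds on (not the modified algorithm itself):

```python
def predict_horizontal(left_col, n):
    """HEVC-style horizontal intra mode: every sample in a row is
    extrapolated from that row's left reference sample."""
    return [[left_col[r]] * n for r in range(n)]

def predict_dc(top_row, left_col):
    """DC intra mode: a flat block at the mean of the reference samples."""
    n = len(top_row)
    dc = round(sum(top_row + left_col) / (2 * n))
    return [[dc] * n for _ in range(n)]

# Hypothetical 4x4 block references.
left = [100, 102, 104, 106]   # left reference column
top = [100, 100, 100, 100]    # top reference row
print(predict_horizontal(left, 4)[2])  # [104, 104, 104, 104]
print(predict_dc(top, left)[0][0])     # 102
```

The paper's first algorithm replaces these boundary references with the block's own first (or first and second) predicted rows before extrapolating the rest.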
Citations: 0
A deep architecture for face recognition based on multiple feature extraction techniques
Saleh Albelwi, A. Mahmood
Some of the best current face recognition approaches use feature extraction techniques based on Principal Component Analysis (PCA), Local Binary Patterns (LBP), autoencoders (non-linear PCA), and the like. While each of these feature techniques works fairly well, we propose combining multiple feature extractors with deep learning in one system so that the overall face recognition accuracy can be improved. The output of the multiple feature extractions is classified using a deep learning approach; deep learning algorithms have a high capability to learn complex functions in order to handle difficult computer vision tasks. Our proposed method integrates the outputs of three different feature extractors, specifically PCA, LBP+PCA, and dimensionality reduction of LBP features using a Neural Network (NN). The features from these three techniques are concatenated to form a joint feature vector. This feature vector is fed into a deep Stacked Sparse Autoencoder (SSA) as a classifier to generate the recognition results. Our proposed approach is evaluated on the ORL and AR face databases. The experimental results indicate that our system outperforms existing ones based on individual feature techniques, as well as reported systems employing multiple feature types.
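The fusion step described above concatenates per-extractor outputs into one joint vector. A hedged sketch with trivial stand-in extractors (the real PCA, LBP+PCA, and NN reducers are not reproduced; normalization per extractor is an illustrative choice, not from the paper):

```python
def joint_feature_vector(image, extractors):
    """Concatenate the (length-normalised) outputs of several feature
    extractors into one joint vector for a downstream classifier."""
    joint = []
    for extract in extractors:
        f = extract(image)
        norm = sum(x * x for x in f) ** 0.5 or 1.0  # avoid divide-by-zero
        joint.extend(x / norm for x in f)
    return joint

# Hypothetical stand-ins for the PCA / LBP+PCA / LBP+NN extractors,
# applied to a toy 1-D "image".
extractors = [
    lambda img: [sum(img)],             # stand-in for the PCA feature
    lambda img: [max(img), min(img)],   # stand-in for the LBP+PCA feature
    lambda img: [img[0] - img[-1]],     # stand-in for the LBP+NN feature
]
v = joint_feature_vector([3, 1, 2], extractors)
print(len(v))  # 1 + 2 + 1 = 4 components
```

In the paper, the resulting joint vector is what the Stacked Sparse Autoencoder classifies.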
{"title":"A deep architecture for face recognition based on multiple feature extraction techniques","authors":"Saleh Albelwi, A. Mahmood","doi":"10.1109/ICSIPA.2017.8120642","DOIUrl":"https://doi.org/10.1109/ICSIPA.2017.8120642","url":null,"abstract":"Some of the best current face recognition approaches use feature extraction techniques based on Principal Component Analysis (PCA), Local Binary Patterns (LBP), an Autoencoder (non-linear PCA), and similar methods. While each of these feature techniques works fairly well, we propose to combine multiple feature extractors with deep learning in a single system so that the overall face recognition accuracy can be improved. The output from the multiple feature extractions is classified using a deep learning approach. Deep learning algorithms are well suited to learning the more complex functions required for difficult computer vision tasks. Our proposed method integrates the output of three different feature extractors, specifically PCA, LBP+PCA, and dimensionality reduction of LBP features using a Neural Network (NN). The features from these three techniques are concatenated to form a joint feature vector. This feature vector is fed into a deep Stacked Sparse Autoencoder (SSA) acting as a classifier to generate the recognition results. Our proposed approach is evaluated on the ORL and AR face databases. 
The experimental results indicate that our system outperforms existing ones based on individual feature techniques as well as reported systems employing multiple feature types.","PeriodicalId":268112,"journal":{"name":"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114931147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
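The fusion step this abstract describes — concatenating per-image feature vectors from several extractors into one joint vector — can be sketched as follows. This is a rough illustration assuming NumPy, with a plain SVD-based PCA and a basic 8-neighbour LBP histogram standing in for the paper's three extractors; the function names and the choice of `k` are hypothetical, and the SSA classifier stage is not reproduced:

```python
import numpy as np

def pca_features(X: np.ndarray, k: int) -> np.ndarray:
    """Project the rows of X (n_samples x n_pixels) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lbp_image(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP code for the interior pixels of a grayscale image."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # Neighbour plane shifted by (dy, dx); set the bit where neighbour >= centre.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

def joint_features(images: list, k: int = 8) -> np.ndarray:
    """Concatenate k PCA coefficients with a normalised 256-bin LBP histogram per image."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    pca = pca_features(X, k)
    rows = []
    for i, im in enumerate(images):
        hist = np.bincount(lbp_image(im).ravel(), minlength=256).astype(float)
        rows.append(np.concatenate([pca[i], hist / hist.sum()]))
    return np.stack(rows)
```

In the paper's pipeline the joint vector would then be fed to the stacked sparse autoencoder for classification; any off-the-shelf classifier can be substituted to experiment with the fused features.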
Journal
2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)