Automatic human activity detection is one of the more difficult tasks in image segmentation applications because of variations in the size, type, shape and location of objects. In traditional probabilistic graphical segmentation models, intra- and inter-region segments may affect the overall segmentation accuracy. In addition, both directed and undirected graphical models, such as the Markov model and the conditional random field, have limitations in modelling human activity prediction and heterogeneous relationships. In this paper, we study and propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain graphical model. The system has three main phases: activity pre-processing, iterative threshold based image enhancement, and a chain graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
{"title":"A Novel Probabilistic Based Image Segmentation Model for Realtime Human Activity Detection","authors":"D. Ratnakishore, M. ChandraMohan, A. A. Rao","doi":"10.5121/sipij.2016.7602","DOIUrl":"https://doi.org/10.5121/sipij.2016.7602","url":null,"abstract":"Automatic human activity detection is one of the difficult tasks in image segmentation application due to variations in size, type, shape and location of objects. In the traditional probabilistic graphical segmentation models, intra and inter region segments may affect the overall segmentation accuracy. Also, both directed and undirected graphical models such as Markov model, conditional random field have limitations towards the human activity prediction and heterogeneous relationships. In this paper, we have studied and proposed a natural solution for automatic human activity segmentation using the enhanced probabilistic chain graphical model. This system has three main phases, namely activity pre-processing, iterative threshold based image enhancement and chain graph segmentation algorithm. Experimental results show that proposed system efficiently detects the human activities at different levels of the action datasets.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"89 1","pages":"11-27"},"PeriodicalIF":0.0,"publicationDate":"2016-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83886782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining the geometric features of buried magnetic materials that lie parallel to the soil surface is important for identifying anti-tank and anti-personnel mines that conform to standards; the number of false alarms can then be reduced by separating samples with non-standard geometries. For this purpose, this study measures the anomalies that buried samples produce in the horizontal component of the Earth's magnetic field using a magnetic sensor. A KMZ51 AMR sensor is used, and its position-controlled movement along the x-y axes is provided by a 2D scanning system. Trigger values of the sensor output are evaluated with respect to the scanned field. The experiments are repeated for samples with different geometries, and variables are defined for the geometric analysis. The experimental conclusions are discussed in detail.
{"title":"Determination of Buried Magnetic Materials Geometric Dimensions","authors":"Y. Ege, A. Kakilli, H. Citak, M. Coramik","doi":"10.5121/SIPIJ.2016.7502","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7502","url":null,"abstract":"It is important to find buried magnetic material’s geometric features that are parallel to the soil surface in order to determine anti-tank and anti-personnel mine compatible to standards. So that it is possible to decrease the number of false alarms by separating the samples that have got non-standard geometries. For this purpose, in this study the anomalies occurred at horizontal component of the earth’s magnetic field by buried samples are determined with magnetic sensor. In the study, KMZ51 AMR is used as the magnetic sensor. The position-controlled movement of the sensor along x-y axis is provided with 2D scanning system. Trigger values of sensor output are evaluated with respect to the scanning field. The experiments are redone for the samples at different geometries and variables are defined for geometric analysis. The experimental conclusions obtained from this paper will be discussed in detail.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"136 1","pages":"11-22"},"PeriodicalIF":0.0,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73626078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To bring new technological benefits to the general public, regional and local language recognition is now drawing the attention of researchers. As with other languages, a Bangla speech recognition scheme is in demand. A formant is a resonance frequency of the vocal tract, and formant frequencies play an important role in automatic speech recognition because of their noise-robust characteristics. In this paper, Bangla vowels are investigated to obtain formant frequencies and their corresponding bandwidths from continuous Bangla sentences; these are considered potential parameters for a wide range of voice applications. For the formant analysis, cepstrum-based formant estimation and Linear Predictive Coding (LPC) techniques are used. To acquire the formant characteristics, the widely available Bangla language corpus "SHRUTI", which contains rich continuous sentences, is used. Intensive experimentation is carried out to determine the formant characteristics (frequency and bandwidth) of Bangla vowels for both male and female speakers. Finally, the vowel recognition accuracy for Bangla is reported using the first three formants.
{"title":"FORMANT ANALYSIS OF BANGLA VOWEL FOR AUTOMATIC SPEECH RECOGNITION","authors":"T. Ghosh, Subir Saha, A. Ferdous","doi":"10.6084/M9.FIGSHARE.6170777.V1","DOIUrl":"https://doi.org/10.6084/M9.FIGSHARE.6170777.V1","url":null,"abstract":"To provide new technological benefits to the mass people, nowadays, regional and local language recognition draws attention to the researchers. Similarly to other languages, Bangla speech recognition scheme is demandable. A formant is considered as the resonance frequency of vocal tract. Formant frequencies play an important role for the purpose of automatic speech recognition, due to its noise robust characteristics. In this paper, Bangla vowels are investigated to acquire formant frequencies and its corresponding bandwidth from continuous Bangla sentences, which are considered as potential parameters for wide voice applications. For the purpose of formant analysis, cepstrum based formant estimation and Linear Predictive Coding (LPC) techniques are used. In order to acquire formant characteristics, enrich continuous sentences and widely available Bangla language corpus namely “SHRUTI” is considered. Intensive experimentation is carried out to determine formant characteristics (frequency and bandwidth) of Bangla vowels for both male and female speakers. Finally, vowel recognition accuracy of Bangla language is reported considering first three formants..","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"12 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78515029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mammography is the primary and most reliable technique for detecting breast cancer. Mammograms are examined for the presence of malignant masses and for indirect signs of malignancy such as microcalcifications, architectural distortion and bilateral asymmetry. However, mammograms are X-ray images taken with a low radiation dose, which results in low-contrast, noisy images. Malignancies in dense breasts are also difficult to detect because of the opaque, uniform background in mammograms. Hence, techniques for improving the visual screening of mammograms are essential, and image enhancement techniques are used to improve the visual quality of the images. This paper presents a comparative study of different preprocessing techniques used for the enhancement of mammograms in the mini-MIAS database. Performance of the image enhancement techniques is evaluated using objective image quality assessment measures, including simple statistical error metrics such as PSNR and human visual system (HVS) feature based metrics such as SSIM, NCC, UIQI and discrete entropy.
{"title":"Objective Quality Assessment of Image Enhancement Methods in Digital Mammography - A Comparative Study","authors":"Sheba K.U, S. GladstonRaj.","doi":"10.5121/SIPIJ.2016.7401","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7401","url":null,"abstract":"Mammography is the primary and most reliable technique for detection of breast cancer. Mammograms are examined for the presence of malignant masses and indirect signs of malignancy such as micro calcifications, architectural distortion and bilateral asymmetry. However, Mammograms are X-ray images taken with low radiation dosage which results in low contrast, noisy images. Also, malignancies in dense breast are difficult to detect due to opaque uniform background in mammograms. Hence, techniques for improving visual screening of mammograms are essential. Image enhancement techniques are used to improve the visual quality of the images. This paper presents the comparative study of different preprocessing techniques used for enhancement of mammograms in mini-MIAS data base. Performance of the image enhancement techniques is evaluated using objective image quality assessment techniques. They include simple statistical error metrics like PSNR and human visual system (HVS) feature based metrics such as SSIM, NCC, UIQI, and Discrete Entropy","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"32 1","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2016-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76610029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biometrics are used to identify a person effectively and are employed in almost all day-to-day applications. In this paper, we propose compression-based face recognition using the Discrete Wavelet Transform (DWT) and a Support Vector Machine (SVM). The concept of converting many images of a single person into one image by averaging is introduced to reduce execution time and memory. The DWT is applied to the averaged face image to obtain the approximation (LL) and detail bands. The LL band coefficients are given as input to the SVM to obtain support vectors (SVs). The LL coefficients of the DWT and the SVs are fused by arithmetic addition to extract the final features. The Euclidean Distance (ED) is used to compare test image features with database image features and compute the performance parameters. It is observed that the proposed algorithm performs better than existing algorithms.
{"title":"COMPRESSION BASED FACE RECOGNITION USING DWT AND SVM","authors":"M. SujathaB, C. T. Madiwalar, K. Sureshbabu, B. RajaK, R. VenugopalK.","doi":"10.5121/SIPIJ.2016.7304","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7304","url":null,"abstract":"The biometric is used to identify a person effectively and employ in almost all applications of day to day activities. In this paper, we propose compression based face recognition using Discrete Wavelet Transform (DWT) and Support Vector Machine (SVM). The novel concept of converting many images of single person into one image using averaging technique is introduced to reduce execution time and memory. The DWT is applied on averaged face image to obtain approximation (LL) and detailed bands. The LL band coefficients are given as input to SVM to obtain Support vectors (SV’s). The LL coefficients of DWT and SV’s are fused based on arithmetic addition to extract final features. The Euclidean Distance (ED) is used to compare test image features with database image features to compute performance parameters. It is observed that, the proposed algorithm is better in terms of performance compared to existing algorithms .","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"193 1","pages":"45-62"},"PeriodicalIF":0.0,"publicationDate":"2016-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86235828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the quality of a denoised image is one of the important tasks in image denoising applications. Numerous quality metrics, each with particular characteristics, have been proposed by researchers. In practice, the image acquisition systems for natural and medical images differ, so the noise introduced into these images also differs in nature. Considering this, the authors attempt to identify suitable quality metrics for Gaussian-, speckle- and Poisson-corrupted natural, ultrasound and X-ray images respectively. Sixteen different full-reference quality metrics are evaluated with respect to noise variance, and the metric best suited to each type of noise is identified. A strong need to develop noise-dependent quality metrics is also identified in this work.
{"title":"IDENTIFICATION OF SUITED QUALITY METRICS FOR NATURAL AND MEDICAL IMAGES","authors":"K. Thakur, Omkar H. Damodare, A. Sapkal","doi":"10.5121/SIPIJ.2016.7303","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7303","url":null,"abstract":"To assess quality of the denoised image is one of the important task in image denoising application. Numerous quality metrics are proposed by researchers with their particular characteristics till today. In practice, image acquisition system is different for natural and medical images. Hence noise introduced in these images is also different in nature. Considering this fact, authors in this paper tried to identify the suited quality metrics for Gaussian, speckle and Poisson corrupted natural, ultrasound and X-ray images respectively. In this paper, sixteen different quality metrics from full reference category are evaluated with respect to noise variance and suited quality metric for particular type of noise is identified. Strong need to develop noise dependent quality metric is also identified in this work.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"11 1","pages":"29-43"},"PeriodicalIF":0.0,"publicationDate":"2016-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72654156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ultimate goal of this study is to provide enhanced video object detection and tracking by eliminating the limitations of existing approaches. Although earlier work achieves a high performance ratio for video object detection and tracking, it requires considerable computation time. We therefore propose a novel video object detection and tracking technique that minimizes computational complexity. The proposed technique has five stages: preprocessing, segmentation, feature extraction, background subtraction and hole filling. First, the video clip in the database is split into frames. Preprocessing then removes noise; an adaptive median filter is used at this stage. The preprocessed image is segmented by means of a modified region growing algorithm. The segmented image is then subjected to feature extraction, in which multiple features are extracted from the segmented image and the background image; the feature values are compared to attain an optimal value, and a foreground image is obtained. Because the foreground image contains holes and discontinuities, it is subjected to the morphological operations of erosion and dilation to fill the holes and recover the object accurately. The moving object is thus tracked. The method is implemented on the MATLAB platform, and the outcomes are studied and compared with existing techniques to reveal the performance of the novel video object detection and tracking technique.
{"title":"AN INNOVATIVE MOVING OBJECT DETECTION AND TRACKING SYSTEM BY USING MODIFIED REGION GROWING ALGORITHM","authors":"G. Sujatha, Valli Kumari Vatsavayi","doi":"10.5121/SIPIJ.2016.7203","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7203","url":null,"abstract":"The ultimate goal of this study is to afford enhanced video object detection and tracking by eliminating the limitations which are existing nowadays. Although high performance ratio for video object detection and tracking is achieved in the earlier work it takes more time for computation. Consequently we are in need to propose a novel video object detection and tracking technique so as to minimize the computational complexity. Our proposed technique covers five stages they are preprocessing, segmentation, feature extraction, background subtraction and hole filling. Originally the video clip in the database is split into frames. Then preprocessing is performed so as to get rid of noise, an adaptive median filter is used in this stage to eliminate the noise. The preprocessed image then undergoes segmentation by means of modified region growing algorithm. The segmented image is subjected to feature extraction phase so as to extract the multi features from the segmented image and the background image, the feature value thus obtained are compared so as to attain optimal value, consequently a foreground image is attained in this stage. The foreground image is then subjected to morphological operations of erosion and dilation so as to fill the holes and to get the object accurately as these foreground image contains holes and discontinuities. Thus the moving object is tracked in this stage. This method will be employed in MATLAB platform and the outcomes will be studied and compared with the existing techniques so as to reveal the performance of the novel video object detection and tracking technique.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"33 1","pages":"39-55"},"PeriodicalIF":0.0,"publicationDate":"2016-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87975610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biometrics are used to identify a person effectively. In this paper, we propose an optimized face recognition system based on a log transformation and the combination of face image feature vectors. The face images are preprocessed using a Gaussian filter to enhance image quality. The log transformation is applied to the enhanced image to generate features. The feature vectors of many images of a single person are converted into a single vector by arithmetic averaging. The Euclidean distance (ED) is used to compare the test image feature vector with the database feature vectors to identify a person. Experiments show that the performance of the proposed algorithm is better than that of existing algorithms.
{"title":"Optimized Biometric System Based on Combination of Face Images and Log Transformation","authors":"C. SateeshKumarH, B. RajaK, R. VenugopalK.","doi":"10.5121/SIPIJ.2016.7204","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7204","url":null,"abstract":"The biometrics are used to identify a person effectively. In this paper, we propose optimised Face recognition system based on log transformation and combination of face image features vectors. The face images are preprocessed using Gaussian filter to enhance the quality of an image. The log transformation is applied on enhanced image to generate features. The feature vectors of many images of a single person image are converted into single vector using average arithmetic addition. The Euclidian distance(ED) is used to compare test image feature vector with database feature vectors to identify a person. It is experimented that, the performance of proposed algorithm is better compared to existing algorithms.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"20 1","pages":"57-72"},"PeriodicalIF":0.0,"publicationDate":"2016-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83649378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting clouds in satellite imagery is becoming more important with the increasing availability of data generated by Earth-observing satellites. Intelligent processing of the enormous amount of data received by hundreds of Earth receiving stations, with approaches oriented specifically to satellite images, is therefore a pressing need. Cloud detection is one of the most important early stages of satellite image processing. While many approaches deal with different semantic content, approaches that deal specifically with cloud and cloud-cover detection are rare. The technique presented in this paper performs scene-based adaptive cloud and cloud-cover detection and position estimation, under the assumption that sun reflection, background variation and scattering are constant. The capability of the developed system was tested on dedicated satellite images and assessed in terms of percentage cloud coverage. The system used for this work comprises an Intel(R) Xeon(R) CPU E31245 @ 3.30 GHz processor with MATLAB 13 software and a C6713 DSP processor with Code Composer Studio 3.1.
{"title":"DEVELOPMENT AND HARDWARE IMPLEMENTATION OF AN EFFICIENT ALGORITHM FOR CLOUD DETECTION FROM SATELLITE IMAGES","authors":"Pooja Shah","doi":"10.5121/SIPIJ.2016.7205","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7205","url":null,"abstract":"Detecting clouds in satellite imagery is becoming more important with increasing data availability which are generated by earth observing satellites. Hence, intellectual processing of the enormous amount of data received by hundreds of earth receiving stations, with specific satellite image oriented approaches, presents itself as a pressing need. One of the most important steps in previous stages of satellite image processing is cloud detection. While there are many approaches that compact with different semantic meaning, there are rarely approaches that compact specifically with cloud and cloud cover detection. In this paper, the technique presented is the scene based adaptive cloud, cloud cover detection and find the position with assumption of sun reflection, background varying and scattering are constant. The capability of the developed system was tested using dedicated satellite images and assessed in terms of cloud percentage coverage. The system used for this process comprises of Intel(R) Xenon(R) CPU E31245 @ 3.30GHz processor along with MATLAB 13 software and DSPC6713 processor along with Code Compose Studio 3.1.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"70 1","pages":"73-80"},"PeriodicalIF":0.0,"publicationDate":"2016-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86923400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The objective of this paper was to develop a means of rapidly obtaining RGB image data, as part of an effort to develop a low-cost method of image processing and analysis based on Microsoft Excel. A simple standalone GUI (graphical user interface) software application called RGBExcel was developed to extract RGB image data from colour image files of any format. For a given image file, the output from the software is an Excel file with the data from the R (red), G (green), and B (blue) bands of the image contained in separate sheets. The raw data and any enhancements can be visualized by using the surface chart type in combination with other features. Since Excel can plot at most 255 by 255 pixels, larger images are downscaled to a maximum dimension of 255 pixels. Results from testing the application are discussed in the paper.
{"title":"RGBEXCEL : An RGB Image Data Extractor and Exporter for Excel Processing","authors":"P. A. Larbi","doi":"10.5121/SIPIJ.2016.7101","DOIUrl":"https://doi.org/10.5121/SIPIJ.2016.7101","url":null,"abstract":"The objective of this paper was to develop a means of rapidly obtaining RGB image data, as part of an effort to develop a low-cost method of image processing and analysis based on Microsoft Excel. A simple standalone GUI (graphical user interface) software application called RGBExcel was developed to extract RGB image data from any colour image files of any format. For a given image file, the output from the software is an Excel file with the data from the R (red), G (green), and B (blue) bands of the image contained in different sheets. The raw data and any enhancements can be visualized by using the surface chart type in combination with other features. Since Excel can plot a maximum dimension of 255 by 255 pixels, larger images are downscaled to have a maximum dimension of 255 pixels. Results from testing the application are discussed in the paper.","PeriodicalId":90726,"journal":{"name":"Signal and image processing : an international journal","volume":"55 1","pages":"01-09"},"PeriodicalIF":0.0,"publicationDate":"2016-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72810228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}