Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313932
M. Pham
In this paper, we present a method that uses image entropy to choose the coefficients of the active contour energy equation. We compute the entropy of the computed tomography image for each pair of tension and elasticity coefficients; when the active contour is optimal, the image has minimum entropy (i.e., the image changes least). In addition, to solve the energy equation (the optimisation problem), we use dynamic programming with constraints, which increases the computational efficiency of the method. Together, these compound conditions detect the optimal active contour in the computed tomography image.
Title: Detecting the optimal active contour in the computed tomography image by using entropy to choose coefficients in energy equation
Published in: 2015 International Conference on Systems, Signals and Image Processing (IWSSIP)
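The abstract does not spell out its entropy definition. A minimal sketch, assuming the standard Shannon entropy of the grey-level histogram, might look like this; the selection rule would then evaluate it for each candidate (tension, elasticity) pair and keep the pair with minimum entropy (the `segment` callback below is hypothetical):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of `img`.

    Assumed definition: the paper does not give its exact formula.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0 * log 0 := 0
    return float(-np.sum(p * np.log2(p)))

def pick_coefficients(segment, candidates):
    """Selection-rule sketch: `segment(alpha, beta)` is a hypothetical
    callback returning the image after contour optimisation with the
    given tension/elasticity pair; keep the pair with minimum entropy."""
    return min(candidates, key=lambda ab: image_entropy(segment(*ab)))
```

A uniform image has zero entropy, while a noisy one scores close to the 8-bit maximum, which is what makes the minimum-entropy criterion usable as a stopping/selection rule.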
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313926
A. Alsam, H. J. Rivertz
The removal of high frequencies from an image while retaining edges is a complicated problem with many solutions in the literature. Most of these solutions are, however, iterative and computationally expensive. In this paper, we introduce a direct method with three basic steps. In the first, the image is convolved with a Gaussian function of a defined size. In the second, the gradients of the blurred image are compared with those of the original, and a third gradient, the minimum of the two at each pixel, is composed. Finally, the combined gradient is integrated in the Fourier domain to obtain the result.
Title: Fast scale space image decomposition
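The three steps read directly as code. Below is a minimal sketch, not the authors' implementation: a Gaussian blur, a per-pixel minimum-magnitude gradient, and a Fourier-domain Poisson solve for the integration step (the discrete-Laplacian eigenvalues used in the division are one standard choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_preserving_smooth(img, sigma=3.0):
    """Direct three-step decomposition sketch: blur, min-gradient, integrate."""
    f = img.astype(float)
    blurred = gaussian_filter(f, sigma)           # step 1: Gaussian convolution
    gx, gy = np.gradient(f)                       # step 2: compare gradients
    bx, by = np.gradient(blurred)
    keep_blur = np.hypot(bx, by) < np.hypot(gx, gy)
    cx = np.where(keep_blur, bx, gx)              # minimum-magnitude gradient
    cy = np.where(keep_blur, by, gy)
    # Step 3: integrate the combined field by solving Poisson's equation
    # in the Fourier domain (divide by discrete-Laplacian eigenvalues).
    div = np.gradient(cx, axis=0) + np.gradient(cy, axis=1)
    h, w = f.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    denom = (2 * np.cos(2 * np.pi * fy) - 2) + (2 * np.cos(2 * np.pi * fx) - 2)
    denom[0, 0] = 1.0                             # DC term is undetermined
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    out += f.mean() - out.mean()                  # restore the lost mean level
    return out
```

On a pure-noise image the blurred gradients win almost everywhere, so the output is close to the plain blur; near strong edges the original gradient is the smaller one is false, so edges survive because the smaller (blurred) gradient there is still large.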
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314227
V. Kurilová, J. Pavlovičová, M. Oravec, Radoslav Rakar, Igor Marcek
Automatic extraction of retinal blood vessels is an important task in computer-aided diagnosis from retinal images. Without blood vessel extraction, structures with pathological findings such as microaneurysms, haemorrhages or neovascularisations could erroneously be confused with vessels. We developed two independent methods; each is a combination of different morphological operations with structuring elements of different types and sizes. Images from a standard database, with blood vessels marked by an ophthalmologist, were used for evaluation. Sensitivity, specificity and accuracy were used as measures of the methods' efficiency. Both approaches show promising results and could be used as part of image preprocessing before algorithms that detect pathological retinal findings.
Title: Retinal blood vessels extraction using morphological operations
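The abstract leaves the exact operations and structuring elements unspecified; one common morphological scheme for dark, elongated vessels is a black top-hat (closing minus original) taken over several structuring-element sizes. A sketch, not the authors' method, with illustrative sizes:

```python
import numpy as np
from scipy.ndimage import grey_closing

def vessel_response(green, sizes=(5, 9, 13)):
    """Black top-hat of the (vessel-dark) green channel at several
    structuring-element sizes; the max over sizes keeps both thin and
    thick vessels. Sizes are illustrative, not taken from the paper."""
    g = green.astype(float)
    return np.max([grey_closing(g, size=s) - g for s in sizes], axis=0)
```

A global threshold on the response then yields a binary vessel map, which can be scored by sensitivity, specificity and accuracy against the ophthalmologist's markings, as in the evaluation described above.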
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314230
T. Hamid, D. Al-Jumeily
The goal of this work is to advance a new methodology for measuring a dynamic severity cost impact for each host, by developing the Common Vulnerability Scoring System (CVSS), based on base, temporal and environmental metrics, into a Dynamic Vulnerability Scoring System (DVSS) based on intrinsic, time-based and ecological metrics. The interactions between vulnerabilities are considered and a dynamic impact metric is developed, which can be seen as a baseline between the static metric and the interaction between exposures. A new method has been developed to represent a unique dynamic severity cost, the total weight of all vulnerabilities on each host, which expresses the cost-centric severity of each state.
Title: A dynamic cost-centric risk impact metrics development
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314184
Warren Rieutort-Louis, Ognjen Arandjelovic
In this paper we introduce two novel methods for object recognition from video. Our major contributions are (i) the use of dense, overlapping local descriptors as a means of accurately capturing the appearance of generic, even untextured, objects; (ii) a framework for employing such descriptor sets for recognition from video; (iii) a detailed empirical examination of different aspects of the proposed model; and (iv) a comparative performance evaluation on a large object database. We describe and compare two bag-of-visual-words (BoVW) representations of an object's appearance in a video sequence: one using a single per-sequence bag-of-words and one using a set of per-frame bags-of-words. Empirical results demonstrate the effectiveness of both representations, with somewhat better performance from the former.
Title: Bo(V)W models for object recognition from video
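The two representations can be sketched as follows, given visual-word indices per frame. The chi-square set-to-set matching rule for the per-frame variant is a hypothetical choice; the paper's actual matching rule is not given here:

```python
import numpy as np

def per_sequence_bow(frame_words, vocab_size):
    """Pool word indices from all frames into one normalised histogram."""
    all_words = np.concatenate(frame_words)
    h = np.bincount(all_words, minlength=vocab_size).astype(float)
    return h / h.sum()

def per_frame_bow(frame_words, vocab_size):
    """One normalised histogram per frame."""
    return [np.bincount(w, minlength=vocab_size).astype(float) / max(len(w), 1)
            for w in frame_words]

def sequence_distance(set_a, set_b):
    """Set-to-set distance: mean, over frames of A, of the chi-square
    distance to the nearest frame of B (an assumed matching rule)."""
    d = lambda p, q: 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))
    return float(np.mean([min(d(p, q) for q in set_b) for p in set_a]))
```

The per-sequence histogram is cheaper to match (one comparison per sequence pair) but discards frame-level variation, which is the trade-off the paper's comparison probes.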
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7313928
T. Berisha, Cise Midoglu, Samira Homayouni, P. Svoboda, M. Rupp
WLAN technology, defined by the IEEE 802.11 standard family, delivers ever-increasing raw data rates with each new standard. However, raw data rate does not reflect real-world performance for end users. This paper proposes an automated setup for conducting performance measurements of WLAN access points with respect to network performance metrics. The main objective is to create a simple baseline for benchmarking, so that the best among several devices can be identified. The setup measures data rate, RSSI and jitter in the WLAN uplink and downlink. It is a repeatable and reliable mechanism that can be extended to further scenarios and use cases. We also present and discuss preliminary numerical results. This is only the first step toward a fully automated setup in an anechoic chamber.
Title: Measurement setup for automatized baselining of WLAN network performance
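The abstract names the measured metrics without defining them. Jitter, for instance, is commonly computed as the RFC 3550 smoothed interarrival jitter; a sketch under that assumption, together with a trivial mean data rate:

```python
def interarrival_jitter(send_ts, recv_ts):
    """RFC 3550 style smoothed interarrival jitter (same units as the
    timestamps) from matched send/receive times of successive packets."""
    j = 0.0
    for i in range(1, len(send_ts)):
        # Transit-time difference of consecutive packets.
        d = abs((recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1]))
        j += (d - j) / 16.0          # exponential smoothing, gain 1/16
    return j

def throughput_bps(total_bytes, t_start, t_end):
    """Mean data rate in bits per second over the measurement window."""
    return 8.0 * total_bytes / (t_end - t_start)
```

Packets arriving with a constant offset but constant spacing yield zero jitter, which matches the intuition that a fixed propagation delay does not degrade streaming performance.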
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314239
C. Armeanu
Image processing of radiograms is a field that has developed very rapidly since the digitization of radiological imagery. Depending on the objects studied, different approaches have been developed. In image processing for cultural heritage investigations, the analysis can range from conversion of image types and classes, morphological filtering, deblurring and other image enhancement tools to image transforms or refinement of regions of interest. In the present work, historical artefacts that must not be opened are investigated by X-ray, and the images are then processed and enhanced. For the case studied, we also analyse the possibility of reconstructing in 3D, by an alternative method, an object of interest inside the studied object.
Title: X-ray image analysis for cultural heritage investigations
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314202
Yuanjin Zhang, Liam A. Comerford, M. Beer, I. Kougioumtzoglou
A compressive sensing (CS) based approach is applied in conjunction with an adaptive basis re-weighting procedure for multi-dimensional stochastic process power spectrum estimation. In particular, the problem of sampling gaps in stochastic process records, occurring for reasons such as sensor failures, data corruption, and bandwidth limitations, is addressed. Specifically, due to the fact that many stochastic process records such as wind, sea wave and earthquake excitations can be represented with relative sparsity in the frequency domain, a CS framework can be applied for power spectrum estimation. By relying on signal sparsity, and the assumption that multiple records are available upon which to produce a spectral estimate, it has been shown that a re-weighted CS approach succeeds in estimating power spectra with satisfactory accuracy. Of key importance in this paper is the extension from one-dimensional vector processes to a broader class of problems involving multidimensional stochastic fields. Numerical examples demonstrate the effectiveness of the approach when records are subjected to up to 75% missing data.
Title: Compressive sensing for power spectrum estimation of multi-dimensional processes under missing data
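As a toy illustration of the underlying idea (sparse spectrum, masked samples), one can run plain iterative soft thresholding; the paper's adaptive re-weighting would additionally scale each coefficient's threshold down where previous estimates show energy. This is a sketch of the unweighted step, not the authors' algorithm, and one-dimensional rather than multi-dimensional:

```python
import numpy as np

def ista_sparse_spectrum(y, mask, lam=0.05, n_iter=500):
    """Recover sparse orthonormal-Fourier coefficients x of a signal from
    observed samples y (zeros where mask == 0) via iterative soft
    thresholding on  min 0.5*||mask * ifft(x) - y||^2 + lam * ||x||_1."""
    n = len(y)
    x = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        r = mask * np.fft.ifft(x, norm="ortho") - y      # residual in time domain
        x = x - np.fft.fft(mask * r, norm="ortho")       # gradient step (step = 1)
        mag = np.abs(x)
        # Complex soft threshold; a re-weighted variant would use a
        # per-coefficient lam shrinking where |x| was large last round.
        x *= np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return x
```

The estimated power spectrum is then `|x|**2`; with roughly half the samples missing at random, a genuinely sparse spectrum is still recovered at the correct frequencies.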
Pub Date: 2015-11-02 | DOI: 10.1109/IWSSIP.2015.7314201
H. Mezlini, Rabaa Youssef, H. Bouhadoun, E. Budyn, J. Laredo, S. Sevestre, C. Chappard
Osteoarthritis (OA) is a joint disorder that causes pain, stiffness and decreased mobility. Knee OA presents the greatest morbidity. The main characteristic of OA is cartilage loss, which induces joint space (JS) narrowing. Usually, the progression of OA is monitored by measuring the minimum JS on 2D X-ray images. New dedicated systems based on cone beam computed tomography, providing sufficient image quality with favourable dose characteristics, are under development; with these new systems, it would be possible to follow 3D JS changes. High resolution peripheral computed tomography (HR-pQCT), usually used for assessing trabecular and cortical bone mineral density, was performed on specimen knees with an isotropic voxel size of 82 microns. We present here a new semi-automatic segmentation method to measure 3D local variations of the JS. The experiments were done on an HR-pQCT data set, and the results were extended to other computed tomography images with lower resolution and/or cone beam geometry.
Title: High resolution volume quantification of the knee joint space based on a semi-automatic segmentation of computed tomography images
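Once femur and tibia have been segmented as binary masks, the 3D local joint space can be measured with a Euclidean distance transform. A sketch (the authors' measurement procedure may differ), using the 82-micron isotropic voxel size quoted above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def joint_space_widths(femur, tibia, voxel_mm=0.082):
    """Distance in mm from every tibia voxel to the nearest femur voxel.
    `femur` and `tibia` are boolean 3-D masks; 0.082 mm matches the
    82-micron isotropic HR-pQCT voxels described above."""
    # EDT of the femur's complement gives, at each voxel, its distance
    # to the femur surface; sample that field on the tibia voxels.
    dist_to_femur = distance_transform_edt(~femur) * voxel_mm
    return dist_to_femur[tibia]
```

The minimum of the returned array is the classical minimum joint space, while the full array is a local width map supporting the 3D analysis the paper targets.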
Pub Date: 2015-10-30 | DOI: 10.1109/IWSSIP.2015.7313924
K. Ueki, Tetsunori Kobayashi
Recently, there has been considerable research into the application of deep learning to image recognition. Notably, deep convolutional neural networks (CNNs) have achieved excellent performance in a number of image classification tasks compared with conventional methods based on techniques such as Bag-of-Features (BoF) with local descriptors. In this paper, to cultivate a better understanding of the structure of CNNs, we focus on the characteristics of deep CNNs and adapt them to SIFT+BoF-based methods to improve classification accuracy. We introduce the multi-layer structure of CNNs into the classification pipeline of the BoF framework, and conduct experiments to confirm the effectiveness of this approach using a fine-grained visual categorization dataset. The results show that the average classification rate improves from 52.4% to 69.8%.
Title: Multi-layer feature extractions for image classification — Knowledge from deep CNNs