Comparative analysis between discrete cosine transform and wavelet transform techniques for medical image compression
Pub Date: 2015-12-10 | DOI: 10.1109/ICCVIA.2015.7351792
A. Funmilola, D. Olusayo, A. A. Michael
Image compression reduces irrelevance and redundancy in image data so that the data can be stored or transmitted efficiently. It minimizes the size in bytes of a graphics file without degrading image quality to an unacceptable level. The reduced file size allows more images to be stored in a given amount of disk or memory space, and it shortens the time required to send images over the Internet or download them from Web pages. Medical image compression plays a key role as hospitals move towards filmless, fully digital imaging. Compression allows Picture Archiving and Communication Systems (PACS) to reduce file sizes and storage requirements while maintaining relevant diagnostic information, and teleradiology sites benefit because smaller image files yield shorter transmission times. Even as the capacity of storage media continues to increase, the volume of uncompressed data produced by hospitals is expected to exceed that capacity and drive up costs. Improved compression performance can be achieved by exploiting clinically relevant regions as defined by physicians. This work compared the Discrete Cosine Transform (DCT) and Wavelet Transform (WT) compression techniques on medical images. The results showed compression ratios of 10:1 and 7:1 for DCT and WT, respectively. Mean differences from the original image of 77.84 (standard deviation 83.17) for DCT and 77.77 (standard deviation 83.23) for WT were recorded.
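The comparison described above can be sketched in a few lines of Python. This is not the authors' code: the test image, the fraction of coefficients kept, and the wavelet choice are all assumptions. Both transforms compress by zeroing small coefficients, and reconstruction error is summarized by the mean and standard deviation of the difference from the original.

```python
# Minimal sketch: compare DCT-based and wavelet-based compression of a
# grayscale image by discarding small coefficients, then measure the mean
# absolute difference from the original (as reported in the abstract).
import numpy as np
from scipy.fft import dctn, idctn   # orthonormal type-II DCT and its inverse
import pywt

def dct_compress(img, keep=0.1):
    """Zero all but the largest `keep` fraction of DCT coefficients."""
    coeffs = dctn(img, norm='ortho')
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm='ortho')

def wavelet_compress(img, keep=0.1, wavelet='db4', level=3):
    """Same idea with a multilevel 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                             output_format='wavedec2'), wavelet)
    return rec[:img.shape[0], :img.shape[1]]   # trim any padding

img = np.random.rand(256, 256)   # stand-in for a medical image
for name, rec in [('DCT', dct_compress(img)), ('WT', wavelet_compress(img))]:
    diff = np.abs(img - rec)
    print(f'{name}: mean diff {diff.mean():.4f}, std {diff.std():.4f}')
```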
{"title":"Comparative analysis between discrete cosine transform and wavelet transform techniques for medical image compression","authors":"A. Funmilola, D. Olusayo, A. A. Michael","doi":"10.1109/ICCVIA.2015.7351792","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351792","url":null,"abstract":"Image compression reduces irrelevant and redundancy of the image data in order to be able to store or transmits data in an efficient form. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages. Medical image compression plays a key role as hospitals move towards filmless imaging and completely digital. Image compression will allow Picture Archiving and Communication Systems (PACS) to reduce the file sizes on their storage requirements while maintaining relevant diagnostic information. Teleradiology sites benefit since reduced image file sizes yield reduced transmission times. Even as the capacity of storage media continues to increase, it is expected that the volume of uncompressed data produced by hospitals will exceed capacity and drive up costs. The improved compression performance will be accomplished by making use of clinically relevant regions as defined by physicians. This work compared Discrete Cosine Transform (DCT) compression technique and Wavelet Transform (WT) compression techniques for medical images. The result showed compression ratio of 10:1 and 7:1 for DCT and WT respectively. The mean difference of 77.84 with standard deviation of 83.17 and mean difference of 77.77 with standard deviation of 83.23 from the original image were recorded for DCT and WT compression technique.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"34 50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132862735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new framework for autonomic mobile cloud computing
Pub Date: 2015-12-10 | DOI: 10.1109/ICCVIA.2015.7351788
Aymen El Amroui, K. Sethom
Mobile cloud computing offers cloud services to resource-limited mobile devices, mainly to help them support the execution of energy- and memory-hungry applications. This must be done while maintaining adequate quality of service and response time. Network characteristics such as latency and the high power consumption of transmission can negatively affect cloud access, response time, and data transfer. This paper discusses the mobile cloud computing paradigm and its challenges. It presents the general functional blocks of a mobile cloud computing framework and describes our context-aware mobile cloud computing framework architecture.
{"title":"A new framework for autonomic mobile cloud computing","authors":"Aymen El Amroui, K. Sethom","doi":"10.1109/ICCVIA.2015.7351788","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351788","url":null,"abstract":"Mobile cloud computing consists in offering cloud services to limited resources mobile devices in order to help them to essentially support energy and memory hungry application execution. This should be done while maintaining adequate quality of service and response time. The network characteristics like network latency and the huge transmission power consumption may act negatively on cloud access, response time and data transfer. This paper raises the mobile cloud computing paradigm and its challenges. It aims to present the mobile cloud computing framework general functional blocs and to expose our context-aware mobile cloud computing framework architecture.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126662619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a cross-platform mobile eGovernment system for suppliers (A case study from UAE)
Pub Date: 2015-12-10 | DOI: 10.1109/ICCVIA.2015.7351882
M. Alloghani
Many terms have been used over the years for automated government services. One of them, electronic government, first emerged publicly in the early 1990s as developed and used by the US, while the term E-government rose to prominence in 1997. E-government, or e-governance, uses Information and Communication Technologies (ICT) to leverage the services rendered by the public sector. It is regarded as a rich resource that can give organizations a competitive edge if it is well managed and improved. The rapid growth of computing and ICT has encouraged governments to incorporate technological change into their policies and forward-looking strategic development processes. The UAE government has promoted e-government initiatives to make the business of governance more efficient, effective, qualitatively responsive, transparent, and accountable to society. Smartphones have now become an alternative to traditional desktop machines, making it feasible to browse and obtain services easily anytime and anywhere. However, these services may be offered on various operating systems, each requiring a specific platform to run on, which becomes a burden to end users. In this paper we discuss the design and implementation of a cross-platform mobile eGovernment system for suppliers.
{"title":"Development of a cross-platform mobile eGovernment system for suppliers (A case study from UAE)","authors":"M. Alloghani","doi":"10.1109/ICCVIA.2015.7351882","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351882","url":null,"abstract":"Many terms were used over the past on the automated government services and one of which is the electronic government that genuinely emerged to public in the early 1990s as developed and used by the US, however the E-government on the other hand found its way towards prominence in 1997. The e-government or e-governance uses its core Information and communication Technologies (ICT) to leverage services rendered by public sector. The e-government is looked upon as a very rich resource that can provide organizations with a competitive cutting edge value if it's well managed and improved. The rapid growth of computing and ICT had encouraged governments to encompass the technological changes and advances into their policies, forward looking and strategic development processes. The UAE government has promoted the e-government initiatives to improve system of governance in place to provide and make the business of governance more efficient, effective, qualitatively responsive, transparent and accountable to the society. Currently the Smartphone have become an alternative tool for traditional desktop machines that can also provide feasibility to browse and get services easily anytime and anywhere. However, those services might be offered on various operating systems which requires specific platform to run on which become a burden to end-users. In this research paper we discuss about the design and implementation of a cross-platform mobile eGovernment system for suppliers.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127119519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mono-camera person tracking based on template matching and covariance descriptor
Pub Date: 2015-12-10 | DOI: 10.1109/ICCVIA.2015.7351903
Y. Hassen, T. Ouni, W. Ayedi, M. Jallouli
This article presents a simple and efficient approach to person tracking in large-scale environments. The proposed approach is a point-matching tracking algorithm based on a covariance descriptor. Object tracking in general is a challenging problem: difficulties arise from abrupt object motion, changing appearance of the object and the scene, and partial or total occlusions. Tracking is usually performed in the context of higher-level applications that require the location and appearance of the object in every frame, and assumptions are typically made to constrain the tracking problem to a particular application. The ultimate purpose of this work is to propose an efficient tracking algorithm as a basis for real-time multi-shot re-identification. The approach is evaluated on standard datasets.
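As a rough illustration of the descriptor (the per-pixel feature set and the distance metric below are assumptions in the spirit of region covariance tracking, not taken from the paper), this sketch builds a covariance matrix from pixel features and compares two regions with the generalized-eigenvalue distance commonly used for covariance matrices:

```python
# Minimal sketch: region covariance descriptor over per-pixel features
# [x, y, I, |Ix|, |Iy|] and a Foerstner-style distance between descriptors.
import numpy as np
from scipy.linalg import eigh

def covariance_descriptor(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))        # image gradients
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)                             # 5x5 covariance matrix

def covariance_distance(c1, c2, eps=1e-6):
    """sqrt(sum of squared logs of generalized eigenvalues of (c1, c2))."""
    d = c1.shape[0]
    lam = eigh(c1 + eps * np.eye(d), c2 + eps * np.eye(d), eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Match a person template against a candidate region in the next frame.
template = np.random.rand(32, 16)
candidate = np.random.rand(32, 16)
print(covariance_distance(covariance_descriptor(template),
                          covariance_descriptor(candidate)))
```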
{"title":"Mono-camera person tracking based on template matching and covariance descriptor","authors":"Y. Hassen, T. Ouni, W. Ayedi, M. Jallouli","doi":"10.1109/ICCVIA.2015.7351903","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351903","url":null,"abstract":"This article presents a simple and efficient approach to persons tracking within large scale environment. The proposed approach is a point matching tracking algorithm based on a covariance descriptor. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of the object and the scene and partial and total occlusions. Tracking is usually performed in the context of higher-level applications that require the location and appearance of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. The ultimate purpose of the proposed approach is to propose an efficient tracking algorithm as a way for real time multi-shot re-identification. This approach is evaluated using standard datasets.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126923315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of optical correlator for face recognition applications
Pub Date: 2015-12-10 | DOI: 10.1109/ICCVIA.2015.7351885
F. Bouzidi, Emna Charfi, F. Ghozzi, A. Fakhfakh
The main purpose of this work is to exploit the benefits of the VanderLugt correlator (VLC), an optical correlation method, for face recognition applications. With this aim in mind, we compare the performance of a VLC correlator based on the fast Fourier transform (FFT) with one based on the optical Fourier transform (NO_FT) derived from Fraunhofer diffraction. Achieving this requires a numerical implementation of the optical FT. In this paper, we propose and validate an all-numerical implementation of a VLC correlator with optical FT. Tests were performed on the Pointing Head Pose Image Database (PHPID), considering faces with vertical and horizontal rotations. The receiver operating characteristic (ROC) curves show that the optical FT simulating Fraunhofer diffraction outperforms the FFT.
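A digital VLC reduces to a matched filter applied in the Fourier plane. The sketch below is an illustrative assumption using NumPy's FFT, not the paper's optical-FT simulation: it correlates a probe scene with a reference face and reads off the correlation peak, whose sharpness indicates a match.

```python
# Minimal sketch: FFT-based matched-filter correlation as in a simulated
# VanderLugt correlator. A sharp peak in the correlation plane indicates
# that the probe scene contains the reference face.
import numpy as np

def vlc_correlate(scene, reference):
    F_scene = np.fft.fft2(scene)
    F_ref = np.fft.fft2(reference, s=scene.shape)   # zero-pad to scene size
    corr = np.fft.ifft2(F_scene * np.conj(F_ref))   # matched filter
    return np.fft.fftshift(np.abs(corr))

scene = np.random.rand(128, 128)
peak_plane = vlc_correlate(scene, scene[32:96, 32:96])  # self-match test
print('peak-to-mean ratio:', peak_plane.max() / peak_plane.mean())
```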
{"title":"Implementation of optical correlator for face recognition applications","authors":"F. Bouzidi, Emna Charfi, F. Ghozzi, A. Fakhfakh","doi":"10.1109/ICCVIA.2015.7351885","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351885","url":null,"abstract":"The main purpose of this work is to put together the benefits of the optical correlator which is VunderLaugt (VLC) correlator method for face recognition applications. With this aim in mind, we compare the performances of VLC correlator based on the fast Fourier transform (FFT) with the optical Fourier Transform (NO_FT) based on Fraunhofer diffraction. To achieve this goal, numerical implementation of the optical FT is needed. In this paper, we suggest and validate an all-numerical implementation of a VLC correlator with optical FT. Different tests using the Pointing Head Pose Image Database (PHPID) and considering faces with vertical and horizontal rotations were performed. The receiving operating characteristics (ROC) curves show that the optical FT simulating the Fraunhofer diffraction leads to better performances than the FFT.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115137412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure transfer of medical images using hybrid encryption: Authentication, confidentiality, integrity
Pub Date: 2015-02-23 | DOI: 10.1109/ICCVIA.2015.7351789
Boukhatem Mohammed Belkaid, L. Mourad, Cherifi Mehdi, A. Soltane
Data security for end-to-end transmission is achieved by many different symmetric and asymmetric techniques for message confidentiality, message authentication, and key exchange using transport layer security. This paper presents a new encryption system for secure transmission of medical images. The hybrid encryption system is based on the AES and RSA algorithms: AES provides data confidentiality, RSA provides authentication, and integrity is ensured by the inherent correlation between adjacent pixels in the image. The system generates a unique password for every encryption session. Several parameters were used in the various tests of our analysis.
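For illustration, here is a minimal hybrid scheme of the kind described, sketched with the Python `cryptography` package. This is an assumption, not the authors' code; in particular, the paper's adjacent-pixel-correlation integrity check is replaced here by AES-GCM's built-in authentication tag. A fresh AES session key encrypts the image bytes, and RSA-OAEP wraps the key for the recipient.

```python
# Minimal sketch of AES+RSA hybrid encryption for an image payload.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def encrypt_image(image_bytes, public_key):
    session_key = AESGCM.generate_key(bit_length=256)  # unique per session
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, image_bytes, None)
    wrapped = public_key.encrypt(session_key, OAEP)    # RSA-wrapped AES key
    return wrapped, nonce, ciphertext

def decrypt_image(wrapped, nonce, ciphertext, private_key):
    session_key = private_key.decrypt(wrapped, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

blob = encrypt_image(b'...DICOM pixel data...', recipient_key.public_key())
assert decrypt_image(*blob, recipient_key) == b'...DICOM pixel data...'
```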
{"title":"Secure transfer of medical images using hybrid encryption: Authentication, confidentiality, integrity","authors":"Boukhatem Mohammed Belkaid, L. Mourad, Cherifi Mehdi, A. Soltane","doi":"10.1109/ICCVIA.2015.7351789","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351789","url":null,"abstract":"Data Security for end-end transmission is achieved by many different symmetric and asymmetric techniques for message confidentiality, message authentication and key exchange using transport layer security. This paper presents a new encryption system for secure medical images transmission. The hybrid encryption system is based on AES and RSA algorithms. AES is used for data confidentiality, the RSA is used for authentication and the integrity is assured by the basic function of correlation between adjacent pixels in the image. Our encryption system generates a unique password every new session of encryption. Several parameters were used for various tests of our analysis.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131677651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach for feature selection based on MapReduce for biomarker discovery
DOI: 10.1109/ICCVIA.2015.7351888
Ahlem Kourid, M. Batouche
Large-scale feature selection is one of the most important fields in the big data domain and can address real problems, such as those in bioinformatics, where huge amounts of data must be processed. The efficiency of existing feature selection algorithms degrades significantly, if they do not become totally inapplicable, when the data size exceeds hundreds of gigabytes, because most are designed for centralized computing architectures. Distributed computing techniques such as MapReduce can therefore be applied to handle very large data. Our approach scales an existing feature selection method, k-means clustering combined with Signal-to-Noise Ratio (SNR) ranking, together with an optimization technique, Binary Particle Swarm Optimization (BPSO). The proposed method has two stages. In the first stage, we use parallel k-means on MapReduce to cluster features and then apply an iterative MapReduce job that performs parallel SNR ranking within each cluster; the top-ranked feature of each cluster is selected, and the selected features are gathered into a new feature subset. In the second stage, this subset is fed to the proposed MapReduce-based BPSO, which produces the optimized feature subset. The method is implemented in a distributed environment, and its efficiency is illustrated on practical problems such as biomarker discovery.
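The first stage can be sketched on a single machine as follows. Scikit-learn's k-means stands in for the parallel MapReduce jobs, and the data shapes and cluster count are assumptions chosen for illustration:

```python
# Minimal sketch: cluster features with k-means, rank each cluster's
# features by signal-to-noise ratio against two-class labels, and keep
# the top-ranked feature per cluster.
import numpy as np
from sklearn.cluster import KMeans

def snr_scores(X, y):
    """SNR_i = |mu1_i - mu0_i| / (sd1_i + sd0_i) for each feature column i."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    s1, s0 = X[y == 1].std(0), X[y == 0].std(0)
    return np.abs(m1 - m0) / (s1 + s0 + 1e-12)

X = np.random.rand(100, 2000)                # samples x features (e.g. genes)
y = np.random.randint(0, 2, 100)             # two-class labels
labels = KMeans(n_clusters=50, n_init=10).fit_predict(X.T)  # cluster features
scores = snr_scores(X, y)
subset = [int(np.flatnonzero(labels == c)[np.argmax(scores[labels == c])])
          for c in range(50)]                # best feature from each cluster
print(subset[:10])                           # input to the BPSO stage
```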
{"title":"A novel approach for feature selection based on MapReduce for biomarker discovery","authors":"Ahlem Kourid, M. Batouche","doi":"10.1109/ICCVIA.2015.7351888","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351888","url":null,"abstract":"Scale feature selection is one of the most important fields in the big data domain that can solve real data problems, such as bioinformatics, when it is necessary to process huge amount of data. The efficiency of existing feature selection algorithms significantly downgrades, if not totally inapplicable, when data size exceeds hundreds of gigabytes, because most feature selection algorithms are designed for centralized computing architecture. For that distributed computing techniques, such as MapReduce can be applied to handle very large data. Our approach is to scale the existing method for feature selection, Kmeans clustering and Signal to Noise Ratio (SNR) combined with optimization technique as Binary Particle Swarm Optimization (BPSO). The proposed method is divided into two stages. In the first stage, we have used parallel Kmeans on MapReduce for clustering features, and then we have applied iterative MapReduce that implement parallel SNR ranking for each cluster, after we have selected the top ranked feature from each cluster. The top scored features from each cluster are gathered and a new feature subset is generated. In the second stage the new feature subset is used as input to the novel BPSO proposed based on MapReduce and optimized feature subset is being produced. The proposed method is implemented in a distributed environment, and its efficiency is illustrated through analyzing practical problems such as biomarker discovery.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122539184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of non linear system modeled in Reproducing Kernel Hilbert Space using a new criterion
DOI: 10.1109/iccvia.2015.7351900
N. Souilem, I. Elaissi, O. Taouali, M. Hassani
This paper proposes a new algorithm to estimate the required number of parameters of models developed in a Reproducing Kernel Hilbert Space (RKHS). The proposed method considers models of growing complexity and computes a matrix for each one, such that the sequence of matrices tends towards singularity. The required number of parameters is obtained by checking a criterion on the determinants of these matrices.
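One plausible reading of such a criterion (an assumption on our part, not the authors' algorithm) is to grow a kernel Gram matrix one candidate parameter at a time and stop when its determinant approaches zero, i.e. when the matrix nears singularity:

```python
# Minimal sketch: determinant criterion on Gram matrices of growing size.
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * (x - z) ** 2)

def estimate_num_parameters(samples, tol=1e-8):
    for m in range(2, len(samples) + 1):
        G = np.array([[rbf_kernel(a, b) for b in samples[:m]]
                      for a in samples[:m]])
        if abs(np.linalg.det(G)) < tol:      # Gram matrix nearly singular
            return m - 1                     # last well-conditioned size
    return len(samples)

x = np.sort(np.random.rand(30))              # 1-D input samples
print('required number of parameters:', estimate_num_parameters(x))
```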
{"title":"Identification of non linear system modeled in Reproducing Kernel Hilbert Space using a new criterion","authors":"N. Souilem, I. Elaissi, O. Taouali, M. Hassani","doi":"10.1109/iccvia.2015.7351900","DOIUrl":"https://doi.org/10.1109/iccvia.2015.7351900","url":null,"abstract":"This paper proposes a new algorithm to estimate the required number of parameters in the models developed in Reproducing Kernel Hilbert Space (RKHS). The proposed method considers models with growing complexities and calculates for each a given matrix, such that these matrices tend to singularity. The required number of parameters is given by verifying a criterion on the determinants of these matrices.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126097502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iris feature extraction using principally rotated complex wavelet filters (PR-CWF)
DOI: 10.1109/ICCVIA.2015.7351904
C. O. Ukpai, S. Dlay, W. L. Woo
Deriving effective iris features from the segmented iris image is a crucial step in an iris recognition system. In this paper we propose a new iris feature extraction method based on the Principal Texture Pattern (PTP) and the dual-tree complex wavelet transform (DT-CWT). We compute the principal direction (PD) of the iris texture using principal component analysis (PCA) and obtain its angle θ. Complex wavelet filters (CWFs) are then constructed and rotated in the direction θ of the PD, and also in the opposite direction -θ, to decompose the image into 12 sub-bands using the DT-CWT. Rotation-invariant and scale-invariant features are obtained by combining the LL and HL sub-bands at each level. Channel energies and standard deviations then form the feature representation of the iris, and an SVM is used for classification. Our experiments on the CASIA iris databases demonstrate the superiority of the proposed method over existing methods.
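The principal-direction step can be illustrated as follows. This is a sketch under assumed details: PCA is applied to per-pixel gradient vectors of a normalized (unwrapped) iris strip, and θ is taken as the angle of the leading eigenvector.

```python
# Minimal sketch: estimate the principal direction of iris texture via PCA
# on gradient vectors; the resulting angle orients the wavelet filters.
import numpy as np

def principal_texture_angle(img):
    gy, gx = np.gradient(img.astype(float))
    grads = np.stack([gx.ravel(), gy.ravel()])      # 2 x N gradient samples
    eigvals, eigvecs = np.linalg.eigh(np.cov(grads))
    v = eigvecs[:, np.argmax(eigvals)]              # leading principal axis
    return np.degrees(np.arctan2(v[1], v[0]))

iris = np.random.rand(64, 512)       # stand-in for a normalized iris strip
print('theta =', principal_texture_angle(iris))
```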
{"title":"Iris feature extraction using principally rotated complex wavelet filters (PR-CWF)","authors":"C. O. Ukpai, S. Dlay, W. L. Woo","doi":"10.1109/ICCVIA.2015.7351904","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351904","url":null,"abstract":"Deriving effective iris feature from the segmented iris image is a crucial step in iris recognition system. In this paper we propose a new iris feature extraction method based on the Principal Texture Pattern (PTP) and dual tree complex wavelet transform (DT-CWT). We compute the principal direction (PD) of the iris texture using principal component analysis (PCA) and obtain the angle θ of the PD. Then, complex wavelet filters CWFs are constructed and rotated in the direction θ of the PD and also in the opposite direction - θ for decomposition of the image into 12 sub-bands using DT-CWT. Rotational invariant and scale invariant features are obtained by combining LL and HL sub-bands at each level. Consequently, channel energies and standard deviations are constructed as feature representation of the iris while SVM is used for classification of iris images. Our experiments demonstrate the superiority of the proposed method on CASIA iris databases, over existing methods.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123273588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morphological operators on graph based on geodesic distance map
DOI: 10.1109/ICCVIA.2015.7351899
Imane Youkana, R. Saouli, J. Cousty, M. Akil
In this article, we are interested in the graph-based mathematical morphology operators (dilations, erosions, openings, closings, alternating filters) defined in [1] [2]. These operators depend on a size parameter and, as is often the case in mathematical morphology, are obtained by iterated successions of elementary dilations/erosions. Since the number of iterations of the elementary operators depends directly on the size parameter, running times grow with it. To optimize this computation time, we propose an algorithmic variant based on computing geodesic distance maps in graphs. The computed distance map allows us to determine, by thresholding and for any value of the size parameter, the dilations and erosions that map a set of vertices to a set of edges and a set of edges to a set of vertices. The proposed algorithm computes the operators in a single (linear-time) pass, so the processing time improves over the multi-iteration original method and no longer depends on the size parameter.
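The core idea can be sketched on an ordinary adjacency-list graph. This is a simplified stand-in (plain BFS on vertices, rather than the paper's alternation between vertex and edge sets): compute the geodesic distance map from the seed set once, then obtain a dilation of any size k by a single threshold at k instead of k elementary dilations.

```python
# Minimal sketch: geodesic distance map by BFS, then dilation by threshold.
from collections import deque

def distance_map(adjacency, seeds):
    dist = {v: float('inf') for v in adjacency}
    queue = deque()
    for s in seeds:
        dist[s] = 0
        queue.append(s)
    while queue:                      # breadth-first search, unit edge weights
        v = queue.popleft()
        for w in adjacency[v]:
            if dist[w] == float('inf'):
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path graph
dmap = distance_map(adjacency, seeds={0})
dilation_of_size_2 = {v for v, d in dmap.items() if d <= 2}    # one threshold
print(sorted(dilation_of_size_2))
```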
{"title":"Morphological operators on graph based on geodesic distance map","authors":"Imane Youkana, R. Saouli, J. Cousty, M. Akil","doi":"10.1109/ICCVIA.2015.7351899","DOIUrl":"https://doi.org/10.1109/ICCVIA.2015.7351899","url":null,"abstract":"In this article, we are interested in the graph-based mathematical morphology operators (dilations, erosions, openings, closings, alternated filters) defined in [1] [2]. These operators depend on a size parameter and, as often in mathematical morphology; they are obtained by iterative successions of elementary dilations/erosions. The number of iterations of the elementary operators depends directly of the parameter size. Thus, this leads to running times that increase with respect to the parameter size. In order to optimize this computation time, we propose another algorithmic variant that is based on the computation of geodesic distance maps in graphs. The computed distance map allows us to determine (by thresholding), for any value of the parameter size, dilations and erosions that map a set of vertices to a set of edges and a set of edges to a set of vertices. The proposed algorithm allows the operators to be computed with a single (linear-time) iteration. Therefore, the processing time is improved compared to the time of the multi-iterations original method and does not depend of the parameter size anymore.","PeriodicalId":419122,"journal":{"name":"International Conference on Computer Vision and Image Analysis Applications","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124684865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}