Centerline-Radius Polygonal-Mesh Modeling of Bifurcated Blood Vessels in 3D Images using Conformal Mapping
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563388
C. Vinhais, M. Kociński, A. Materka
Accurate modeling of the human vascular tree from 3D computed tomography (CTA) or magnetic resonance (MRA) angiograms is required for visualization, diagnosis of vascular diseases, and computational fluid dynamics (CFD) blood flow simulations. This work describes an automated algorithm for constructing the polygonal mesh of blood vessels from such images. Each vascular segment is modeled as a tubular object, and a thin plate spline transform is used to generate the corresponding surface from its centerline-radius representation. A novel approach for generating the polygonal mesh of bifurcating vessels based on conformal mapping is presented, together with a mathematical description of the methodology. The model is refined by computing local intensity features with subvoxel accuracy, which slightly deform the mesh of the vascular tree for fine-tuning. The proposed algorithm was successfully tested on a 3D synthetic image containing randomly generated vascular branches. Experimental results, confirmed on real-world Time-of-Flight MRA data, demonstrate that our methodology is consistent and capable of generating high-quality triangulated meshes of vascular trees, suitable for further CFD simulations. Compared to common techniques, conformal mapping proved to be a simple and effective mathematical approach for polygonal mesh modeling of bifurcating vessels.
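For readers unfamiliar with the centerline-radius representation, the sketch below shows the basic idea for a single, non-bifurcating segment: sweep a circular cross-section along the centerline and stitch consecutive rings into triangles. It is only an illustrative approximation; the function name and frame construction are ours, and it does not implement the authors' thin-plate-spline surface generation or the conformal-mapping treatment of bifurcations.

```python
import numpy as np

def tube_mesh(centerline, radii, n_theta=16):
    """Triangulate one vessel segment from its centerline-radius representation.
    centerline: (N, 3) array of points, radii: (N,) array of local radii."""
    centerline = np.asarray(centerline, dtype=float)
    radii = np.asarray(radii, dtype=float)
    n = len(centerline)
    # Tangents by finite differences along the centerline.
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    vertices = []
    for p, t, r in zip(centerline, tangents, radii):
        # Build an arbitrary orthonormal frame (u, v) perpendicular to the tangent t.
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        for k in range(n_theta):
            phi = 2.0 * np.pi * k / n_theta
            vertices.append(p + r * (np.cos(phi) * u + np.sin(phi) * v))
    # Connect consecutive rings with two triangles per quad.
    faces = []
    for i in range(n - 1):
        for k in range(n_theta):
            a0 = i * n_theta + k
            a1 = i * n_theta + (k + 1) % n_theta
            b0, b1 = a0 + n_theta, a1 + n_theta
            faces.append((a0, a1, b1))
            faces.append((a0, b1, b0))
    return np.array(vertices), np.array(faces)
```

Note that the per-ring frame above is not rotation-minimizing, so long twisting segments would need a smoother frame; the abstract's thin-plate-spline and conformal-mapping steps address surface quality in a more principled way.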
{"title":"Centerline-Radius Polygonal-Mesh Modeling of Bifurcated Blood Vessels in 3D Images using Conformal Mapping","authors":"C. Vinhais, M. Kociński, A. Materka","doi":"10.23919/SPA.2018.8563388","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563388","url":null,"abstract":"Accurate modeling of the human vascular tree from 3D computed tomography (CTA) or magnetic resonance (MRA) angiograms is required for visualization, diagnosis of vascular diseases, and computational fluid dynamic (CFD) blood flow simulations. This work describes an automated algorithm for constructing the polygonal mesh of blood vessels from such images. Each vascular segment is modeled as a tubular object, and a thin plate spline transform is used to generate the corresponding surface from its centerline-radius representation. A novel approach for generating the polygonal mesh of bifurcating vessels based on conformal mapping is presented. A mathematical description of the methodology is also provided. The model is improved by computing local intensity features with subvoxel accuracy, to slightly deform the mesh of the vascular tree for fine-tuning. The proposed algorithm was successfully tested on a 3D synthetic image containing randomly generated vascular branches. Experiment results, confirmed by real-world Time of Flight MRA, demonstrate that our methodology is consistent and capable of generating high quality triangulated meshes of vascular trees, suitable for further CFD simulations. Compared to common techniques, conformal mapping proved to be a simple and effective mathematical approach for polygonal mesh modeling of bifurcating vessels.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114152817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods of Enriching The Flow of Information in The Real-Time Semantic Segmentation Using Deep Neural Networks
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563422
J. Bednarek, K. Piaskowski, Michał Bednarek
Semantic segmentation is one of the visual tasks that has gained a significant boost in performance in recent years due to the popularization of convolutional neural networks (CNNs). In this paper, we address the problem of information loss caused by resizing input images during the training of neural models. Our downsampling and upsampling method can be easily injected into current autoencoder models. We show that, without any significant changes to the model architecture, it is possible to noticeably improve the IoU metric. On the popular Cityscapes benchmark, our model achieves an almost 2.5% improvement in segmentation accuracy compared to the widely known ERF model. Additionally, to demonstrate suitability for real-time use, we run our network on a GPU comparable to the NVIDIA Jetson TX2, which allows us to deploy our algorithm in autonomous vehicles.
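As a point of reference for the reported gains, the snippet below computes the standard mean intersection-over-union (IoU) metric for integer label maps; it is the generic definition of the metric, not the authors' code or architecture.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for semantic segmentation.
    pred, target: integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```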
{"title":"Methods of Enriching The Flow of Information in The Real-Time Semantic Segmentation Using Deep Neural Networks","authors":"J. Bednarek, K. Piaskowski, Michał Bednarek","doi":"10.23919/SPA.2018.8563422","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563422","url":null,"abstract":"Semantic Segmentation is one of the visual tasks that gained the significant boost in performance in recent years due to the popularization of Convolutional Neural Networks (CNNs). In this paper, we addressed the problem of losing information while changing the size of input images during training neural models. Moreover, our method of downsampling and upsampling could be easily injected into current autoencoder models. We show that without any significant changes in a model architecture it is possible to noticeably improve IoU metric. On popular Cityscapes benchmark, our model is achieving almost 2.5% boost in the accuracy of segmentation in comparison to the widely known ERF model. Additionally, to the ability to real-time usages, we run our network on GPU comparable to NVIDIA Jetson Tx2, what let us implement our algorithm in autonomous vehicles.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126772052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature matching and ArUco markers application in mobile eye tracking studies
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563387
Adam Bykowski, Szymon Kupiński
This paper presents techniques for automating the analysis of data from eye-tracking glasses using image processing. Two separate techniques are described. The first method automates the mapping of the point-of-regard onto a static reference image using the AKAZE feature matching algorithm. The second method utilizes ArUco markers to map the point-of-regard onto a screencast from a mobile device. The described methods are used to aggregate experimental statistical data for further analysis and presentation in forms such as heatmaps or gaze plots. The algorithms are implemented in Python 3.6 with the OpenCV library.
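A minimal sketch of the first technique, as we understand it from the abstract: match AKAZE features between the scene-camera frame and the reference image, estimate a homography, and transfer the point-of-regard. Function and variable names are illustrative; the OpenCV calls follow recent cv2 versions, and the exact parameters used by the authors are not known.

```python
import cv2
import numpy as np

def map_gaze_to_reference(scene_frame, reference_img, gaze_xy):
    """Map a point-of-regard from an eye-tracker scene frame onto a static
    reference image via AKAZE feature matching and a RANSAC homography."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(scene_frame, None)
    kp2, des2 = akaze.detectAndCompute(reference_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # AKAZE descriptors are binary
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep distinctive matches only.
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    if len(good) < 4:
        return None                             # a homography needs at least 4 matches
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    pt = np.float32([[gaze_xy]])                # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(pt, H)[0, 0]
```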
{"title":"Feature matching and ArUco markers application in mobile eye tracking studies","authors":"Adam Bykowski, Szymon Kupiński","doi":"10.23919/SPA.2018.8563387","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563387","url":null,"abstract":"This paper presents eye tracking glasses data analysis automation techniques, utilizing image processing. Two separate techniques will be described. One method is used to automate mapping of point-of-regard to a static reference image using a feature matching algorithm AKAZE. The second method utilizes ArUco markers for mapping of point-of-regard to a screencast from a mobile device. The described methods are used to aggregate experiment statistical data for future analysis and presentation in forms like heatmaps or gaze plots. Algorithms are implemented in Python 3.6 and OpenCV library.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127179894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical properties of signals approximated by orthogonal polynomials and Schur parametrization
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563386
Wladyslaw Magiera, Urszula Libal
In this paper, we investigate the reconstruction of the statistical properties of signals approximated in various orthogonal bases. The approximation is performed in various polynomial bases and by the Schur parametrization algorithm. To compare the quality of the remodeled signals in different bases, we use a mean square error criterion for the power spectral density. The correlation function, and the power spectral density derived from it, is sufficient to describe the statistical properties of a signal. Numerical experiments were performed on benchmark signals, with tests executed for different polynomial degrees and different orders of Schur innovation filtering. Our purpose was to determine which parametrization method requires fewer parameters.
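To make the comparison criterion concrete, the sketch below computes a periodogram-based power spectral density and the mean square error between the PSDs of an original and a remodeled signal. It assumes a simple periodogram estimator; the authors' exact PSD estimation and normalization may differ.

```python
import numpy as np

def psd(x):
    """Periodogram estimate of the one-sided power spectral density."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x - x.mean())
    return (np.abs(X) ** 2) / len(x)

def psd_mse(original, remodeled):
    """Mean square error between the PSDs of an original and a remodeled signal,
    i.e. the kind of criterion used here to compare parametrizations."""
    p1, p2 = psd(original), psd(remodeled)
    return float(np.mean((p1 - p2) ** 2))
```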
{"title":"Statistical properties of signals approximated by orthogonal polynomials and Schur parametrization","authors":"Wladyslaw Magiera, Urszula Libal","doi":"10.23919/SPA.2018.8563386","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563386","url":null,"abstract":"In the paper, we investigate reconstruction of statistical properties of signals approximated in various orthogonal bases. The approximation of signals is performed in various polynomial bases and by Schur parametrization algorithm. To compare quality of remodeled signals in different bases, we use mean square error criterion for power spectral density. The correlation function, and the derived from it power spectral density, is sufficient to describe signal statistical properties. The numerical experiments were performed using benchmark signals. The tests were executed for different polynomial degrees and different orders of Schur innovation filtering. Our purpose was to find which patrametrization method requires less parameters.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121660803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object Detection utilizing Modified Auto Encoder and Convolutional Neural Networks
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563423
Jalil Nourmohammadi-Khiarak, S. Mazaheri, R. M. Tayebi, Hamid Noorbakhsh-Devlagh
Deep learning models, which combine multiple non-linear data transformations, are widely used in object detection. The objective is to obtain brief and concise feature representations. Due to the high volume of data to be processed, object detection in videos faces major challenges, such as heavy computation. To increase object detection precision in videos, a hybrid method is proposed in this paper. Modifications are applied to autoencoder neural networks for compact and discriminative learning of object features. For object classification, the extracted features are transferred to a convolutional neural network and, after convolution with the input images, classified. The proposed method has two main advantages over other unsupervised feature learning techniques. First, as will be shown, features are detected with much higher precision. Second, the outcome of the proposed method is compact and unnecessary information is removed, whereas existing unsupervised feature learning models mainly learn repeated and redundant feature information. Experimental evaluation shows that the precision of feature detection improves by 1.5% on average compared with state-of-the-art methods.
{"title":"Object Detection utilizing Modified Auto Encoder and Convolutional Neural Networks","authors":"Jalil Nourmohammadi-Khiarak, S. Mazaheri, R. M. Tayebi, Hamid Noorbakhsh-Devlagh","doi":"10.23919/SPA.2018.8563423","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563423","url":null,"abstract":"Deep learning models are widely used in object detection area, including combination of multiple non-linear data transformations. The objective is receiving brief and concise information for feature representations. Due to the high volume of processing data, object detection in videos has been faced with big challenges, such as mass calculation. To increase the object detection precision in videos, a hybrid method is proposed, in this paper. Some modifications are applied to auto encoder neural networks, for the compact and discriminative learning of object features. Furthermore, for object classification, firstly extracted features are transferred to a convolutional neural network, and after feature convolution with input pictures, they will be classified. The proposed method has two main advantages over other unsupervised feature learning techniques. Firstly, as it will be shown, features are detected with a much higher precision. Secondly, in the proposed method, the outcome is compact and additional unnecessary information is removed; while the existing unsupervised feature learning models mainly learn repeated and redundant information of the features. Experimental evaluation shows that precision of feature detection improved by 1.5% in average in compare with the state-of-the-art methods.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128283071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of adaptive Golomb codes for lossless audio compression
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563385
C. Wernik, G. Ulacha
In this paper, the advantages of the Golomb code family are presented using audio signal coding as an example: low computational complexity, high efficiency, and the flexibility to adapt to local changes in the probability distribution of the coded data. The effectiveness of the Golomb coder with forward adaptation is compared with three versions of the reverse-adaptation coder. Attempts are also made to address the incomplete fit of the distribution of the encoded audio data to a one-sided geometric distribution, for which the Golomb code is optimal.
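For context, the snippet below encodes a single non-negative integer with a plain (non-adaptive) Golomb code: a unary quotient followed by a truncated-binary remainder. It illustrates the code family only; the paper's forward- and reverse-adaptation schemes for choosing the parameter m are not reproduced here. In lossless audio coding, signed prediction residuals are typically zig-zag mapped to non-negative integers before such coding.

```python
def golomb_encode(n, m):
    """Golomb code of a non-negative integer n with parameter m >= 1:
    unary quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                    # unary part: q ones and a terminating zero
    if m == 1:
        return bits                         # m = 1 degenerates to plain unary
    b = m.bit_length() if m & (m - 1) else m.bit_length() - 1  # ceil(log2 m)
    cutoff = (1 << b) - m                   # remainders below cutoff use b-1 bits
    if r < cutoff:
        bits += format(r, "b").zfill(b - 1)
    else:
        bits += format(r + cutoff, "b").zfill(b)
    return bits

# Example: golomb_encode(9, 4) -> "11" + "0" + "01" (quotient 2, remainder 1).
```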
{"title":"Application of adaptive Golomb codes for lossless audio compression","authors":"C. Wernik, G. Ulacha","doi":"10.23919/SPA.2018.8563385","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563385","url":null,"abstract":"In this paper the advantages of the Golomb codes family on example of audio signals coding are presented. Such as low computational complexity, high efficiency and flexibility to adapt to local changes in the probability distribution characteristics of coded data. The effectiveness of the Golomb coder with forward adaptation with three versions of the reverse adaptation coder has been compared. Also, attempts have been made to solve the problem of incomplete adaptation to the distribution encoded audio data to a one-sided geometric distribution, for which the Golomb code is the optimal code.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130487966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of fuzzy cognitive maps in evaluation of prognosis of chronic heart failure patients
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563352
Lukasz Kubus, A. Yastrebov, Katarzyna Poczeta, M. Poterała, L. Gromadziński
A fuzzy cognitive map (FCM) is an effective tool for modeling decision support systems. It describes the analyzed problem in the form of key concepts and the causal connections between them. The aim of this paper is to use fuzzy cognitive maps in the evaluation of prognosis for patients with chronic heart failure. The developed evolutionary algorithm for fuzzy cognitive map learning, together with medical data of consecutive chronic heart failure patients, was used to select the most significant concepts and determine the relationships between them.
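For readers new to FCMs, the sketch below shows one commonly used inference rule: each concept aggregates the weighted activations of its causes and is squashed by a sigmoid. The weight matrix and concept values are illustrative only and do not come from the paper's medical data or its evolutionary learning algorithm.

```python
import numpy as np

def fcm_step(activations, weights, lam=1.0):
    """One inference step of a fuzzy cognitive map (a common formulation):
    weights[j, i] is the causal influence of concept j on concept i."""
    net = activations + activations @ weights   # include each concept's previous value
    return 1.0 / (1.0 + np.exp(-lam * net))     # sigmoid squashing to (0, 1)

# Example: iterate a 3-concept map toward a fixed point (illustrative values only).
W = np.array([[0.0, 0.6, 0.0],
              [0.0, 0.0, -0.4],
              [0.3, 0.0, 0.0]])
A = np.array([0.5, 0.2, 0.8])
for _ in range(20):
    A = fcm_step(A, W)
```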
{"title":"The use of fuzzy cognitive maps in evaluation of prognosis of chronic heart failure patients","authors":"Lukasz Kubus, A. Yastrebov, Katarzyna Poczeta, M. Poterała, L. Gromadziński","doi":"10.23919/SPA.2018.8563352","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563352","url":null,"abstract":"Fuzzy cognitive map (FCM) is an effective tool for modeling decision support systems. It describes the analyzed problem in the form of key concepts and causal connections between them. The aim of this paper is to use the fuzzy cognitive map in evaluation of prognosis for patients with chronic heart failure. The developed evolutionary algorithm for fuzzy cognitive maps learning and medical data of consecutive chronic heart failure patients were used to select the most significant concepts and determine the relationships between them.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133319142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio-visual aspect of the Lombard effect and comparison with recordings depicting emotional states
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563395
Szymon Zaporowski, Joanna Gołębiewska, B. Kostek, Julia Piltz
In this paper, an analysis of audio-visual recordings of the Lombard effect is presented. First, the audio signal is analyzed, indicating the presence of this phenomenon in the recorded sessions. The principal aim, however, was to discuss problems related to extracting the differences caused by the Lombard effect that are present in the video, i.e. visible as tension and activity of the facial muscles accompanying an increase in the intensity of the articulated speech signal. A database of recordings depicting emotional states, available on the internet, was also analyzed in order to find visual similarities between the Lombard effect and the sentiment contained in speech. The presented results are discussed and further plans are outlined.
{"title":"Audio-visual aspect of the Lombard effect and comparison with recordings depicting emotional states","authors":"Szymon Zaporowski, Joanna Gołębiewska, B. Kostek, Julia Piltz","doi":"10.23919/SPA.2018.8563395","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563395","url":null,"abstract":"In this paper an analysis of audio-visual recordings of the Lombard effect is shown. First, audio signal is analyzed indicating the presence of this phenomenon in the recorded sessions. The principal aim, however, was to discuss problems related to extracting differences caused by the Lombard effect, present in the video, i.e. visible as tension and work of facial muscles aligned to an increase in the intensity of the articulated speech signal. Also the database of recordings, available on the internet, depicting emotional states was analyzed in order to compare and find a visual similarity between the Lombard effect and sentiment contained in speech. The results presented are discussed and further plans are depicted.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134179607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DAB+ Coverage Analysis: a New Look at Network Planning using GIS Tools
Pub Date: 2018-09-01 | DOI: 10.23919/SPA.2018.8563396
M. Kulawiak, Przemysław Falkowski-Gilski, M. Kulawiak
For many years, the design of transmitter networks optimized for best signal coverage has been a subject of intense research. In the last decade, numerous researchers and institutions have used GIS and spatial analysis tools for network planning, especially transmitter placement. Currently, many existing systems operate in a strictly two-dimensional manner, without taking into account the three-dimensional nature of the analyzed landscape. Moreover, systems that utilize a digital terrain model in their analyses resort to a basic viewshed model. Due to the recent adoption of mobile LiDAR scanners, the amount of three-dimensional terrain information, including the location and height of natural and man-made objects, is growing rapidly. In this paper, we propose a novel web-based GIS tool for three-dimensional simulation and analysis of radio signal coverage, dedicated to DAB+ terrestrial broadcasting network planning.
{"title":"DAB+ Coverage Analysis: a New Look at Network Planning using GIS Tools","authors":"M. Kulawiak, Przemysław Falkowski-Gilski, M. Kulawiak","doi":"10.23919/SPA.2018.8563396","DOIUrl":"https://doi.org/10.23919/SPA.2018.8563396","url":null,"abstract":"For many years, the matter of designing a transmitter network, optimized for best signal coverage, has been a subject of intense research. In the last decade, numerous researchers and institutions used GIS and spatial analysis tools for network planning, especially transmitter location. Currently, many existing systems operate in a strictly two-dimensional manner, not taking into account the three-dimensional nature of the analyzed landscape. Moreover, systems that utilize a digital terrain model in their analyzes also resort to a basic viewshed model. Due to recent adoption of mobile LiDAR scanners, the number of three-dimensional terrain information, including location and height of natural and man-made objects, is growing rapidly. In this paper, we propose a novel web-based GIS tool for three-dimensional simulation and analysis of radio signal coverage, dedicated to DAB+ terrestrial broadcasting network planning.","PeriodicalId":265587,"journal":{"name":"2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127254180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}