Vehicle Color Identification Framework using Pixel-level Color Estimation from Segmentation Masks of Car Parts
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052969
Klearchos Stavrothanasopoulos, Konstantinos Gkountakos, K. Ioannidis, T. Tsikrika, S. Vrochidis, Y. Kompatsiaris
Color is one of the most significant and dominant cues for various applications. As one of the most noticeable and stable attributes of vehicles, color can constitute a valuable key component in several practices of intelligent surveillance systems. In this paper, we propose a deep-learning-based framework that combines semantic segmentation masks with pixel clustering for automatic vehicle color recognition. Unlike conventional methods, which usually consider only the features of the vehicle's front side, the proposed algorithm is capable of view-independent color identification, which is more effective for surveillance tasks. To the best of our knowledge, this is the first work that employs semantic segmentation masks along with color clustering for the extraction of the vehicle's color-representative parts and the recognition of the dominant color, respectively. To evaluate the performance of the proposed method, we introduce a challenging multi-view dataset of 500 car-related RGB images, extending the publicly available DSMLR Car Parts dataset for vehicle parts segmentation. The experiments demonstrate that the proposed approach achieves excellent performance and accurate results, reaching an accuracy of 93.06% in the multi-view scenario. To facilitate further research, the evaluation dataset and the pre-trained models will be released at https://github.com/klearchos-stav/vehicle_color_recognition.
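For illustration, a minimal sketch of the clustering stage described above, assuming a car-parts segmentation mask is already available (the segmentation network itself is not reproduced here, and the helper name and cluster count are illustrative):

```python
# Cluster the pixels inside the car-part mask and report the dominant
# cluster's centroid as the vehicle color. Hypothetical helper, not the
# authors' implementation.
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(image: np.ndarray, mask: np.ndarray, k: int = 4) -> np.ndarray:
    """image: HxWx3 RGB array; mask: HxW boolean mask of color-representative parts."""
    pixels = image[mask].astype(np.float32)          # (N, 3) masked pixels
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    largest = np.bincount(km.labels_).argmax()       # most populated cluster
    return km.cluster_centers_[largest].astype(np.uint8)

# Synthetic check: a mostly-red region with a dark shadow band.
img = np.zeros((64, 64, 3), np.uint8)
img[..., 0] = 200
img[:8] = 30                                         # shadow pixels
print(dominant_color(img, np.ones((64, 64), bool), k=2))   # ~[200, 0, 0]
```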
{"title":"Vehicle Color Identification Framework using Pixel-level Color Estimation from Segmentation Masks of Car Parts","authors":"Klearchos Stavrothanasopoulos, Konstantinos Gkountakos, K. Ioannidis, T. Tsikrika, S. Vrochidis, Y. Kompatsiaris","doi":"10.1109/IPAS55744.2022.10052969","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052969","url":null,"abstract":"Color comprises one of the most significant and dominant cues for various applications. As one of the most noticeable and stable attributes of vehicles, color can constitute a valuable key component in several practices of intelligent surveillance systems. In this paper, we propose a deep-learning-based framework that combines semantic segmentation masks with pixels clustering for automatic vehicle color recognition. Different from conventional methods, which usually consider only the features of the vehicle's front side, the proposed algorithm is able for view-independent color identification, which is more effective for the surveillance tasks. To the best of our knowledge, this is the first work that employs semantic segmentation masks along with color clustering for the extraction of the vehicle's color representative parts and the recognition of the dominant color, respectively. To evaluate the performance of the proposed method, we introduce a challenging multi-view dataset of 500 car-related RGB images extending the publicly available DSMLR Car Parts dataset for vehicle parts segmentation. The experiments demonstrate that the proposed approach achieves excellent performance and accurate results reaching an accuracy of 93.06% in the multi-view scenario. To facilitate further research, the evaluation dataset and the pre-trained models will be released at https://github.com/klearchos-stav/vehicle_color_recognition.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127958902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving 3D Point Cloud Reconstruction with Dynamic Tree-Structured Capsules
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052906
Chris Engelhardt, Jakob Mittelberger, David Peer, Sebastian Stabinger, A. Rodríguez-Sánchez
When applied to 3D point cloud reconstruction, convolutional neural networks do not seem able to learn meaningful 2D manifold embeddings, suffer from a lack of explainability, and are vulnerable to adversarial attacks [20]. Except for the latter, these shortcomings can be overcome with capsule networks. In this work, we introduce an auto-encoder based on dynamic tree-structured capsule networks for sparse 3D point clouds with SDA-routing. Our approach preserves the spatial arrangement of the input data and increases adversarial robustness without introducing additional computational overhead. Our experimental evaluation shows that our architecture outperforms the current state-of-the-art capsule- and CNN-based networks.
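The capsule architecture itself is too involved to reproduce here, but reconstruction quality for point-cloud auto-encoders like this one is conventionally measured with the symmetric Chamfer distance; a minimal sketch:

```python
# Symmetric Chamfer distance between two point clouds: for each point, the
# distance to its nearest neighbor in the other cloud, averaged both ways.
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 3) and b: (M, 3) point clouds."""
    d = torch.cdist(a, b)                            # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

a = torch.rand(1024, 3)
print(chamfer_distance(a, a + 0.01 * torch.randn_like(a)))   # small, near 0
```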
{"title":"Improving 3D Point Cloud Reconstruction with Dynamic Tree-Structured Capsules","authors":"Chris Engelhardt, Jakob Mittelberger, David Peer, Sebastian Stabinger, A. Rodríguez-Sánchez","doi":"10.1109/IPAS55744.2022.10052906","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052906","url":null,"abstract":"When applying convolutional neural networks to 3D point cloud reconstruction, these do not seem to be able to learn meaningful 2D manifold embeddings, suffer a lack of explainability and are vulnerable to adversarial attacks [20]. Except for the latter, these shortcomings can be overcome with capsule networks. In this work we introduce an auto-encoder based on dynamic tree-structured capsule networks for sparse 3D point clouds with SDA-routing. Our approach preserves the spatial arrangements of the input data and increases the adversarial robustness without introducing additional computational overhead. Our experimental evaluation shows that our architecture outperforms the current state-of-the-art capsule and CNN-based networks.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127673960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex Network for Complex Problems: A comparative study of CNN and Complex-valued CNN
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10053060
S. Chatterjee, Pavan Tummala, O. Speck, A. Nürnberger
Neural networks, especially convolutional neural networks (CNNs), are among the most commonly used tools in computer vision today. Most of these networks work with real-valued data using real-valued features. Complex-valued convolutional neural networks (CV-CNNs) can preserve the algebraic structure of complex-valued input data and have the potential to learn more complex relationships between the input and the ground truth. Although some comparisons of CNNs and CV-CNNs for different tasks have been performed in the past, a large-scale investigation comparing different models operating on different tasks has not been conducted. Furthermore, because complex features contain both real and imaginary components, CV-CNNs have twice as many trainable parameters as real-valued CNNs of the same architecture. Whether the performance improvements observed with CV-CNNs in the past are due to the complex features or merely to the doubled number of trainable parameters has not yet been explored. This paper presents a comparative study of CNN, CNNx2 (a CNN with double the number of trainable parameters of the CNN), and CV-CNN. The experiments were performed using seven models for two different tasks: brain tumour classification and segmentation in brain MRIs. The results reveal that the CV-CNN models outperformed the CNN and CNNx2 models.
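A complex-valued convolution is commonly realised with two real-valued convolutions via (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r), which also makes the doubled-parameter argument above concrete; a minimal PyTorch sketch (not the paper's models):

```python
# A complex conv layer built from two real Conv2d layers: it holds exactly
# twice the weights of a single real Conv2d, which is what the CNNx2
# baseline controls for.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x_r: torch.Tensor, x_i: torch.Tensor):
        real = self.conv_r(x_r) - self.conv_i(x_i)   # real part of the product
        imag = self.conv_r(x_i) + self.conv_i(x_r)   # imaginary part
        return real, imag

xr, xi = torch.randn(1, 1, 8, 8), torch.randn(1, 1, 8, 8)
print([t.shape for t in ComplexConv2d(1, 4)(xr, xi)])
```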
{"title":"Complex Network for Complex Problems: A comparative study of CNN and Complex-valued CNN","authors":"S. Chatterjee, Pavan Tummala, O. Speck, A. Nürnberger","doi":"10.1109/IPAS55744.2022.10053060","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10053060","url":null,"abstract":"Neural networks, especially convolutional neural networks (CNN), are one of the most common tools these days used in computer vision. Most of these networks work with real-valued data using real-valued features. Complex-valued convolutional neural networks (CV-CNN) can preserve the algebraic structure of complex-valued input data and have the potential to learn more complex relationships between the input and the ground-truth. Although some comparisons of CNNs and CV-CNNs for different tasks have been performed in the past, a large-scale investigation comparing different models operating on different tasks has not been conducted. Furthermore, because complex features contain both real and imaginary components, CV-CNNs have double the number of trainable parameters as real-valued CNNs in terms of the actual number of trainable parameters. Whether or not the improvements in performance with CV-CNN observed in the past have been because of the complex features or just because of having double the number of trainable parameters has not yet been explored. This paper presents a comparative study of CNN, CNNx2 (CNN with double the number of trainable parameters as the CNN), and CV-CNN. The experiments were performed using seven models for two different tasks - brain tumour classification and segmentation in brain MRIs. The results have revealed that the CV-CNN models outperformed the CNN and CNNx2 models.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125618703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Analysis: Unsupervised Classification for Identifying Benign and Malignant Tumors on Whole Slide Image of Prostate Cancer
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052952
Subrata Bhattacharjee, Yeong-Byn Hwang, Rashadul Islam Sumon, H. Rahman, Dong-Woo Hyeon, Damin Moon, Kouayep Sonia Carole, Hee-Cheol Kim, Heung-Kook Choi
Cluster analysis is now widely used in many fields: psychology, biology, statistics, pattern recognition, information retrieval, machine learning, and data mining. Diagnosing histopathological images of prostate cancer is a routine task for pathologists, yet analyzing the formation of glands and tumors under the Gleason grading system remains challenging. In this study, unsupervised classification is performed to differentiate malignant (cancerous) from benign (non-cancerous) tumors; such an unsupervised computer-aided diagnosis (CAD) technique could greatly ease pathologists' workloads. The technique is used to find meaningful clusters of objects (i.e., individuals, entities, patterns, or cases) and to identify useful patterns. Radiomic features were extracted for cluster analysis using the gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and gray-level size zone matrix (GLSZM) techniques. The clustering techniques used for the unsupervised classification are K-means, K-medoids, agglomerative hierarchical (AH), Gaussian mixture model (GMM), and spectral clustering. The quality of the clustering algorithms was assessed using Purity, Silhouette, Adjusted Rand, Fowlkes-Mallows, and Calinski-Harabasz (CH) scores. The best-performing algorithm (K-means) was then applied to predict and annotate the cancerous regions in the whole slide image (WSI) for comparison with the pathologist's annotation.
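A hedged sketch of the feature-extraction-plus-clustering pipeline: GLCM texture features per grayscale patch, then K-means over the standardized feature matrix (the GLRLM/GLSZM features and the WSI tiling used in the paper are omitted, and the property set below is an assumption):

```python
# GLCM texture features followed by K-means clustering of image patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def glcm_features(patch: np.ndarray) -> list:
    """patch: 2-D uint8 grayscale image patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return [graycoprops(glcm, p).mean() for p in props]   # mean over angles

patches = [np.random.randint(0, 256, (64, 64), np.uint8) for _ in range(20)]
X = StandardScaler().fit_transform([glcm_features(p) for p in patches])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # 0/1 cluster assignment per patch
```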
{"title":"Cluster Analysis: Unsupervised Classification for Identifying Benign and Malignant Tumors on Whole Slide Image of Prostate Cancer","authors":"Subrata Bhattacharjee, Yeong-Byn Hwang, Rashadul Islam Sumon, H. Rahman, Dong-Woo Hyeon, Damin Moon, Kouayep Sonia Carole, Hee-Cheol Kim, Heung-Kook Choi","doi":"10.1109/IPAS55744.2022.10052952","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052952","url":null,"abstract":"Recently, many fields have widely used cluster analysis: psychology, biology, statistics, pattern recognition, information retrieval, machine learning, and data mining. Diagnosis of histopathological images of prostate cancer is one of the routine tasks for pathologists and it is challenging for pathologists to analyze the formation of glands and tumors based on the Gleason grading system. In this study, unsupervised classification has been performed for differentiating malignant (cancerous) from benign (non-cancerous) tumors. Therefore, the unsupervised-based computer-aided diagnosis (CAD) technique would be of great benefit in easing the workloads of pathologists. This technique is used to find meaningful clustering objects (i.e., individuals, entities, patterns, or cases) and identify useful patterns. Radiomic-based features were extracted for cluster analysis using the gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and gray-level size zone matrix (GLSZM) techniques. Multi-clustering techniques used for the unsupervised classification are K-means clustering, K-medoids clustering, Agglomerative Hierarchical (AH) clustering, Gaussian mixture model (GMM) clustering, and Spectral clustering. The quality of the clustering algorithms was determined using Purity, Silhouettes, Adjusted Rand, Fowlkes Mallows, and Calinski Harabasz (CH) scores. However, the best-performing algorithm (i.e., K-means) has been applied to predict and annotate the cancerous regions in the whole slide image (WSI) to compare with the pathologist annotation.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127839643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fast method for impulse noise reduction in digital color images using anomaly median filtering
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052947
S. Gantenapalli, P. Choppala, Vandana Gullipalli, J. Meka, Paul D. Teal
Traditional vector median filtering and its variants, used to reduce impulse noise in digital color images, process every pixel in the image sequentially, which makes them computationally expensive. This paper presents a fast method for reducing impulse noise in digital color images. The key idea is to treat each row of the image as a univariate data vector, identify impulse noise using anomaly detection schemes, and then apply median filtering over the flagged pixels to restore the original image. This ensures fast filtering, as only the noisy pixels are processed. Using simulations, we show that the proposed method scales efficiently in both accuracy and runtime. Through a combined measure of time and accuracy, we show that the proposed method exhibits nearly 42% improvement over conventional methods.
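A minimal sketch of the idea, under stated assumptions: each row (per channel) is treated as univariate data, impulse pixels are flagged with a robust MAD-based anomaly test, and only the flagged pixels are replaced by a local median. The paper's exact anomaly detector and window shape are assumptions here.

```python
# Row-wise anomaly detection + selective median filtering.
import numpy as np
from scipy.ndimage import median_filter

def fast_impulse_filter(img: np.ndarray, thresh: float = 5.0) -> np.ndarray:
    out = img.astype(np.float32)
    med = median_filter(out, size=(3, 3, 1))          # candidate replacements
    for c in range(img.shape[2]):
        ch = out[..., c]                              # view into out
        row_med = np.median(ch, axis=1, keepdims=True)
        mad = np.median(np.abs(ch - row_med), axis=1, keepdims=True) + 1e-6
        noisy = np.abs(ch - row_med) / mad > thresh   # row-wise anomaly test
        ch[noisy] = med[..., c][noisy]                # filter noisy pixels only
    return out.astype(img.dtype)

img = np.full((32, 32, 3), 120, np.uint8)
img[5, 7] = [255, 0, 255]                             # injected impulse
print(fast_impulse_filter(img)[5, 7])                 # restored to ~[120 120 120]
```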
{"title":"A fast method for impulse noise reduction in digital color images using anomaly median filtering","authors":"S. Gantenapalli, P. Choppala, Vandana Gullipalli, J. Meka, Paul D. Teal","doi":"10.1109/IPAS55744.2022.10052947","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052947","url":null,"abstract":"The traditional vector median filtering and its variants used to reduce impulse noise in digital color images operate by processing over all the pixels in the image sequentially. This renders these filtering methods computationally expensive. This paper presents a fast method for reducing impulse noise in digital color images. The key idea here is to slice each row of the image as a univariate data vector, identify impulse noise using anomaly detection schemes and then apply median filtering over these to restore the original image. This idea ensures fast filtering as only the noisy pixels are processed. Using simulations, we show that the proposed method scales efficiently with respect to accuracy and time. Through a combined measure of time and accuracy, we show that the proposed method exhibits nearly 42% improvement over the conventional ones.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"Five 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130895020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Data Enciphering via DNA Encoding, S-Box, and Tent Mapping
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052832
Mohamed Gabr, H. Younis, Marwa Ibrahim, Sara Alajmy, Wassim Alexan
The ever-evolving nature of the Internet and wireless communications, together with the production of huge amounts of multimedia every day, has created a dire need for security. In this paper, a three-stage image encryption technique is proposed. The first stage makes use of DNA encoding. The second stage utilizes a novel proposed S-box based on the Mersenne Twister and a linear descent algorithm. The third stage employs the Tent chaotic map. The computed performance evaluation metrics exhibit a high level of achieved security.
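A sketch of the third stage only: a tent-map keystream XORed with the image bytes. The DNA-encoding stage and the Mersenne-Twister-based S-box are omitted, and the key values (x0, r) and the byte-quantisation rule are illustrative assumptions, not the paper's parameters.

```python
# Tent chaotic map keystream for image enciphering.
import numpy as np

def tent_keystream(n: int, x0: float = 0.61, r: float = 1.9999) -> np.ndarray:
    """Generate n keystream bytes from the tent map (illustrative key)."""
    x, out = x0, np.empty(n, np.uint8)
    for i in range(n):
        x = r * x if x < 0.5 else r * (1.0 - x)     # tent map iteration
        out[i] = int(x * 256) % 256                 # quantise state to a byte
    return out

img = np.random.randint(0, 256, (4, 4), np.uint8)
ks = tent_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                                   # encipher
assert np.array_equal(cipher ^ ks, img)             # XOR keystream is invertible
```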
{"title":"Visual Data Enciphering via DNA Encoding, S-Box, and Tent Mapping","authors":"Mohamed Gabr, H. Younis, Marwa Ibrahim, Sara Alajmy, Wassim Alexan","doi":"10.1109/IPAS55744.2022.10052832","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052832","url":null,"abstract":"The ever-evolving nature of the Internet and wireless communications, as well as the production of huge amounts of multimedia every day has created a dire need for their security. In this paper, an image encryption technique that is based on 3 stages is proposed. The first stage makes use of DNA encoding. The second stage proposed and utilizes a novel S-box that is based on the Mersenne Twister and a linear descent algorithm. The third stage employs the Tent chaotic map. The computed performance evaluation metrics exhibit a high level of achieved security.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133798838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Surface Crack Detection using Deep Convolutional Neural Network in Concrete Structures
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052790
A. Rahai, M. Rahai, Mostafa Iraniparast, M. Ghatee
Regular safety inspections of concrete and steel structures during their serviceability are essential, since they directly affect reliability and structural health. Early detection of cracks helps prevent further damage. Traditional methods rely on human visual inspection, but for extremely large structures it is difficult to find cracks and other defects visually because of time and cost constraints. Therefore, the development of smart inspection systems has been given utmost importance. We provide a deep convolutional neural network (DCNN) with a transfer learning (TL) technique for crack detection. To reduce false detection rates, the training images for the TL technique come from two different datasets (CCIC and SDNET). Moreover, the designed CNN is trained on 3200 images of $256 \times 256$ pixel resolution. Different deep learning networks are considered, and the experiments on test images show that the accuracy of the damage detection is more than 99%. The results illustrate the viability of the suggested approach for crack observation and classification.
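The paper compares several deep learning networks; as an illustration of the transfer-learning setup it describes, here is a sketch using a ResNet-50 backbone (an assumed choice, not necessarily one of the authors' networks, and the freezing policy is also an assumption):

```python
# Transfer learning for binary crack/no-crack classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new crack / no-crack head

x = torch.randn(1, 3, 256, 256)       # the paper trains on 256x256 images
print(model(x).shape)                 # torch.Size([1, 2])
```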
{"title":"Surface Crack Detection using Deep Convolutional Neural Network in Concrete Structures","authors":"A. Rahai, M. Rahai, Mostafa Iraniparast, M. Ghatee","doi":"10.1109/IPAS55744.2022.10052790","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052790","url":null,"abstract":"Regular safety inspections of concrete and steel structures during their serviceability are essential since they directly affect the reliability and structural health. Early detection of cracks helps prevent further damage. Traditional methods involve the detection of cracks by human visual inspection. However, it is difficult to visually find cracks and other defects for extremely large structures because of time and cost constraints. Therefore, the development of smart inspection systems has been given utmost importance. We provide a deep convolutional neural network (DCNN) with transfer learning (TF) technique for crack detection. To reduce false detection rates, the images used to train in the TF technique come from two different datasets (CCIC and SDNET). Moreover, the designed CNN is trained on 3200 images of $256 times 256$ pixel resolutions. Different deep learning networks are considered and the experiments on test images show that the accuracy of the damage detection is more than 99%. Results illustrate the viability of the suggested approach for crack observation and classification.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125425204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unrolling Alternating Direction Method of Multipliers for Visible and Infrared Image Fusion
Pub Date: 2022-12-05 | DOI: 10.1109/IPAS55744.2022.10052930
Altuğ Bakan, I. Erer
In this paper, a new infrared and visible image fusion (IVIF) method that combines the advantages of optimization-based and deep-learning-based methods is proposed. The model takes the iterative solution used by the alternating direction method of multipliers (ADMM) and applies algorithm unrolling to obtain a high-performance, efficient algorithm. Compared with traditional optimization methods, the model reduces image fusion time by 99.6%; compared with deep-learning-based algorithms, it generates detailed fusion images while reducing training time by 99.1%. Compared with other state-of-the-art unrolling-based methods, the model performs 26.7% better on average in terms of the Average Gradient (AG), Cross Entropy (CE), Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) metrics, with a minimal testing time cost.
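Algorithm unrolling, the core idea here, turns a fixed number of ADMM iterations into network stages with learnable parameters. The following is a generic, heavily simplified template under assumed update rules (a quadratic data term toward both inputs and a learned proximal step), not the authors' fusion model:

```python
# Generic unrolled-ADMM template: each stage has its own learnable step size,
# and the proximal operator is replaced by a small learned CNN.
import torch
import torch.nn as nn

class UnrolledADMM(nn.Module):
    def __init__(self, stages: int = 5, ch: int = 1):
        super().__init__()
        self.rho = nn.Parameter(torch.full((stages,), 0.1))   # per-stage steps
        self.prox = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, ch, 3, padding=1))
            for _ in range(stages))

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        x, u = (ir + vis) / 2, torch.zeros_like(ir)           # init fused image
        for k, prox in enumerate(self.prox):
            z = prox(x + u)                                   # learned prox step
            x = (ir + vis + self.rho[k] * (z - u)) / (2 + self.rho[k])
            u = u + x - z                                     # dual update
        return x

print(UnrolledADMM()(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)).shape)
```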
{"title":"Unrolling Alternating Direction Method of Multipliers for Visible and Infrared Image Fusion","authors":"Altuğ Bakan, I. Erer","doi":"10.1109/IPAS55744.2022.10052930","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052930","url":null,"abstract":"In this paper a new infrared and visible image fusion (IVIF) method which combines the advantages of optimization and deep learning based methods is proposed. This model takes the iterative solution used by the alternating direction method of the multiplier (ADMM) optimization method, and uses algorithm unrolling to obtain a high performance and efficient algorithm. Compared with traditional optimization methods, this model generates fusion with 99.6% improvement in terms of image fusion time, and compared with deep learning based algorithms, this model generates detailed fusion images with 99.1% improvement in terms of training time. Compared with the other state-of-the-art unrolling based methods, this model performs 26.7% better on average in terms of Average Gradient (AG), Cross Entropy (CE), Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Loss (SSIM) metrics with a minimal testing time cost.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"15 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114048229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cell tracking for live-cell microscopy using an activity-prioritized assignment strategy
Pub Date: 2022-10-20 | DOI: 10.1109/IPAS55744.2022.10053036
Karina Ruzaeva, J. Cohrs, Keitaro Kasahara, D. Kohlheyer, K. Nöh, B. Berkels
Cell tracking is an essential tool in live-cell imaging for determining single-cell features such as division patterns or elongation rates. Unlike in common multiple-object tracking, in microbial live-cell experiments cells grow, move, and divide over time to form cell colonies that are densely packed in mono-layer structures. With increasing cell numbers, correctly following the precise cell-cell associations over many generations becomes more and more challenging, due to the massively increasing number of possible associations. To tackle this challenge, we propose a fast, parameter-free cell tracking approach, which consists of activity-prioritized nearest-neighbor assignment of growing (expanding) cells and a combinatorial solver that assigns splitting mother cells to their daughters. As input for the tracking, Omnipose is utilized for instance segmentation. Unlike conventional nearest-neighbor-based tracking approaches, the assignment steps of our proposed method are based on a Gaussian activity-based metric predicting the cell-specific migration probability, thereby limiting the number of erroneous assignments. In addition to being a building block for cell tracking, the proposed activity map is a standalone, tracking-free metric for indicating cell activity. Finally, we perform a quantitative analysis of the tracking accuracy for different frame rates, to inform life scientists of a frame-rate choice that is suitable, in terms of tracking performance, for cultivation experiments in which cell tracks are the desired key outcome.
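A sketch of how an activity-weighted nearest-neighbor assignment can work, with an assumed form for the Gaussian activity metric (the paper's exact metric and the combinatorial division solver are not reproduced; names and parameters are illustrative):

```python
# Frame-to-frame cell association: distances weighted by a per-cell activity
# score, solved as a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_xy: np.ndarray, curr_xy: np.ndarray,
              activity: np.ndarray, base_sigma: float = 2.0) -> list:
    """prev_xy: (N,2), curr_xy: (M,2) centroids; activity: (N,) scores in [0, 1]."""
    d = np.linalg.norm(prev_xy[:, None] - curr_xy[None, :], axis=2)
    sigma = base_sigma * (1.0 + activity)[:, None]      # active cells may move more
    cost = d**2 / (2.0 * sigma**2)                      # negative Gaussian log-likelihood
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

prev = np.array([[0.0, 0.0], [10.0, 0.0]])
curr = np.array([[0.5, 0.2], [12.0, 0.1]])
print(associate(prev, curr, activity=np.array([0.1, 0.9])))   # [(0, 0), (1, 1)]
```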
{"title":"Cell tracking for live-cell microscopy using an activity-prioritized assignment strategy","authors":"Karina Ruzaeva, J. Cohrs, Keitaro Kasahara, D. Kohlheyer, K. Nöh, B. Berkels","doi":"10.1109/IPAS55744.2022.10053036","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10053036","url":null,"abstract":"Cell tracking is an essential tool in live-cell imaging to determine single-cell features, such as division patterns or elongation rates. Unlike in common multiple object tracking, in microbial live-cell experiments cells are growing, moving, and dividing over time, to form cell colonies that are densely packed in mono-layer structures. With increasing cell numbers, following the precise cell-cell associations correctly over many generations becomes more and more challenging, due to the massively increasing number of possible associations. To tackle this challenge, we propose a fast parameter-free cell tracking approach, which consists of activity-prioritized nearest neighbor assignment of growing (expanding) cells and a combinatorial solver that assigns splitting mother cells to their daughters. As input for the tracking, Omnipose is utilized for instance segmentation. Unlike conventional nearest-neighbor-based tracking approaches, the assignment steps of our proposed method are based on a Gaussian activity-based metric, predicting the cell-specific migration probability, thereby limiting the number of erroneous assignments. In addition to being a building block for cell tracking, the proposed activity map is a standalone tracking-free metric for indicating cell activity. Finally, we perform a quantitative analysis of the tracking accuracy for different frame rates, to inform life scientists about a suitable (in terms of tracking performance) choice of the frame rate for their cultivation experiments, when cell tracks are the desired key outcome.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"450 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123174704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Likelihood ratio map for direct exoplanet detection
Pub Date: 2022-10-19 | DOI: 10.1109/IPAS55744.2022.10052997
H. Daglayan, Simon Vary, F. Cantalloube, P. Absil, O. Absil
Direct imaging of exoplanets is a challenging task due to their small angular separation and high contrast relative to the host star, and the presence of quasi-static noise. We propose a new statistical method for direct imaging of exoplanets based on a likelihood ratio detection map, which assumes that the noise after the background subtraction step obeys a Laplacian distribution. We compare the method with two detection approaches based on signal-to-noise ratio (SNR) maps, after performing background subtraction with the widely used annular principal component analysis (AnnPCA). The experimental results on the Beta Pictoris data set show that the method outperforms SNR maps, achieving the highest true positive rate (TPR) at zero false positive rate (FPR).
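A worked sketch of the detection statistic implied above: under Laplacian noise with scale b, the log-likelihood ratio between "planet with amplitude a" and "no planet" at a given position is the sum of (|r| - |r - a*psf|)/b over the PSF footprint of the residual r. The amplitude a, scale b, and toy PSF below are illustrative, not the paper's values.

```python
# Laplacian log-likelihood-ratio detection map over a residual frame.
import numpy as np

def laplacian_llr_map(residual: np.ndarray, psf: np.ndarray,
                      a: float = 1.0, b: float = 1.0) -> np.ndarray:
    H, W = residual.shape
    h, w = psf.shape
    llr = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = residual[i:i + h, j:j + w]
            # log p(patch | planet) - log p(patch | noise only), Laplace noise
            llr[i, j] = (np.abs(patch) - np.abs(patch - a * psf)).sum() / b
    return llr

rng = np.random.default_rng(0)
psf = np.outer(np.hanning(5), np.hanning(5))        # toy 5x5 PSF
frame = rng.laplace(scale=1.0, size=(32, 32))
frame[14:19, 14:19] += 3.0 * psf                    # injected fake companion
m = laplacian_llr_map(frame, psf, a=3.0)
print(np.unravel_index(m.argmax(), m.shape))        # should peak near (14, 14)
```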
{"title":"Likelihood ratio map for direct exoplanet detection","authors":"H. Daglayan, Simon Vary, F. Cantalloube, P. Absil, O. Absil","doi":"10.1109/IPAS55744.2022.10052997","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052997","url":null,"abstract":"Direct imaging of exoplanets is a challenging task due to the small angular distance and high contrast relative to their host star, and the presence of quasi-static noise. We propose a new statistical method for direct imaging of exoplanets based on a likelihood ratio detection map, which assumes that the noise after the background subtraction step obeys a Laplacian distribution. We compare the method with two detection approaches based on signal-to-noise ratio (SNR) map after performing the background subtraction by the widely used Annular Principal Component Analysis (AnnPCA). The experimental results on the Beta Pictoris data set show the method outperforms SNR maps in terms of achieving the highest true positive rate (TPR) at zero false positive rate (FPR).","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114686104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}