
Latest Publications — 2022 International Conference on Machine Vision and Image Processing (MVIP)

Hardware Implementation of Moving Object Detection using Adaptive Coefficient in Performing Background Subtraction Algorithm
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738764
Ali Rahiminezhad, Mohammad Reza Tavakoli, Sayed Masoud Sayedi
Moving object detection is an essential process in many surveillance systems, autonomous navigation systems, and computer vision applications. A hardware architecture for motion detection based on background subtraction, with an adaptive background-update coefficient, is proposed. The architecture is implemented on a Kintex-7 FPGA device. It operates at 250 MHz for a 360×640 video frame size; the average processing time per frame is 2.304 ms, the processing rate is 130 fps, and the power consumption is 140 mW. The architecture achieves high-speed performance with relatively low resource utilization.
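The core update can be sketched in a few lines of NumPy. The paper does not spell out its adaptive rule, so the per-pixel coefficient scheme and the threshold below are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def segment_and_update(frame, background, threshold=25,
                       alpha_bg=0.05, alpha_fg=0.001):
    """Return a foreground mask and the updated background model.

    Pixels classified as background blend with a larger coefficient
    (alpha_bg) so the model adapts to illumination changes; pixels
    classified as foreground use a much smaller coefficient (alpha_fg)
    so moving objects are not absorbed into the background.
    """
    diff = np.abs(frame.astype(np.float64) - background)
    mask = diff > threshold                      # foreground where difference is large
    alpha = np.where(mask, alpha_fg, alpha_bg)   # adaptive, per-pixel coefficient
    new_background = (1.0 - alpha) * background + alpha * frame
    return mask, new_background

# Tiny demo on a synthetic 4x4 frame: one bright "object" pixel on a flat scene.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[2, 2] = 200.0                              # the moving object
mask, bg = segment_and_update(frame, bg)
```

Only the object pixel is flagged as foreground, and the background at that pixel is updated very slowly (100 → 100.1 with the values above).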
Citations: 1
Optimized Quantum Circuits in Quantum Image Processing Using Qiskit
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738550
Zahra Boreiri, Alireza Norouzi Azad, N. Majd
Quantum image representation is an essential component of quantum image processing and plays a critical role in quantum information processing. The Flexible Representation of Quantum Images (FRQI) encodes pixel colors and their associated positions as a quantum state to represent images on quantum computers. A fundamental part of a quantum image processing system is quantum image compression (QIC), which is used to store and retrieve binary images. This compression minimizes the number of controlled rotation gates in the quantum circuits. This paper designs optimized quantum circuits based on minimum Boolean expressions and simulates them with Qiskit on a real quantum computer to retrieve 8×4 binary single-digit images. To demonstrate the feasibility and efficacy of quantum image representation, quantum circuits for the images were developed using FRQI, and experiments were performed on the IBM Quantum Experience (IBMQ). We visualized the quantum information by performing quantum measurements on the prepared image states. Without this method, the number of controlled rotation gates equals the number of pixels in the image; we show that the QIC algorithm decreases the number of gates significantly. On these images, the maximum and minimum compression ratios of QIC are 90.63% and 68.75%, respectively.
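A rough sketch of the FRQI angle encoding and the gate-count saving behind QIC. The paper minimizes Boolean expressions over pixel positions; the version below only drops zero-angle (identity) rotations, which is a deliberate simplification of the full method:

```python
import math

def frqi_angles(binary_image):
    """Map binary pixel values to FRQI rotation angles theta in {0, pi/2}."""
    return [p * (math.pi / 2) for row in binary_image for p in row]

def gate_counts(binary_image):
    """Return (gates without compression, gates after dropping zero rotations).

    Without compression, FRQI needs one controlled rotation per pixel;
    zero-angle rotations are identities and can be omitted outright.
    """
    angles = frqi_angles(binary_image)
    uncompressed = len(angles)
    compressed = sum(1 for a in angles if a != 0)
    return uncompressed, compressed

# A toy 8x4 binary "digit": a vertical two-pixel-wide stroke of ones.
image = [[0, 1, 1, 0] if 1 <= r <= 6 else [0, 0, 0, 0] for r in range(8)]
total, reduced = gate_counts(image)
ratio = 100.0 * (total - reduced) / total   # compression ratio in percent
```

For this toy image, 32 pixel rotations shrink to 12, a 62.5% ratio; full Boolean minimization, as in the paper, compresses further by merging equal-angle rotations.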
Citations: 1
Employing a Hybrid Technique to Detect Tumor in Medical Images
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738739
Leyla Aqhaei
In this article, a hybrid approach using watershed, genetic, and support vector machine algorithms is presented to detect brain tumors in medical images. With this method, the images are segmented properly and the brain tumor is detected with high accuracy. First, grayscale conversion and median filters are used to pre-process the images for noise removal. Then, the watershed algorithm is applied to segment the image, and features are selected using a genetic algorithm. Finally, the SVM algorithm learns the extracted features and diagnoses brain tumors with high accuracy. Evaluated on accuracy, precision, and recall, the results indicate that the proposed method segments and classifies the images well and outperforms conventional algorithms, with an accuracy of 95% and a precision of 97%.
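The pre-processing step can be illustrated with a small 3×3 median filter in NumPy; the watershed, genetic-algorithm, and SVM stages are not reproduced here (library implementations such as scikit-image and scikit-learn would be the usual choices):

```python
import numpy as np

def median_filter3(img):
    """Apply a 3x3 median filter with edge-replication padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# Demo: a flat image with a single salt-noise pixel is fully cleaned,
# since the outlier is never the median of any 3x3 window.
noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0
clean = median_filter3(noisy)
```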
Citations: 0
A Learning Based Contrast Specific no Reference Image Quality Assessment Algorithm
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738784
Moliamadali Mahmoodpour, Abdolah Amirany, M. H. Moaiyeri, Kian Jafari
Contrast is one of the most important visual characteristics of an image and has a significant effect on how an image is understood; however, due to varying imaging conditions and poor devices, image quality in terms of contrast degrades. Only limited methods have been proposed to assess the quality of contrast-distorted images. Proper contrast enhancement can increase the perceptual quality of most contrast-distorted images. In this paper, assuming that the output of a contrast-enhancement algorithm has quality comparable to a reference image, a learning-based, contrast-specific, no-reference image quality assessment method is proposed. The image with quality closest to the reference is selected using a pre-trained classification network, and quality assessment is then performed by comparing the enhanced image with the distorted image using the structural similarity (SSIM) index. The proposed method has been validated on three well-known contrast-distorted image datasets (CSIQ, CCID2014, and TID2013).
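The SSIM comparison at the heart of the method can be sketched as follows. The standard index is computed over local windows and averaged; the single-window (global) form below is a simplification that still shows the behavior:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two images with dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2    # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))
same = ssim_global(img, img)            # identical images -> SSIM of 1
low_contrast = 0.5 * img + 60.0         # a contrast-distorted copy
worse = ssim_global(img, low_contrast)  # reduced contrast -> lower SSIM
```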
Citations: 1
A Hybrid of Inference and Stacked Classifiers to Indoor Scenes Classification of RGB-D Images
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738755
Shokouh S. Ahmadi, Hassan Khotanlou
Scene classification, by assigning pre-defined classes, eases semantic scene understanding and aids further processing and inference. With this motivation, we propose an approach to classify indoor scene objects. The proposed method uses a stacked classifier model and refines the classification results by enforcing segment consistency. Furthermore, it addresses the challenging, cluttered indoor scenes encountered in daily life. Finally, the approach obtains desirable classification results simply and affordably.
Citations: 1
The Detection of Blastocyst Embryo In Vitro Fertilization (IVF)
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738768
Kimiya Samie Dehkordi, M. Ebrahimi Moghaddam
One of the most important stages in the fate of the embryo in in vitro fertilization (IVF) is the blastocyst stage. There is currently no automated method for detecting the blastocyst stage. In this study, ResNet and UNet networks were used to detect embryos in the blastocyst state. The proposed method is trained on a set of 40,392 samples (24,365 for training and 5,814 for validation) and tested on 10,263 samples obtained from various sources. The results show an accuracy of 92.9%, a precision of 93.7%, and a recall of 92.1%, confirming that the proposed method can reliably detect when the embryo is in the blastocyst state.
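The reported figures follow the standard confusion-matrix definitions of accuracy, precision, and recall; the counts below are invented purely for illustration:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct over all predictions
    precision = tp / (tp + fp)                   # how many flagged are truly positive
    recall = tp / (tp + fn)                      # how many positives are caught
    return accuracy, precision, recall

# Hypothetical counts for a blastocyst/non-blastocyst classifier:
acc, prec, rec = metrics(tp=90, fp=6, fn=8, tn=96)
```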
Citations: 0
FFDR: Design and implementation framework for face detection based on raspberry pi
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738788
Dhafer Alhajim, G. Akbarizadeh, K. Ansari-Asl
In today’s world we are surrounded by data of many types, and the abundance of available image and video data provides the datasets that face recognition technology needs to function. Face recognition is a critical component of security and surveillance systems, which analyze visual data and millions of pictures. In this article, we investigate combining standard face detection and identification techniques, such as machine learning and deep learning, with face detection on a Raspberry Pi, since the Raspberry Pi makes the system cost-effective, easy to use, and performant. Images of a selected individual were captured with a camera and a Python program in order to perform face recognition. This paper proposes a facial recognition system that can detect faces in both direct and indirect images. We call this system FFDR; it achieves high speed and accuracy in face detection because it uses the Raspberry Pi 4 together with up-to-date libraries and advanced environments in the Python language.
Citations: 0
An Augmented Reality Framework for Eye Muscle Education
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738780
Asiyeh Bahaloo, Arman Ali Mohammadi, Mohammad Reza Mohammadi, M. Soryani
Due to the COVID-19 pandemic, the need for remote education is felt more than ever. New technologies such as Augmented Reality (AR) can improve students’ training experiences and directly affect the learning process, especially in remote education. By using AR in medical education, we no longer need to worry about patient safety during the education process because AR helps students see inside the human body without needing to cut human flesh in the real world. In this paper, we present an augmented reality framework that has the ability to add a virtual eye muscle to a person’s face in a single photo or a video. We go one step further to not just show the muscle of the eye but also customize it for each person by modeling the person’s face with a 3D morphable model (3DMM).
Citations: 0
Automated Cell Tracking Using Adaptive Multi-stage Kalman Filter In Time-laps Images
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738793
Hane Naghshbandi, Yaser Baleghi Damavandi
Segmenting living cells and tracking their movement in microscopy images are significant tasks in biological studies and play a crucial role in disease diagnosis, targeted therapy, drug delivery, and many other medical applications. Given the large amount of time-lapse image data, automated image analysis is a proper alternative to manual analysis, which is unreasonably time-consuming. However, low-resolution microscopic images, unpredictable cell behavior, and multiple cell divisions make automated cell tracking challenging. In this paper, we propose a novel multi-object tracking approach guided by a two-stage adaptive Kalman forecast. Cell segmentation is performed using an edge detector combined with various morphological operations. The tracking section comprises two general stages. First, a Kalman filter with a constant-velocity model estimates the position of each cell in consecutive frames. This primary Kalman filter detects a significant percentage of cells, but the high rate of cell division and the migration of cells into or out of the field of view cause errors in the final result. In the second stage, a secondary Kalman filter, with parameters modified using the results of the initial tracking, estimates the position of cells in each frame, decreasing errors and improving the tracking results. Experimental results indicate that our method is 94.37% accurate in segmenting cells. The whole method has been validated by comparing its results with manual tracking, which demonstrates its efficiency.
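The first tracking stage, a constant-velocity Kalman filter over cell centroids, can be sketched as follows; the state layout and the noise covariances are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Build constant-velocity model matrices for a [x, y, vx, vy] state."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # constant-velocity motion
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only position is measured
    Q = q * np.eye(4)                            # process noise
    R = r * np.eye(2)                            # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for a measured centroid z = (x, y)."""
    x = F @ x                                    # predict state
    P = F @ P @ F.T + Q                          # predict covariance
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
x = np.array([0.0, 0.0, 0.0, 0.0])               # start at the origin, at rest
P = np.eye(4)
# Track a cell moving +1 px/frame along x, stationary in y:
for t in range(1, 11):
    x, P = kalman_step(x, P, np.array([float(t), 0.0]), F, H, Q, R)
```

After ten frames the filter has locked onto the motion: the position estimate is close to 10 and the velocity estimate close to 1 px/frame.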
Citations: 1
Computer-aided Brain Age Estimation via Ensemble Learning of 3D Convolutional Neural Networks
Pub Date : 2022-02-23 DOI: 10.1109/MVIP53647.2022.9738758
Ali Bahari Malayeri, Mohammad Mahdi Moradi, Kian Jafari Dinani
Predicting brain age from Magnetic Resonance Imaging (MRI), and its difference from chronological age, is useful for detecting Alzheimer's disease in its early stages. Deep learning can play an active role in accurate brain age prediction from MRI, but its performance depends heavily on the amount of data and the compute and memory available. In this paper, to approximate the age of the brain as accurately as possible from T1-weighted structural MRI, a deep 3D convolutional neural network model is proposed. Furthermore, techniques such as data normalization and ensemble learning are applied to the suggested model to obtain more accurate results. The system is trained and tested on the IXI database, normalized with SPM12. Finally, the model is assessed with the Mean Absolute Error (MAE) metric; the results demonstrate that our model can estimate subjects' ages with an MAE of 5.07 years.
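The MAE metric, and the per-subject brain-age gap it summarizes, are straightforward to compute; the ages below are invented for the demo:

```python
import numpy as np

def mae(predicted, actual):
    """Mean absolute error between predicted and chronological ages."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.abs(predicted - actual).mean()

chronological = [60.0, 72.0, 55.0, 68.0]
predicted     = [63.5, 70.0, 61.0, 69.0]
gap = np.subtract(predicted, chronological)   # positive -> brain "older" than subject
error = mae(predicted, chronological)
```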
Citations: 0