On the Impact of Viewing Distance on Perceived Video Quality
Hadi Amirpour, R. Schatz, C. Timmerer, M. Ghanbari
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675431
Optimizing the quality and efficiency of video streaming delivery makes accurate assessment of user-perceived video quality increasingly important. However, perceived video quality can vary significantly across the wide range of viewing distances encountered in real-world viewing situations. In this paper, we investigate and quantify the influence of viewing distance on perceived video quality. A subjective experiment was conducted with full HD sequences at three different fixed viewing distances, with each video sequence encoded at three different quality levels. Our results confirm that viewing distance has a significant influence on quality assessment. In particular, they show that an increased viewing distance generally leads to increased perceived video quality, especially at low encoding quality levels. In this context, we also estimate the potential bitrate savings that knowledge of the actual viewing distance would enable in practice. Since current objective video quality metrics do not systematically take viewing distance into account, we also analyze and quantify the influence of viewing distance on the correlation between objective and subjective metrics. Our results confirm the need for distance-aware objective metrics when accurate prediction of perceived video quality in real-world environments is required.
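As a rough illustration of the per-distance correlation analysis mentioned above, the sketch below computes PLCC and SRCC between a generic objective metric and mean opinion scores grouped by viewing distance. All scores and distance labels are made-up placeholders, not data from the study.

```python
# Per-distance correlation between an objective metric and MOS (illustrative data only).
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical MOS for five clips, keyed by viewing distance (in picture heights, H).
mos = {
    "1.5H": np.array([2.1, 3.0, 3.8, 4.2, 4.6]),
    "3.0H": np.array([2.6, 3.4, 4.1, 4.4, 4.7]),
    "4.5H": np.array([3.1, 3.8, 4.3, 4.6, 4.8]),
}
objective = np.array([35.0, 55.0, 70.0, 82.0, 93.0])  # one objective score per clip

for distance, scores in mos.items():
    plcc, _ = pearsonr(objective, scores)    # linear correlation
    srcc, _ = spearmanr(objective, scores)   # rank-order correlation
    print(f"{distance}: PLCC={plcc:.3f}, SRCC={srcc:.3f}")
```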
A Distortion Propagation Oriented CU-tree Algorithm for x265
Xinye Jiang, Zhenyu Liu, Yongbing Zhang, Xiangyang Ji
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675426
Rate-distortion optimization (RDO) is widely used in video coding to improve coding efficiency. Conventionally, RDO is applied to each block independently to avoid high computational complexity. However, various prediction techniques introduce spatio-temporal dependency between blocks, so independent RDO is not optimal. Specifically, because of motion compensation, the distortion of reference blocks affects the quality of subsequent prediction blocks, and taking this temporal dependency into account in RDO can improve the global rate-distortion (R-D) performance. x265 relies on a lookahead module to analyze the temporal dependency between blocks and weights the quality of each block according to how strongly it is referenced. However, the original algorithm in x265 ignores the impact of quantization, and this shortcoming degrades the R-D performance of x265. In this paper, we propose a new linear distortion propagation model to estimate the temporal dependency, which incorporates the impact of quantization. From the perspective of global RDO, a corresponding adaptive quantization formula is derived. The proposed algorithm was implemented in x265 version 3.2. Experiments show that the proposed algorithm achieves average 15.43% PSNR-based and 23.81% SSIM-based BD-rate reductions, outperforming the original algorithm in x265 by 4.14% and 9.68%, respectively.
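For context on the baseline being improved, the sketch below shows the general CU-tree idea of propagating costs backwards through the lookahead and converting the accumulated dependency into a QP offset. It is a simplified stand-in (one block per frame, invented costs), not the authors' quantization-aware model or the actual x265 code.

```python
# Simplified CU-tree style propagation: heavily referenced blocks get finer quantization.
import math

def cutree_qp_offsets(intra_cost, inter_cost, ref_idx, strength=2.0):
    """intra_cost/inter_cost: per-frame costs (one block per frame for simplicity);
    ref_idx[f]: index of the frame that frame f predicts from, or None."""
    n = len(intra_cost)
    propagate = [0.0] * n
    for f in reversed(range(n)):                      # walk the lookahead backwards
        r = ref_idx[f]
        if r is None:
            continue
        # Fraction of this block's information that is inherited from its reference.
        ratio = 1.0 - min(inter_cost[f], intra_cost[f]) / intra_cost[f]
        propagate[r] += (intra_cost[f] + propagate[f]) * ratio
    # Negative QP offset (finer quantization) for blocks that many others depend on.
    return [-strength * math.log2((intra_cost[f] + propagate[f]) / intra_cost[f])
            for f in range(n)]

# Toy example: frame 0 is referenced by frames 1 and 2.
print(cutree_qp_offsets(intra_cost=[1000, 900, 800],
                        inter_cost=[1000, 300, 400],
                        ref_idx=[None, 0, 0]))
```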
Dictionary Learning-based Reference Picture Resampling in VVC
J. Schneider, Christian Rohlfing
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675361
Versatile Video Coding (VVC) introduces the concept of Reference Picture Resampling (RPR), which allows the resolution of the video to change during decoding without introducing an additional Intra Random Access Point (IRAP) into the bitstream. When the resolution is increased, an upsampling operation of the reference picture is required in order to apply motion-compensated prediction. Conceptually, upsampling by linear interpolation filters fails to recover frequencies which were lost during downsampling. Yet, the quality of the upsampled reference picture is crucial to the prediction performance. In recent years, machine learning based Super-Resolution (SR) has been shown to far outperform conventional interpolation filters at super-resolving a previously downsampled image. In particular, Dictionary Learning-based Super-Resolution (DLSR) was shown to improve the inter-layer prediction in SHVC [1]. Thus, this paper introduces DLSR to the prediction process in RPR. Further, the approach is experimentally evaluated by an implementation based on the VTM-9.3 reference software. The simulation results show an average instantaneous bitrate reduction of 0.98% at the same objective quality in terms of PSNR. Moreover, the peak bitrate reduction is measured at 4.74% for the “Johnny” sequence of the JVET test set.
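The sketch below illustrates the dictionary-learning flavour of restoration in its simplest form: patches of a degraded picture are sparse-coded against a learned dictionary and re-synthesized. It is an illustrative stand-in on synthetic data using scikit-learn, not the DLSR integration into RPR described in the paper.

```python
# Patch-wise sparse coding against a learned dictionary (illustrative synthetic example).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 64)
clean = np.sin(x)[None, :] * np.cos(x)[:, None]               # stand-in for clean HR content
degraded = clean + 0.05 * rng.standard_normal(clean.shape)    # stand-in for an upsampled reference

# Learn a dictionary on mean-removed clean patches.
train = extract_patches_2d(clean, (8, 8), max_patches=500, random_state=0).reshape(500, -1)
train = train - train.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0).fit(train)

# Sparse-code the degraded patches and rebuild the picture from them.
patches = extract_patches_2d(degraded, (8, 8)).reshape(-1, 64)
means = patches.mean(axis=1, keepdims=True)
codes = dico.transform(patches - means)
restored = reconstruct_from_patches_2d((codes @ dico.components_ + means).reshape(-1, 8, 8),
                                        degraded.shape)
print("MSE before:", np.mean((degraded - clean) ** 2), "after:", np.mean((restored - clean) ** 2))
```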
An Error Self-learning Semi-supervised Method for No-reference Image Quality Assessment
Yingjie Feng, Sumei Li, Sihan Hao
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675352
In recent years, deep learning has achieved significant progress in many fields. However, unlike research areas with millions of labeled samples such as image recognition, only several thousand labeled images are available in the image quality assessment (IQA) field, which heavily hinders the development and application of deep learning for IQA. To tackle this problem, in this paper we propose an error self-learning semi-supervised method for no-reference (NR) IQA (ESSIQA), which is based on deep learning. We employ an advanced full reference (FR) IQA method to expand the databases and supervise the training of the network. In addition, the network outputs for the expansion images are used as proxy labels, replacing the errors between subjective and objective scores, to achieve error self-learning. Two error back-propagation weights are designed to reduce the impact of inaccurate outputs. The experimental results show that the proposed method achieves competitive performance.
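A minimal sketch of the proxy-label training idea follows: labeled images with subjective scores and expansion images with FR-IQA proxy scores are trained jointly, with the proxy term down-weighted. The toy MLP, random features, and the single scalar weight are illustrative assumptions; the paper's specific error-weighting scheme is not reproduced here.

```python
# Semi-supervised regression with down-weighted proxy labels (illustrative toy setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

feats_labeled = torch.randn(64, 16)                      # features of human-rated images
mos = torch.rand(64, 1) * 4 + 1                          # subjective scores in [1, 5]
feats_expanded = torch.randn(256, 16)                    # features of expansion images
proxy = torch.rand(256, 1) * 4 + 1                       # FR-IQA proxy scores for them
proxy_weight = 0.3                                       # trust proxy labels less than MOS

for _ in range(100):
    opt.zero_grad()
    loss = (nn.functional.mse_loss(model(feats_labeled), mos)
            + proxy_weight * nn.functional.mse_loss(model(feats_expanded), proxy))
    loss.backward()
    opt.step()
print(f"final combined loss: {loss.item():.4f}")
```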
Deep Learning-Based Blind Image Super-Resolution using Iterative Networks
Asfand Yaar, H. Ateş, B. Gunturk
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675367
Deep learning-based single image super-resolution (SR) consistently shows superior performance compared to traditional SR methods. However, most of these methods assume that the blur kernel used to generate the low-resolution (LR) image is known and fixed (e.g., bicubic). Since the blur kernels involved in real-life scenarios are complex and unknown, the performance of these SR methods is greatly reduced for real blurry images. Reconstruction of high-resolution (HR) images from randomly blurred and noisy LR images remains a challenging task. Typical blind SR approaches involve two sequential stages: i) kernel estimation; ii) SR image reconstruction based on the estimated kernel. However, due to the ill-posed nature of this problem, iterative refinement can be beneficial for both the kernel and the SR image estimates. Based on this observation, in this paper we propose an image SR method based on deep learning with iterative kernel estimation and image reconstruction. Simulation results show that the proposed method outperforms the state of the art in blind image SR and produces visually superior results as well.
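The alternating structure of blind SR (a kernel step followed by an image step, repeated) can be shown with a tiny 1-D least-squares example. The closed-form updates below on synthetic data only illustrate the control flow; the paper performs both stages with deep networks.

```python
# Alternating kernel/image refinement on a 1-D toy problem (not the paper's networks).
import numpy as np

rng = np.random.default_rng(0)
n, scale, ksize = 64, 2, 5
x_true = np.convolve(rng.standard_normal(n), np.ones(6) / 6, mode="same")  # smooth HR signal
k_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])                               # unknown blur kernel
y = np.convolve(x_true, k_true, mode="same")[::scale]                      # LR observation

def kernel_system(x):
    """Matrix mapping a kernel to the downsampled blurred samples of x."""
    A = np.zeros((len(y), ksize))
    for i in range(len(y)):
        for j in range(ksize):
            idx = i * scale + j - ksize // 2
            if 0 <= idx < len(x):
                A[i, j] = x[idx]
    return A

def image_system(k):
    """Matrix mapping an HR signal to downsampled blurred samples under kernel k."""
    B = np.zeros((len(y), n))
    for i in range(len(y)):
        for j in range(ksize):
            idx = i * scale + j - ksize // 2
            if 0 <= idx < n:
                B[i, idx] += k[j]
    return B

x_est = np.repeat(y, scale)[:n]                                        # crude initial HR guess
for it in range(10):
    k_est, *_ = np.linalg.lstsq(kernel_system(x_est), y, rcond=None)   # kernel step
    k_est = np.clip(k_est, 0, None); k_est /= max(k_est.sum(), 1e-8)   # keep it a valid kernel
    B = image_system(k_est)                                            # image step (ridge)
    x_est = np.linalg.solve(B.T @ B + 0.1 * np.eye(n), B.T @ y)
    print(f"iter {it}: kernel error {np.abs(k_est - k_true).sum():.3f}")
```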
Analysis of VVC Intra Prediction Block Partitioning Structure
Mário Saldanha, G. Sanchez, C. Marcon, L. Agostini
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675347
This paper presents an encoding time and encoding efficiency analysis of the Quadtree with nested Multi-type Tree (QTMT) structure in Versatile Video Coding (VVC) intra-frame prediction. The QTMT structure enables VVC to improve compression performance compared to its predecessor standard at the cost of higher encoding complexity. Intra-frame prediction time increases about 26-fold compared to the HEVC reference software, and most of this increase is related to the new block partitioning structure. Thus, this paper provides a detailed description of the VVC block partitioning structure and an in-depth analysis of the QTMT structure regarding coding time and coding efficiency. Based on the presented analyses, this paper can guide future work on the block partitioning of VVC intra-frame prediction.
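To give a feel for why the QTMT search dominates intra encoding time, the sketch below counts the candidate partitionings of a block under a deliberately simplified rule set (quadtree plus horizontal/vertical binary and ternary splits, with no quadtree below an MTT split); the real VVC constraints are more involved.

```python
# Count candidate QTMT partitionings of a block under simplified split rules.
from functools import lru_cache

MIN_SIZE = 8  # assumed minimum block dimension

@lru_cache(maxsize=None)
def count_partitions(w, h, mtt_only=False):
    total = 1                                                    # keep the block unsplit
    if not mtt_only and w == h and w // 2 >= MIN_SIZE:           # quadtree split
        total += count_partitions(w // 2, h // 2) ** 4
    if h // 2 >= MIN_SIZE:                                       # binary horizontal
        total += count_partitions(w, h // 2, True) ** 2
    if w // 2 >= MIN_SIZE:                                       # binary vertical
        total += count_partitions(w // 2, h, True) ** 2
    if h // 4 >= MIN_SIZE:                                       # ternary horizontal (1:2:1)
        total += count_partitions(w, h // 4, True) ** 2 * count_partitions(w, h // 2, True)
    if w // 4 >= MIN_SIZE:                                       # ternary vertical (1:2:1)
        total += count_partitions(w // 4, h, True) ** 2 * count_partitions(w // 2, h, True)
    return total

print("candidate partitionings of a 64x64 block:", count_partitions(64, 64))
```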
MAPS: Joint Multimodal Attention and POS Sequence Generation for Video Captioning
Cong Zou, Xuchen Wang, Yaosi Hu, Zhenzhong Chen, Shan Liu
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675348
Video captioning is considered challenging due to the combination of video understanding and text generation. Recent progress in video captioning has been made mainly using methods of visual feature extraction and sequential learning. However, the syntactic structure and semantic consistency of generated captions are not fully explored. Thus, in our work, we propose a novel multimodal attention based framework with Part-of-Speech (POS) sequence guidance to generate more accurate video captions. In general, the word sequence generation and POS sequence prediction are hierarchically and jointly modeled in the framework. Specifically, different modalities including visual, motion, object and syntactic features are adaptively weighted and fused with the POS-guided attention mechanism when computing the probability distributions of predicted words. Experimental results on two benchmark datasets, i.e., MSVD and MSR-VTT, demonstrate that the proposed method can not only fully exploit the information from video and text content, but also focus on the decisive feature modality when generating a word with a certain POS type. Thus, our approach boosts video captioning performance while also generating idiomatic captions.
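The sketch below shows one plausible form of POS-guided modality weighting: attention scores over the visual, motion, object and syntactic features are conditioned on an embedding of the POS tag about to be generated. Module sizes, shapes, and names are illustrative assumptions, not the paper's architecture.

```python
# POS-conditioned attention over modality features (illustrative shapes and sizes).
import torch
import torch.nn as nn

class PosGuidedFusion(nn.Module):
    def __init__(self, feat_dim=256, pos_dim=32):
        super().__init__()
        self.score = nn.Linear(feat_dim + pos_dim, 1)    # one relevance score per modality

    def forward(self, modality_feats, pos_emb):
        # modality_feats: (batch, n_modalities, feat_dim); pos_emb: (batch, pos_dim)
        pos = pos_emb.unsqueeze(1).expand(-1, modality_feats.size(1), -1)
        scores = self.score(torch.cat([modality_feats, pos], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)          # adaptive per-modality weights
        fused = (weights.unsqueeze(-1) * modality_feats).sum(dim=1)
        return fused, weights

fusion = PosGuidedFusion()
feats = torch.randn(2, 4, 256)       # visual, motion, object, syntactic features
pos = torch.randn(2, 32)             # embedding of the POS tag to be generated next
fused, w = fusion(feats, pos)
print(fused.shape, w[0])             # fused context vector and its modality weights
```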
Cross-Component Sample Offset for Image and Video Coding
Yixin Du, Xin Zhao, Shanchun Liu
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675355
Existing cross-component video coding technologies have shown great potential in improving coding efficiency. The fundamental insight of cross-component coding technology is to exploit the statistical correlations among different color components. In this paper, a Cross-Component Sample Offset (CCSO) approach for image and video coding is proposed, inspired by the observation that the luma component tends to contain more texture, while the chroma components are relatively smooth. The key component of CCSO is a non-linear offset mapping mechanism implemented as a look-up table (LUT). The inputs of the mapping are the co-located reconstructed samples of the luma component, and the outputs are offset values applied to the chroma components. The proposed method has been implemented on top of a recent version of libaom. Experimental results show that the proposed approach brings 1.16% Random Access (RA) BD-rate saving on top of AV1 with a marginal increase in encoding/decoding time.
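A minimal sketch of the LUT-based mapping is given below: each co-located reconstructed luma sample is quantized to a band index, the index selects an offset from a small table, and the offset is added to the reconstructed chroma sample. The banding rule and offset values are invented for illustration and do not reflect the libaom design.

```python
# Simplified cross-component sample offset: luma-driven LUT offsets applied to chroma.
import numpy as np

def apply_ccso(luma, chroma, lut, n_bands=8):
    # luma, chroma: arrays of the same (chroma-grid) size with 8-bit sample values
    band = (luma.astype(np.int32) * n_bands) // 256        # band index per sample
    offsets = lut[band]                                    # LUT lookup, one offset per sample
    return np.clip(chroma.astype(np.int32) + offsets, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
luma = rng.integers(0, 256, (4, 4), dtype=np.uint8)        # co-located reconstructed luma
chroma = rng.integers(0, 256, (4, 4), dtype=np.uint8)      # reconstructed chroma
lut = np.array([-3, -2, -1, 0, 0, 1, 2, 3])                # signalled per-band offsets
print(apply_ccso(luma, chroma, lut))
```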
Portable Congenital Glaucoma Detection System
Chunjun Hua, Menghan Hu, Yue Wu
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675423
Congenital glaucoma is an eye disease caused by embryonic developmental disorders, which damages the optic nerve. In this demo paper, we propose a portable non-contact congenital glaucoma detection system, which evaluates the condition of children's eyes by measuring the cornea size with the developed mobile application. The system consists of two modules: a cornea identification module and a diagnosis module. The system can be used by anyone with a smartphone, which makes it widely applicable, and it can serve as a convenient home self-examination tool for large-scale screening of congenital glaucoma in children. The demo video of the proposed detection system is available at: https://doi.org/10.6084/m9.figshare.14728854.v1.
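Purely as an illustration of automated cornea-size measurement (not the authors' pipeline), the sketch below detects a circular cornea-like region with a Hough transform and converts its diameter to millimetres using an assumed calibration factor and screening threshold.

```python
# Toy corneal-diameter estimate from a synthetic image (illustrative only).
import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 45, 255, thickness=3)           # synthetic "cornea" edge
blurred = cv2.GaussianBlur(img, (5, 5), 0)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=20, maxRadius=80)
if circles is not None:
    _, _, r = circles[0][0]
    mm_per_px = 0.13                                         # assumed calibration factor
    diameter_mm = 2 * r * mm_per_px
    flag = "refer for examination" if diameter_mm > 12.0 else "within typical range"
    print(f"estimated corneal diameter: {diameter_mm:.1f} mm -> {flag}")
```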
Urban Planter: A Web App for Automatic Classification of Urban Plants
Sarit Divekar, Irina Rabaev, Marina Litvak
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675318
Plant classification requires an expert because subtle differences in leaf or petal form can distinguish between species. Conversely, some species are characterized by high variability in appearance. This paper introduces a web app that assists people in identifying plants in order to discover the best growing methods. The uploaded picture is submitted to the back-end server, and a pre-trained neural network classifies it into one of the predefined classes. The classification label and confidence are displayed to the end user on the front-end page. The application focuses on house and garden plant species that are grown mainly in a desert climate and are not covered by existing datasets. To train the model, we collected the Urban Planter dataset. The installation code of the alpha version and the demo video of the app can be found at https://github.com/UrbanPlanter/urbanplanterapp.
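A minimal sketch of the upload-and-classify flow is shown below: the front end posts a photo, the back end runs a classifier, and the label and confidence are returned for display. The endpoint name, class list, and stub classifier are assumptions, not the app's actual code.

```python
# Back-end endpoint sketch: receive an uploaded photo, classify it, return label + confidence.
from io import BytesIO

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
CLASSES = ["aloe vera", "cactus", "snake plant"]          # placeholder label set

def classify(image):
    """Stub standing in for the pre-trained network: returns (label, confidence)."""
    return CLASSES[image.size[0] % len(CLASSES)], 0.87

@app.route("/predict", methods=["POST"])
def predict():
    photo = request.files["photo"]
    image = Image.open(BytesIO(photo.read())).convert("RGB")
    label, confidence = classify(image)
    return jsonify({"label": label, "confidence": confidence})  # rendered by the front end

if __name__ == "__main__":
    app.run()
```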