Item-based collaborative filtering recommendation systems have been widely used in many fields; they generate recommendations based on the similarity between items. However, the conventional similarity calculation may produce inaccurate results because of data sparsity. To alleviate this problem, this paper proposes a new method of similarity calculation based on item rating and genre. First, a similarity calculation based on item ratings is proposed, which reduces the similarity between items that share few co-rating users. Genre information is an inherent attribute of an item that cannot be changed by user behavior and reflects common characteristics among items, so item similarity is also calculated from each item's dependency on genre. Finally, a trade-off between rating similarity and genre similarity is used to calculate the final similarity between items. Experimental results show that the proposed method can alleviate the inaccurate similarity caused by sparse data and improve recommendation quality.
{"title":"Improvement of Similarity Coefficients Based on Item Rating and Item Genre","authors":"Xiao-Chuan Lin, Fei Zhang, Wei-Hui Jiang, Jia-Chen Liang","doi":"10.1109/ICWAPR48189.2019.8946453","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946453","url":null,"abstract":"Item-based collaborative filtering recommendation system has been widely used in many fields, which generates recommendations based on similarity between items. However, the conventional similarity calculation may produce inaccurate results because of data sparsity. To alleviate this problem, this paper proposes a new method of similarity calculation based on item rating and genre. Firstly, similarity calculation based on item rating are proposed, which reduces similarity between items with fewer co-rating users. Genre information is an inherent attribute of an item which could not be changed by user behavior. It reflects the common characteristics among items, then item similarity based on the item’s dependency on genre are calculated. Finally, a trade-off between rating and genre similarity are proposed to calaulate the similarity between items. Experimental results show that the proposed method can alleviate the issue of inaccurate similarity caused by sparse data and improve the recommendation quality.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"252 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115615975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946472
Lina Yang, Pu Wei, Xichun Li, Yuanyan Tang
In this paper, the authors discuss the application of an LSTM neural network to protein structure prediction. The main idea is to construct an LSTM neural network. Predicting the secondary structure of a protein is the basis for predicting its spatial structure. In this article, position-specific scoring matrices (PSSMs) containing evolutionary information are combined with other features to construct a completely new feature set. The CB513 data set is used to train LSTM neural networks that predict the secondary structure of the sequence. Experiments show that the proposed method effectively improves prediction accuracy and outperforms the previous method. The idea in this paper can also be applied to the analysis of other sequences.
{"title":"Application Of LSTM In Protein Structure Prediction LINA","authors":"Lina Yang, Pu Wei, Xichun Li, Yuanyan Tang","doi":"10.1109/ICWAPR48189.2019.8946472","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946472","url":null,"abstract":"In this paper the authors discuss the applications of LSTM Neural Network in Protein Structure Prediction. The main idea is to construct a LSTM neural network. Predicting the secondary structure of a protein is the basis content for predicting its spatial structure. In this article, a position-specific scoring matrices (PSSM) containing evolutionary information is linked to other features to construct a completely new feature set. The CB513 data set is selected to construct LSTM neural networks to predict the secondary structure of the sequence. Experiments have shown that the proposed method effectively improves the prediction accuracy and is better than the previous method. The idea in this paper can also be applied to the analysis of other sequences.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117137431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946452
Hiroaki Takeda, Teruya Minamoto
We propose herein a new feature extraction method based on the lifting wavelet transform for dysplasia detection in endoscopic images. In the proposed method, the input endoscopic image is converted into the hue-saturation-value color space, and the S-channel image is used. The pattern of the abnormal area is learned from this image using the Daubechies 2 (db2) lifting wavelet transform. The lifting wavelet transform is then applied to the image under inspection using the learned filter, and each frequency component is obtained. The detection image generated from the sum of the high-frequency components is divided into small blocks, and a static threshold is applied to obtain a binary image. The discrete wavelet transform is used to exclude smooth areas, and the V-channel image is used to exclude dark areas such as shadows. This emphasizes the contour of the abnormal part. Finally, based on the idea that the area surrounded by the contour is also abnormal, the Game of Life is applied in a limited manner to emphasize the abnormal area. We describe the feature extraction in detail and present experimental results demonstrating that our method is useful for dysplasia detection in endoscopic images.
{"title":"Detection Of Dysplasia From Endoscopic Images Using Daubechies 2 Wavelet Lifting Wavelet Transform","authors":"Hiroaki Takeda, Teruya Minamoto","doi":"10.1109/ICWAPR48189.2019.8946452","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946452","url":null,"abstract":"We propose herein a new feature extraction method based on the lifting wavelet transform for dysplasia detection from an endoscopic image. In the proposed method, the input endoscopic image is converted into the hue-saturation-value color space, and the S space image is used. The pattern of the abnormal area is learned from this image using Daubechies 2 (db2) wavelet lifting wavelet transform. The lifting wavelet transform is performed on the detected image using the learned filter. Each frequency component is obtained using this method. The detected image generated from the sum of the high-frequency components is divided into small blocks. A static threshold is determined herein to obtain a binary image. Discrete wavelet transform is used to exclude smooth areas. V space images are used to exclude dark areas, such as shadows. This emphasizes the contour of the abnormal part. Finally, from the idea that the area surrounded by the outline is also abnormal, the life game is limitedly applied to emphasize the abnormal area. We describe the feature extraction in detail and present the experimental results demonstrating that our method is useful for the development of dysplasia detection from an endoscopic image.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122644489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946476
L. Qiao, Manman Jia, Bin Wei, Ziqi Liu, Yao Qin
Spontaneous biophoton emission (BPE) signals of wheat are strongly nonlinear and non-Gaussian, so traditional time-frequency analysis methods cannot analyze them effectively. This study uses the bispectral analysis technique to process BPE signals for the first time and extracts the high-order spectrum distribution characteristics of normal wheat and infected wheat. By estimating the bispectrum, the slice bispectrum, and the characteristic parameters of the diagonal slice spectrum and the horizontal slice spectrum, the bispectral distribution characteristics of the spontaneous BPE signals of normal wheat and insect-infected wheat are obtained. Bispectral analysis can not only eliminate the interference of Gaussian noise but also elucidate the amplitude and phase information of the signal. Experiments show that the extracted parameters of the BPE signals yield a detailed spectral distribution and reveal differences between infected wheat and normal wheat. The results of this study provide a comprehensive description of the characteristics of infected wheat and an experimental and theoretical basis for the detection of insects in grain.
{"title":"Frequency Characteristics Extraction of Infected Wheat BPE Signals Based on Bispectrum Analysis and High-Order Spectrum Distribution","authors":"L. Qiao, Manman Jia, Bin Wei, Ziqi Liu, Yao Qin","doi":"10.1109/ICWAPR48189.2019.8946476","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946476","url":null,"abstract":"Spontaneous biophoton emission (BPE) signals of wheat have strong nonlinear and non-Gaussian, the traditional time-frequency analysis method cannot effectively analyze the spontaneous BPE signals of wheat. This study uses the bispectral analysis technique to process BPE signals for the first time and extracts the high-order spectrum distribution characteristics of normal wheat and infected wheat. By estimating the bispectrum, the slice bispectrum, and the characteristic parameters of the diagonal slice spectrum and the horizontal slice spectrum, the bispectral distribution characteristics of the spontaneous BPE signal of normal wheat and wheat that has been infected by insects are obtained. Bispectral analysis can not only eliminate the interference of Gaussian noise, but also elucidate the amplitude and phase information of the signal. Experiments show that the extracted parameters of the BPE signals yield a detailed spectral distribution and show differences between infected wheat and normal wheat. The results of this study provide a comprehensive description of the characteristics of infected wheat and provide an experimental and theoretical basis for the detection of insects in grain.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"342 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133770995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946470
JinXin Fan, K. U
In this paper, a novel time-domain zero-watermarking algorithm based on Non-Uniform Triangular Partition (NTP) is proposed. NTP is an image representation method in which bivariate polynomials are computed to represent the pixel values in each triangular region under a set control error. The number of triangles in each $8\times 8$ region is counted and recorded as a feature matrix, which is then mapped to a binary watermark scrambled by the Arnold method to enhance the security of the algorithm. The feature matrix and the scrambled binary watermark are stored as the zero watermark. Experimental results show that the proposed algorithm is robust to various attacks, such as JPEG compression, rotation, Gaussian noise, and salt-and-pepper noise.
{"title":"A Novel Image Zero-Watermarking Scheme Based on Non-Uniform Triangular Partition","authors":"JinXin Fan, K. U","doi":"10.1109/ICWAPR48189.2019.8946470","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946470","url":null,"abstract":"In this paper, a novel time-domain zero-watermarking algorithm based on Non-Uniform Triangular Partition (NTP) is proposed. NTP is an image representation method in which the bivariate polynomials are calculated and used to represent the pixel values in each triangular region under a set control error. The number of triangles in each $8times 8$ region is counted and recorded as a feature matrix which will be mapped to a binary watermark scrambling by the Arnold scrambling method to enhance the security of the algorithm. The feature matrix and the scrambled binary watermark are stored as zero watermarks. Experimental results show that the proposed algorithm is robust to various attacks, such as JPEG compression, rotation, Gaussian noise and salt and pepper noise.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114892594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946451
Hailong Su, Lina Yang, Yuanyan Tang, Huiwu Luo
Traditional wavelet transform-based methods process decomposition coefficients in a high-dimensional space, which makes computation complicated. To address this problem, this paper proposes a novel approach named wavelet transform-based one-dimensional manifold embedding (WT1DME) for HSI classification. In the proposed approach, the wavelet transform first decomposes the input signal into approximation coefficients (ACs). Then, smooth ordering is applied to the ACs, which maps the coefficients into a one-dimensional (1-D) space. Finally, because the coefficients lie in this 1-D space, 1-D signal processing tools can be applied to build the final classifier (interpolation is used in this paper). The proposed method processes the decomposition coefficients in 1-D space and therefore runs efficiently. The scheme is experimentally demonstrated on two HSI data sets, Indian Pines and University of Pavia, and achieves state-of-the-art results.
{"title":"Wavelet Transform-Based One Dimensional Manifold Embedding For Hyperspectral Image Classification","authors":"Hailong Su, Lina Yang, Yuanyan Tang, Huiwu Luo","doi":"10.1109/ICWAPR48189.2019.8946451","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946451","url":null,"abstract":"Traditional wavelet transform-based methods process decompose coefficient in high-dimensional, which makes computational complicated. In order to address this problem, in this paper, a novel approach named wavelet transform-based one dimensional manifold embedding (WT1DME) is proposed for HSI classification. In the proposed approach, firstly, using wavelet transform decomposes the input signal into an approximate coefficients (ACs). Then, smooth ordering is applied to the ACs which maps the coefficients into one-dimensional (1-D) space. Finally, since the coefficients in the 1-D space, hence, 1-D signal processing tools can be applied to build final classifier(we utilize interpolation in this paper). Our proposed methods can be used to process the decompose coefficients in 1-D space, which can perform efficiently. The proposed scheme is experimentally demonstrated by two HSI data sets: IndianPines, University of Pavia has the state-of-the-art performance of results.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117132898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946460
Yi Zhu, Wendi Li, Ting Wang, Junwen Li, NG Wing W. Y.
The most critical step in license plate recognition is identifying the individual character images from the segmented license plate image. Conventional character recognition methods, including the Support Vector Machine (SVM) and neural networks, require training with many license plate images. However, the amount of training data is limited and there are many unseen situations, so the generalization capability of a trained classifier is usually limited. If the license plate image is seriously distorted, due to either weather conditions or the technical conditions of photographing, the accuracy of these methods is greatly reduced. Therefore, a robust license plate recognition method is proposed using a Radial Basis Function Neural Network (RBFNN) trained via minimization of the localized generalization error model (L-GEM). The L-GEM provides an upper bound on the generalization capability of an RBFNN with respect to a given training data set. Therefore, the trained RBFNN yields better generalization capability and a higher recognition rate on new, unseen samples. Experimental results show that RBFNNs trained by minimizing the L-GEM consistently yield the highest accuracy in diversified situations, such as rainy and snowy conditions.
{"title":"License Plate Recognition in Diversified Situations Using Robust L-GEM-Based RBFNN","authors":"Yi Zhu, Wendi Li, Ting Wang, Junwen Li, NG Wing W. Y.","doi":"10.1109/ICWAPR48189.2019.8946460","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946460","url":null,"abstract":"The most critical step in license plate recognition tasks is the identification of individual character image from the license plate image segments. Conventional methods of recognizing a character including Support Vector Machine (SVM) and neural network require the training using many license plate images. However, the amount of training data is limited and there are many unseen situations, where the generalization capability of a trained classifier is usually limited. If the license plate image distortion is serious due to either weather conditions or technical reasons of photographing, accuracy of these methods will be greatly reduced. Therefore a robust license plate recognition method is proposed using a Radial Basis Function Neural Network (RBFNN) trained via a minimization of the localized generalization error model (L-GEM). The L-GEM provides the upper bound of the generalization capability of an RBFNN with respect to a given training data set. Therefore, the trained RBFNN yields a better generalization capability and a higher recognition rate for new unseen samples. Experimental results show that RBFNNs trained by minimizing the L-GEM always yield the highest accuracy in diversified situations, such as rainy and snowy conditions.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128556888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946465
Chen-Jian Wang, Hong Li, Y. Tang
Due to factors such as low spatial resolution, microscopic material mixing, and multiple scattering, hyperspectral images generally suffer from mixed pixels. This paper proposes two network structures under the deep learning framework that can be applied to hyperspectral unmixing: 1) an architecture based on spectral information, which uses a fully connected neural network with the spectral vector as the input for unmixing; and 2) an architecture based on spatial-spectral information, which further incorporates convolutional neural networks to fuse the spatial and spectral information of the hyperspectral image for unmixing. Experiments on a simulated dataset and a real dataset show the effectiveness of our approach.
{"title":"Hyperspectral Unmixing Using Deep Learning","authors":"Chen-Jian Wang, Hong Li, Y. Tang","doi":"10.1109/ICWAPR48189.2019.8946465","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946465","url":null,"abstract":"Due to factors such as low spatial resolution, microscopic material mixing, and multiple scattering, hyperspectral images generally have problems with mixed pixels. This paper proposes two network structures under the framework of deep learning, which can be well applied to hyperspectral unmixing: 1) network architecture based on spectral information, the architecture uses a fully connected neural network and the spectral vector is used as an input for unmixing; 2) network architecture based on spatial-spectral information, the architecture further combines the convolutional neural networks to fuse the spatial information and spectral information of the hyperspectral image for unmixing. Experiments on simulated dataset and real dataset show the efficiency of our approach.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129144015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many people shop on Taobao, Jingdong, and other online platforms in China, and more and more products are advertised through self-media. Shopping guide text is an effective means of improving the effectiveness of advertising. However, self-media companies need to hire many professional writers to produce shopping guide text, which leads to high labor costs. In this paper, we propose a shopping guide text generator that automatically generates shopping guide text from an image of the product, focusing on shopping guide text for clothes. The proposed generator consists of a convolutional neural network, a recurrent neural network with long short-term memory (LSTM) over the shopping guide text, and a structured module that evaluates the degree of relatedness between the image and the shopping guide text. The experimental results show that the proposed system can generate attractive text to advertise the given clothes.
{"title":"A Shopping Guide Text Generation System Based on Deep Neural Network","authors":"Shilin Xu, Zhimin He, Junjian Su, Liangsheng Zhong, Yue Xu, Huimin Gu, Yubing Huang","doi":"10.1109/ICWAPR48189.2019.8946478","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946478","url":null,"abstract":"Many people shop on Taobao, Jingdong and other online platforms in China. More and more products are advertised through self-media. Shopping guide text is an effective means to improve the effectiveness of advertising. However, self-media companies need to hire a lot of professional writer to write shopping guide text, which leads to high labor cost. In this paper, we proposed a shopping guide text generator, which can automatically generate shopping guide text given an image of the product. In this paper, we focus on the shopping guide text generation of clothes. The proposed text generator consists of a convolutional neural network, a recurrent neural network with long-short-term-memory (LSTM) over shopping guide text, and a structured module which evaluates the related degree between the image and shopping guide text. The experimental results show that the proposed shopping guide text generation system can generate attractive text to advertise the given clothes.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116591652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-07-01 | DOI: 10.1109/ICWAPR48189.2019.8946487
A. Morimoto, R. Ashino, T. Mandai
Image separation problems, in which the observed images are weighted superpositions of translated and rotated original images, are considered. Algorithms for estimating fine relative translation parameters and relative mixing coefficients are proposed. Numerical experiments show that the proposed algorithms work well.
{"title":"An Estimation of Mixing Coefficients in Image Separation Problem Using Multiwavelet Transforms","authors":"A. Morimoto, R. Ashino, T. Mandai","doi":"10.1109/ICWAPR48189.2019.8946487","DOIUrl":"https://doi.org/10.1109/ICWAPR48189.2019.8946487","url":null,"abstract":"Image separation problems, where observed images are weighted superpositions of translations and rotations of original images, are considered. The Algorithms for estimating fine relative translation parameters and relative mixing coefficients are proposed. Numerical experiments show that the proposed Algorithms work well.","PeriodicalId":436840,"journal":{"name":"2019 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115685282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}