Fast Non-minimal Solvers for Planar Motion Compatible Homographies
Pub Date: 2019-02-19 | DOI: 10.5220/0007258600400051
Marcus Valtonen Örnhag
This paper presents a novel polynomial constraint for homographies compatible with the general planar motion model. In this setting, compatible homographies have five degrees of freedom, instead of the eight of the general case, and, as a consequence, a minimal solver requires 2.5 point correspondences. The existing minimal solver, however, is computationally expensive, and we propose using non-minimal solvers, which significantly reduces the execution time of obtaining a compatible homography, with accuracy and robustness comparable to those of the minimal solver. The proposed solvers are compared with the minimal solver and the traditional 4-point solver on synthetic and real data, and demonstrate good performance in terms of speed and accuracy. By decomposing the homographies obtained from the different methods, it is shown that the proposed solvers have the potential to be incorporated into a complete Simultaneous Localization and Mapping (SLAM) framework.
{"title":"Fast Non-minimal Solvers for Planar Motion Compatible Homographies","authors":"Marcus Valtonen Örnhag","doi":"10.5220/0007258600400051","DOIUrl":"https://doi.org/10.5220/0007258600400051","url":null,"abstract":"This paper presents a novel polynomial constraint for homographies compatible with the general planar motion model. In this setting, compatible homographies have five degrees of freedom-instead of the general case of eight degrees of freedom-and, as a consequence, a minimal solver requires 2.5 point correspondences. The existing minimal solver, however, is computationally expensive, and we propose using non-minimal solvers, which significantly reduces the execution time of obtaining a compatible homography, with accuracy and robustness comparable to that of the minimal solver. The proposed solvers are compared with the minimal solver and the traditional 4-point solver on synthetic and real data, and demonstrate good performance, in terms of speed and accuracy. By decomposing the homographies obtained from the different methods, it is shown that the proposed solvers have future potential to be incorporated in a complete Simultaneous Localization and Mapping (SLAM) framework. (Less)","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122061520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Removal of Historical Document Degradations using Conditional GANs
Pub Date: 2019-02-19 | DOI: 10.5220/0007367701450154
Veeru Dumpala, Sheela Raju Kurupathi, S. S. Bukhari, A. Dengel
One of the most crucial problems in the document analysis and OCR pipeline is document binarization. Many traditional algorithms from the past few decades, such as Sauvola, Niblack and Otsu, were used for binarization but gave insufficient results for historical texts with degradations. Recently, many attempts have been made to solve binarization using deep learning approaches such as autoencoders and fully convolutional networks (FCNs). However, these models do not generalize well qualitatively to real-world historical document images. In this paper, we propose a model based on conditional GANs, which are well known for high-resolution image synthesis. Here, the proposed model is used for an image manipulation task that removes different degradations in historical documents, such as stains, bleed-through and non-uniform shading. The proposed model outperforms recent state-of-the-art models for document image binarization. We support our claims by benchmarking the proposed model on the publicly available PHIBC 2012, DIBCO (2009-2017) and Palm Leaf datasets. The main objective of this paper is to illuminate the advantages of generative modeling and adversarial training for document image binarization in a supervised setting, which shows good generalization capabilities on different inter- and intra-class domain document images.
{"title":"Removal of Historical Document Degradations using Conditional GANs","authors":"Veeru Dumpala, Sheela Raju Kurupathi, S. S. Bukhari, A. Dengel","doi":"10.5220/0007367701450154","DOIUrl":"https://doi.org/10.5220/0007367701450154","url":null,"abstract":"One of the most crucial problem in document analysis and OCR pipeline is document binarization. Many traditional algorithms over the past few decades like Sauvola, Niblack, Otsu etc,. were used for binarization which gave insufficient results for historical texts with degradations. Recently many attempts have been made to solve binarization using deep learning approaches like Autoencoders, FCNs. However, these models do not generalize well to real world historical document images qualitatively. In this paper, we propose a model based on conditional GAN, well known for its high-resolution image synthesis. Here, the proposed model is used for image manipulation task which can remove different degradations in historical documents like stains, bleed-through and non-uniform shadings. The performance of the proposed model outperforms recent state-of-the-art models for document image binarization. We support our claims by benchmarking the proposed model on publicly available PHIBC 2012, DIBCO (2009-2017) and Palm Leaf datasets. The main objective of this paper is to illuminate the advantages of generative modeling and adversarial training for document image binarization in supervised setting which shows good generalization capabilities on different inter/intra class domain document images.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130247200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering Honeybees by Its Daily Activity
Pub Date: 2019-02-19 | DOI: 10.5220/0007387505980604
E. Acuña, Velcy Palomino, José L. Agosto, R. Mégret, T. Giray, A. Prado, C. Alaux, Y. Conte
In this work, we analyze the activity of bees starting at 6 days old. The data was collected at INRA (France) during 2014 and 2016. The activity is counted according to whether the bees enter or leave the hive. After data wrangling, we decided to analyze data corresponding to a period of 10 days. We use a clustering method to determine bees with similar activity and to estimate the time of day when the bees are most active. To achieve our objective, the data was analyzed at three different time resolutions within a day: first considering the daily activity in two periods (morning and afternoon), then in 3-hour periods from 8:00am to 8:00pm, and finally hourly from 8:00am to 8:00pm. Our study found two clusters of bees, and in one of them the bee activity clearly increased on day 5. The smaller cluster included the most active bees, representing about 24 percent of the bees under study. Also, the highest activity of the bees was registered between 2:00pm and 3:00pm. A chi-square test shows that there is a combined Treatment × Colony effect on the cluster formation.
{"title":"Clustering Honeybees by Its Daily Activity","authors":"E. Acuña, Velcy Palomino, José L. Agosto, R. Mégret, T. Giray, A. Prado, C. Alaux, Y. Conte","doi":"10.5220/0007387505980604","DOIUrl":"https://doi.org/10.5220/0007387505980604","url":null,"abstract":"In this work, we analyze the activity of bees starting at 6 days old. The data was collected at the INRA (France) during 2014 and 2016. The activity is counted according to whether the bees enter or leave the hive. After data wrangling, we decided to analyze data corresponding to a period of 10 days. We use clustering method to determine bees with similar activity and to estimate the time during the day when the bees are most active. To achieve our objective, the data was analyzed in three different time periods in a day. One considering the daily activity during in two periods: morning and afternoon, then looking at activities in periods of 3 hours from 8:00am to 8:00pm and, finally looking at the activities hourly from 8:00am to 8:00pm. Our study found two clusters of bees and in one of them clearly the bees activity increased at the day 5. The smaller cluster included the most active bees representing about 24 percent of the total bees under study. Also, the highest activity of the bees was registered between 2:00pm until 3:00pm. A Chi-square test shows that there is a combined effect Treatment× Colony on the clusters formation.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130951441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning for Relevance Filtering in Syndromic Surveillance: A Case Study in Asthma/Difficulty Breathing
Pub Date: 2019-02-19 | DOI: 10.5220/0007366904910500
Oduwa Edo-Osagie, B. Iglesia, I. Lake, O. Edeghere
In this paper, we investigate deep learning methods that can extract word context for Twitter mining for syndromic surveillance. Most of the work on syndromic surveillance has been done on flu or Influenza-Like Illnesses (ILIs). For this reason, we decided to look at a different but equally important syndrome, asthma/difficulty breathing, as this is quite topical given global concerns about the impact of air pollution. We also compare deep learning algorithms for the purpose of filtering Tweets relevant to our syndrome of interest, asthma/difficulty breathing. We make our comparisons using different variants of the F-measure as our evaluation metric because they allow us to emphasise recall over precision, which is important in the context of syndromic surveillance so that we do not lose relevant Tweets in the classification. We then apply our relevance filtering systems, based on deep learning algorithms, to the task of syndromic surveillance and compare the results with real-world syndromic surveillance data provided by Public Health England (PHE). We find that the RNN performs best at relevance filtering but can also be slower than other architectures, which is an important consideration for real-time application. We also found that the correlation between Twitter and the real-world asthma syndromic surveillance data was positive and improved with the use of the deep-learning-powered relevance filtering. Finally, the deep learning methods enabled us to gather context and word similarity information which we can use to fine-tune the vocabulary we employ to extract relevant Tweets in the first place.
{"title":"Deep Learning for Relevance Filtering in Syndromic Surveillance: A Case Study in Asthma/Difficulty Breathing","authors":"Oduwa Edo-Osagie, B. Iglesia, I. Lake, O. Edeghere","doi":"10.5220/0007366904910500","DOIUrl":"https://doi.org/10.5220/0007366904910500","url":null,"abstract":"In this paper, we investigate deep learning methods that may extract some word context for Twitter mining for syndromic surveillance. Most of the work on syndromic surveillance has been done on the flu or Influenza- Like Illnesses (ILIs). For this reason, we decided to look at a different but equally important syndrome, asthma/difficulty breathing, as this is quite topical given global concerns about the impact of air pollution. We also compare deep learning algorithms for the purpose of filtering Tweets relevant to our syndrome of interest, asthma/difficulty breathing. We make our comparisons using different variants of the F-measure as our evaluation metric because they allow us to emphasise recall over precision, which is important in the context of syndromic surveillance so that we do not lose relevant Tweets in the classification. We then apply our relevance filtering systems based on deep learning algorithms, to the task of syndromic surveillance and compare the results with real-world syndromic surveillance data provided by Public Health England (PHE).We find that the RNN performs best at relevance filtering but can also be slower than other architectures which is important for consideration in real-time application. We also found that the correlation between Twitter and the real-world asthma syndromic surveillance data was positive and improved with the use of the deep- learning-powered relevance filtering. Finally, the deep learning methods enabled us to gather context and word similarity information which we can use to fine tune the vocabulary we employ to extract relevant Tweets in the first place.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114897218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Exploration of a UAVs Swarm for Distributed Targets Detection and Tracking
Pub Date: 2019-02-19 | DOI: 10.5220/0007581708370844
M. Cimino, M. Lega, Manilo Monaco, G. Vaglini
This paper focuses on the problem of coordinating multiple UAVs for distributed target detection and tracking in different technological and environmental settings. The proposed approach is founded on the concept of swarm behavior in multi-agent systems, i.e., a self-formed and self-coordinated team of UAVs that adapts itself to mission-specific environmental layouts. The swarm formation and coordination are inspired by the biological mechanisms of flocking and stigmergy, respectively. These mechanisms, suitably combined, make it possible to strike the right balance between global search (exploration) and local search (exploitation) in the environment. The swarm adaptation is based on an evolutionary algorithm with the objective of maximizing the number of tracked targets during a mission or minimizing the time to target discovery. A simulation testbed has been developed and publicly released, based on commercially available UAV technology and real-world scenarios. Experimental results show that the proposed approach extends and noticeably outperforms a similar approach in the literature.
{"title":"Adaptive Exploration of a UAVs Swarm for Distributed Targets Detection and Tracking","authors":"M. Cimino, M. Lega, Manilo Monaco, G. Vaglini","doi":"10.5220/0007581708370844","DOIUrl":"https://doi.org/10.5220/0007581708370844","url":null,"abstract":"This paper focuses on the problem of coordinating multiple UAVs for distributed targets detection and tracking, in different technological and environmental settings. The proposed approach is founded on the concept of swarm behavior in multi-agent systems, i.e., a self-formed and self-coordinated team of UAVs which adapts itself to mission-specific environmental layouts. The swarm formation and coordination are inspired by biological mechanisms of flocking and stigmergy, respectively. These mechanisms, suitably combined, make it possible to strike the right balance between global search (exploration) and local search (exploitation) in the environment. The swarm adaptation is based on an evolutionary algorithm with the objective of maximizing the number of tracked targets during a mission or minimizing the time for target discovery. A simulation testbed has been developed and publicly released, on the basis of commercially available UAVs technology and real-world scenarios. Experimental results show that the proposed approach extends and sensibly outperforms a similar approach in the literature.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126793747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3DCNN Performance in Hand Gesture Recognition Applied to Robot Arm Interaction
Pub Date: 2019-02-19 | DOI: 10.5220/0007570208020806
John Alejandro Castro-Vargas, Brayan S. Zapata-Impata, P. Gil, J. G. Rodríguez, Fernando Torres Medina
This work was funded by the Ministry of Economy, Industry and Competitiveness of the Spanish Government through DPI2015-68087-R and the pre-doctoral grant BES-2016-078290, and by the European Commission and FEDER funds through the project COMMANDIA (SOE2/P1/F0638), an action supported by Interreg-V Sudoe.
Annealing by Increasing Resampling
Pub Date: 2019-02-19 | DOI: 10.1007/978-3-030-40014-9_4
N. Higuchi, Yasunobu Imamura, T. Shinohara, K. Hirata, T. Kuboyama
{"title":"Annealing by Increasing Resampling","authors":"N. Higuchi, Yasunobu Imamura, T. Shinohara, K. Hirata, T. Kuboyama","doi":"10.1007/978-3-030-40014-9_4","DOIUrl":"https://doi.org/10.1007/978-3-030-40014-9_4","url":null,"abstract":"","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122170862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Template based Human Pose and Shape Estimation from a Single RGB-D Image
Pub Date: 2019-02-19 | DOI: 10.5220/0007383605740581
Zhongguo Li, A. Heyden, M. Oskarsson
Estimating a 3D model of the human body is needed for many applications. However, this is a challenging problem, since the human body inherently has high complexity due to self-occlusions and articulation. We present a method to reconstruct a 3D human body model from a single RGB-D image. 2D joint points are first predicted by a CNN-based model, the convolutional pose machine, and the 3D joint points are computed using the depth image. We then propose to utilize both the 2D and 3D joint points, which provide more information, to fit a parametric body model (SMPL). This is implemented by minimizing an objective function that measures the difference between the joint points of the observed model and those of the parametric model. The pose and shape parameters of the body are obtained through optimization, and the final 3D model is estimated. Experiments on synthetic and real data demonstrate that our method can estimate the 3D human body model correctly.
{"title":"Template based Human Pose and Shape Estimation from a Single RGB-D Image","authors":"Zhongguo Li, A. Heyden, M. Oskarsson","doi":"10.5220/0007383605740581","DOIUrl":"https://doi.org/10.5220/0007383605740581","url":null,"abstract":"Estimating the 3D model of the human body is needed for many applications. However, this is a challenging problem since the human body inherently has a high complexity due to self-occlusions and articulation. We present a method to reconstruct the 3D human body model from a single RGB-D image. 2D joint points are firstly predicted by a CNN-based model called convolutional pose machine, and the 3D joint points are calculated using the depth image. Then, we propose to utilize both 2D and 3D joint points, which provide more information, to fit a parametric body model (SMPL). This is implemented through minimizing an objective function, which measures the difference of the joint points between the observed model and the parametric model. The pose and shape parameters of the body are obtained through optimization and the final 3D model is estimated. The experiments on synthetic data and real data demonstrate that our method can estimate the 3D human body model correctly.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"31 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131594744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of Diseases in Corn Leaves using Convolutional Neural Networks and Boosting
Pub Date: 2019-02-19 | DOI: 10.5220/0007687608940899
Prakruti V. Bhatt, Sanat Sarangi, Anshul Shivhare, Dineshkumar Singh, S. Pappula
Precision farming technologies are essential for a steady supply of healthy food for the increasing population around the globe. Pests and diseases remain a major threat, and a large fraction of crops is lost to them each year. Automated detection of crop health from images helps in taking timely actions to increase yield while reducing input cost. With the aim of detecting crop diseases and pests with high confidence, we use convolutional neural networks (CNNs) and boosting techniques on corn leaf images in different health states. Corn, the queen of cereals, is a versatile crop that has adapted to various climatic conditions and is one of the major food crops in India along with wheat and rice. Considering that different diseases might have different treatments, incorrect detection can lead to incorrect remedial measures. Although CNN-based models have been used for classification tasks, we aim to classify similar-looking disease manifestations with higher accuracy than existing deep learning methods. We evaluate ensembles of CNN-based image features with a classifier and boosting for plant disease classification. Using Adaptive Boosting cascaded with a decision-tree-based classifier trained on CNN features, we achieve an accuracy of 98% in classifying corn leaf images into four categories: Healthy, Common Rust, Late Blight and Leaf Spot. This is about an 8% improvement in classification performance compared to the CNN alone.
{"title":"Identification of Diseases in Corn Leaves using Convolutional Neural Networks and Boosting","authors":"Prakruti V. Bhatt, Sanat Sarangi, Anshul Shivhare, Dineshkumar Singh, S. Pappula","doi":"10.5220/0007687608940899","DOIUrl":"https://doi.org/10.5220/0007687608940899","url":null,"abstract":"Precision farming technologies are essential for a steady supply of healthy food for the increasing population around the globe. Pests and diseases remain a major threat and a large fraction of crops are lost each year due to them. Automated detection of crop health from images helps in taking timely actions to increase yield while helping reduce input cost. With an aim to detect crop diseases and pests with high confidence, we use convolutional neural networks (CNN) and boosting techniques on Corn leaf images in different health states. The queen of cereals, Corn, is a versatile crop that has adapted to various climatic conditions. It is one of the major food crops in India along with wheat and rice. Considering that different diseases might have different treatments, incorrect detection can lead to incorrect remedial measures. Although CNN based models have been used for classification tasks, we aim to classify similar looking disease manifestations with a higher accuracy compared to the one obtained by existing deep learning methods. We have evaluated ensembles of CNN based image features, with a classifier and boosting in order to achieve plant disease classification. Using an ensemble of Adaptive Boosting cascaded with a decision tree based classifier trained on features from CNN, we have achieved an accuracy of 98% in classifying the Corn leaf images into four different categories viz. Healthy, Common Rust, Late Blight and Leaf Spot. This is about 8% improvement in classification performance when compared to CNN only.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130211515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bipartite Edge Correlation Clustering: Finding an Edge Biclique Partition from a Bipartite Graph with Minimum Disagreement
Pub Date: 2019-02-19 | DOI: 10.5220/0007471506990706
Mikio Mizukami, K. Hirata, T. Kuboyama
In this paper, we first formulate the problem of bipartite edge correlation clustering, which finds an edge biclique partition with minimum disagreement from a bipartite graph, by extending bipartite correlation clustering, which finds a biclique partition. We then design a simple randomized algorithm for bipartite edge correlation clustering, based on the randomized algorithm for bipartite correlation clustering. Finally, we give experimental results evaluating the algorithms on both artificial and real data.
{"title":"Bipartite Edge Correlation Clustering: Finding an Edge Biclique Partition from a Bipartite Graph with Minimum Disagreement","authors":"Mikio Mizukami, K. Hirata, T. Kuboyama","doi":"10.5220/0007471506990706","DOIUrl":"https://doi.org/10.5220/0007471506990706","url":null,"abstract":"In this paper, first we formulate the problem of a bipartite edge correlation clustering which finds an edge biclique partition with the minimum disagreement from a bipartite graph, by extending the bipartite correlation clustering which finds a biclique partition. Then, we design a simple randomized algorithm for bipartite edge correlation clustering, based on the randomized algorithm of bipartite correlation clustering. Finally, we give experimental results to evaluate the algorithms from both artificial data and real data.","PeriodicalId":410036,"journal":{"name":"International Conference on Pattern Recognition Applications and Methods","volume":"308 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127076299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}