Modifying network results is the most intuitive way to inject domain knowledge into network detection algorithms to improve their performance. While advances in computational scalability have made detecting large-scale networks possible, the human ability to modify such networks has not scaled accordingly, resulting in a substantial 'interaction gap'. Most existing works only support navigating and modifying edges one by one in a graph visualization, which imposes a significant interaction burden when faced with large-scale networks. In this work, we propose a novel graph pattern mining algorithm based on the minimum description length (MDL) principle to partition and summarize multi-feature and isomorphic sub-graph matches. The mined sub-graph patterns can then serve as intermediaries for modifying large-scale networks. Combining two traditional approaches, we introduce a new coarse-middle-fine graph modification paradigm (i.e. query graph-based modification → sub-graph pattern-based modification → raw edge-based modification). We further present a graph modification system that supports this paradigm and improves the scalability of modifying detected large-scale networks. We evaluate the performance of our graph pattern mining algorithm through an experimental study, demonstrate the usefulness of our system through a case study, and illustrate the efficiency of our graph modification paradigm through a user study.
{"title":"A High-Scalability Graph Modification System for Large-Scale Networks","authors":"Shaobin Xu, Minghui Sun, Jun Qin","doi":"10.1111/cgf.15191","DOIUrl":"10.1111/cgf.15191","url":null,"abstract":"<p>Modifying network results is the most intuitive way to inject domain knowledge into network detection algorithms to improve their performance. While advances in computation scalability have made detecting large-scale networks possible, the human ability to modify such networks has not scaled accordingly, resulting in a huge ‘interaction gap’. Most existing works only support navigating and modifying edges one by one in a graph visualization, which causes a significant interaction burden when faced with large-scale networks. In this work, we propose a novel graph pattern mining algorithm based on the minimum description length (MDL) principle to partition and summarize multi-feature and isomorphic sub-graph matches. The mined sub-graph patterns can be utilized as mediums for modifying large-scale networks. Combining two traditional approaches, we introduce a new coarse-middle-fine graph modification paradigm (<i>i.e</i>. query graph-based modification <span></span><math></math> sub-graph pattern-based modification <span></span><math></math> raw edge-based modification). We further present a graph modification system that supports the graph modification paradigm for improving the scalability of modifying detected large-scale networks. We evaluate the performance of our graph pattern mining algorithm through an experimental study, demonstrate the usefulness of our system through a case study, and illustrate the efficiency of our graph modification paradigm through a user study.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Freehand sketch-to-image (S2I) synthesis is a challenging task due to the individualized lines and random shapes of freehand sketches. The multi-class freehand sketch-to-image synthesis task, in turn, presents new challenges for this research area: it requires not only handling the problems posed by freehand sketches but also analysing multi-class domain differences within a single model. However, existing methods often have difficulty learning domain differences between multiple classes, and cannot generate controllable and appropriate textures while maintaining shape stability. In this paper, we propose a style-guided multi-class freehand sketch-to-image synthesis model, SMFS-GAN, which can be trained using only unpaired data. To this end, we introduce a contrast-based style encoder that optimizes the network's perception of domain disparities by explicitly modelling the differences between classes and thus extracting style information across domains. Further, to refine the fine-grained texture of the generated results and their shape consistency with the freehand sketches, we propose a local texture refinement discriminator and a Shape Constraint Module, respectively. In addition, to address the class imbalance in the QMUL-Sketch dataset, we manually draw an additional 6K images to obtain the QMUL-Sketch+ dataset. Extensive experiments on the SketchyCOCO Object, QMUL-Sketch+ and Pseudosketches datasets demonstrate the effectiveness and superiority of the proposed method.
{"title":"SMFS-GAN: Style-Guided Multi-class Freehand Sketch-to-Image Synthesis","authors":"Zhenwei Cheng, Lei Wu, Xiang Li, Xiangxu Meng","doi":"10.1111/cgf.15190","DOIUrl":"10.1111/cgf.15190","url":null,"abstract":"<p>Freehand sketch-to-image (S2I) is a challenging task due to the individualized lines and the random shape of freehand sketches. The multi-class freehand sketch-to-image synthesis task, in turn, presents new challenges for this research area. This task requires not only the consideration of the problems posed by freehand sketches but also the analysis of multi-class domain differences in the conditions of a single model. However, existing methods often have difficulty learning domain differences between multiple classes, and cannot generate controllable and appropriate textures while maintaining shape stability. In this paper, we propose a style-guided multi-class freehand sketch-to-image synthesis model, SMFS-GAN, which can be trained using only unpaired data. To this end, we introduce a contrast-based style encoder that optimizes the network's perception of domain disparities by explicitly modelling the differences between classes and thus extracting style information across domains. Further, to optimize the fine-grained texture of the generated results and the shape consistency with freehand sketches, we propose a local texture refinement discriminator and a Shape Constraint Module, respectively. In addition, to address the imbalance of data classes in the QMUL-Sketch dataset, we add 6K images by drawing manually and obtain QMUL-Sketch+ dataset. Extensive experiments on SketchyCOCO Object dataset, QMUL-Sketch+ dataset and Pseudosketches dataset demonstrate the effectiveness as well as the superiority of our proposed method.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141948496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a cross field, possibly with singular points of valence 3 or 5, in which all streamlines are finite, and either end on the boundary or form cycles. We show that we can always assign lengths to the two cross field directions to produce an anisotropic orthogonal frame field. There is a one-dimensional family of such length functions, and we optimize within this family so that the two lengths are everywhere as similar as possible. This gives a numerical bound on the minimal anisotropy of any quad mesh exactly following the input cross field. We also show how to remove some limit cycles.
{"title":"Anisotropy and Cross Fields","authors":"L. Simons, N. Amenta","doi":"10.1111/cgf.15132","DOIUrl":"10.1111/cgf.15132","url":null,"abstract":"<p>We consider a cross field, possibly with singular points of valence 3 or 5, in which all streamlines are finite, and either end on the boundary or form cycles. We show that we can always assign lengths to the two cross field directions to produce an anisotropic orthogonal frame field. There is a one-dimensional family of such length functions, and we optimize within this family so that the two lengths are everywhere as similar as possible. This gives a numerical bound on the minimal anisotropy of any quad mesh exactly following the input cross field. We also show how to remove some limit cycles.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141948649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}