{"title":"利用混沌动力学生成循环图像","authors":"Takaya Tanaka, Yutaka Yamaguti","doi":"arxiv-2405.20717","DOIUrl":null,"url":null,"abstract":"Successive image generation using cyclic transformations is demonstrated by\nextending the CycleGAN model to transform images among three different\ncategories. Repeated application of the trained generators produces sequences\nof images that transition among the different categories. The generated image\nsequences occupy a more limited region of the image space compared with the\noriginal training dataset. Quantitative evaluation using precision and recall\nmetrics indicates that the generated images have high quality but reduced\ndiversity relative to the training dataset. Such successive generation\nprocesses are characterized as chaotic dynamics in terms of dynamical system\ntheory. Positive Lyapunov exponents estimated from the generated trajectories\nconfirm the presence of chaotic dynamics, with the Lyapunov dimension of the\nattractor found to be comparable to the intrinsic dimension of the training\ndata manifold. The results suggest that chaotic dynamics in the image space\ndefined by the deep generative model contribute to the diversity of the\ngenerated images, constituting a novel approach for multi-class image\ngeneration. This model can be interpreted as an extension of classical\nassociative memory to perform hetero-association among image categories.","PeriodicalId":501167,"journal":{"name":"arXiv - PHYS - Chaotic Dynamics","volume":"66 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cyclic image generation using chaotic dynamics\",\"authors\":\"Takaya Tanaka, Yutaka Yamaguti\",\"doi\":\"arxiv-2405.20717\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Successive image generation using cyclic transformations is demonstrated by\\nextending the CycleGAN model to transform images among three different\\ncategories. Repeated application of the trained generators produces sequences\\nof images that transition among the different categories. The generated image\\nsequences occupy a more limited region of the image space compared with the\\noriginal training dataset. Quantitative evaluation using precision and recall\\nmetrics indicates that the generated images have high quality but reduced\\ndiversity relative to the training dataset. Such successive generation\\nprocesses are characterized as chaotic dynamics in terms of dynamical system\\ntheory. Positive Lyapunov exponents estimated from the generated trajectories\\nconfirm the presence of chaotic dynamics, with the Lyapunov dimension of the\\nattractor found to be comparable to the intrinsic dimension of the training\\ndata manifold. The results suggest that chaotic dynamics in the image space\\ndefined by the deep generative model contribute to the diversity of the\\ngenerated images, constituting a novel approach for multi-class image\\ngeneration. 
This model can be interpreted as an extension of classical\\nassociative memory to perform hetero-association among image categories.\",\"PeriodicalId\":501167,\"journal\":{\"name\":\"arXiv - PHYS - Chaotic Dynamics\",\"volume\":\"66 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Chaotic Dynamics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2405.20717\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Chaotic Dynamics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.20717","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Successive image generation using cyclic transformations is demonstrated by extending the CycleGAN model to transform images among three different categories. Repeated application of the trained generators produces sequences of images that transition among the different categories.
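As a rough illustration of the generation process described above, the following minimal PyTorch sketch iterates three trained CycleGAN-style generators in a fixed cycle. The generator names (G_AB, G_BC, G_CA) and the use of PyTorch are assumptions made for illustration, not details taken from the paper.

```python
import torch

@torch.no_grad()
def generate_cycle(x0, generators, n_steps):
    """Repeatedly apply a cyclic list of trained generators, starting from image x0.

    generators: e.g. [G_AB, G_BC, G_CA], mapping category A -> B -> C -> A.
    Returns the full sequence of generated images.
    """
    images = [x0]
    x = x0
    for step in range(n_steps):
        g = generators[step % len(generators)]  # next transformation in the cycle
        x = g(x)                                # map the image into the next category
        images.append(x)
    return images

# Hypothetical usage: G_AB, G_BC, G_CA are pretrained torch.nn.Module generators
# trajectory = generate_cycle(x0, [G_AB, G_BC, G_CA], n_steps=300)
```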
The generated image sequences occupy a more limited region of the image space than the original training dataset does. Quantitative evaluation using precision and recall metrics indicates that the generated images are of high quality but show reduced diversity relative to the training dataset.
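The abstract does not specify how precision and recall are computed; a common choice for generative models is the k-nearest-neighbour manifold estimate of Kynkäänniemi et al. (2019). The NumPy sketch below illustrates that style of metric, assuming real and generated images have already been mapped to feature vectors; the function names and the value of k are illustrative, not taken from the paper.

```python
import numpy as np

def knn_radii(feats, k=3):
    """Distance from each feature vector to its k-th nearest neighbour (excluding itself)."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]  # column 0 is the zero self-distance

def manifold_coverage(reference, query, k=3):
    """Fraction of query points that fall inside the k-NN balls of the reference set."""
    radii = knn_radii(reference, k)
    d = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    return float(np.mean(np.any(d <= radii[None, :], axis=1)))

# precision: generated samples covered by the real-data manifold
# recall:    real samples covered by the generated-data manifold
# precision = manifold_coverage(real_feats, gen_feats, k=3)
# recall    = manifold_coverage(gen_feats, real_feats, k=3)
```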
Such successive generation processes are characterized as chaotic dynamics in terms of dynamical systems theory. Positive Lyapunov exponents estimated from the generated trajectories confirm the presence of chaotic dynamics, and the Lyapunov dimension of the attractor is comparable to the intrinsic dimension of the training-data manifold.
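The abstract does not detail the estimation procedure. As a sketch only, the code below shows a generic Benettin-style estimate of the largest Lyapunov exponent for a discrete map, together with the standard Kaplan-Yorke (Lyapunov) dimension formula D = k + (lambda_1 + ... + lambda_k) / |lambda_{k+1}|. Here step_fn is a hypothetical stand-in for one application of a generator in the cycle; none of these names come from the paper.

```python
import numpy as np

def largest_lyapunov_exponent(step_fn, x0, n_steps, eps=1e-6):
    """Estimate the largest Lyapunov exponent of the map step_fn by tracking
    the growth of a small perturbation and renormalising it after every step."""
    x = np.asarray(x0, dtype=float)
    y = x + eps * np.random.randn(*x.shape)
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = step_fn(x), step_fn(y)
        dist = np.linalg.norm(y - x)
        log_growth += np.log(dist / eps)
        y = x + (y - x) * (eps / dist)   # renormalise the separation back to eps
    return log_growth / n_steps

def kaplan_yorke_dimension(spectrum):
    """Lyapunov (Kaplan-Yorke) dimension from a Lyapunov spectrum:
    D = k + (lambda_1 + ... + lambda_k) / |lambda_{k+1}|, where k is the
    largest index whose partial sum of exponents is non-negative."""
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]  # descending order
    csum = np.cumsum(lam)
    k = int(np.max(np.where(csum >= 0)[0])) + 1 if csum[0] >= 0 else 0
    if k == 0:
        return 0.0
    if k == len(lam):
        return float(len(lam))
    return k + csum[k - 1] / abs(lam[k])
```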
The results suggest that chaotic dynamics in the image space defined by the deep generative model contribute to the diversity of the generated images, constituting a novel approach to multi-class image generation. The model can be interpreted as an extension of classical associative memory that performs hetero-association among image categories.