{"title":"基于轨迹时空图像的混合城市功能自监督检测方法","authors":"Zhixing Chen , Luliang Tang , Xiaogang Guo , Guizhou Zheng","doi":"10.1016/j.compenvurbsys.2024.102113","DOIUrl":null,"url":null,"abstract":"<div><p>Urban function detection plays a significant role in urban complex system recognition and smart city construction. The location big data obtained from human activities, which is cohesive with urban functions, provides valuable insights into human mobility patterns. However, as urban functions become highly mixed, existing feature representation structures struggle to explicitly depict the latent human activity features, limiting their applicability for detecting mixed urban functions in a supervised manner. To close the gap, this study analogizes the latent human activity features to the shape, texture, and color semantics of images, with a contrastive learning framework being introduced to extract image-based crowd mobility features for detecting mixed urban functions. Firstly, by translating human activity features into image semantics, a novel feature representation structure termed the Trajectory Temporal Image (TTI) is proposed to explicitly represent human activity features. Secondly, the Vision Transformer (ViT) model is employed to extract image-based semantics in a self-supervised manner. Lastly, based on urban dynamics, a mathematical model is developed to represent mixed urban functions, and the decomposition of mixed urban functions is achieved using the theory of fuzzy sets. A case study is conducted using taxi trajectory data in three cities in China. Experimental results indicate the high discriminability of our proposed method, especially in areas with weak activity intensity, and reveal the relationship between the mixture index and the trip distance. The proposed method is promising to establish a solid scientific foundation for comprehending the urban complex system.</p></div>","PeriodicalId":48241,"journal":{"name":"Computers Environment and Urban Systems","volume":null,"pages":null},"PeriodicalIF":7.1000,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A self-supervised detection method for mixed urban functions based on trajectory temporal image\",\"authors\":\"Zhixing Chen , Luliang Tang , Xiaogang Guo , Guizhou Zheng\",\"doi\":\"10.1016/j.compenvurbsys.2024.102113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Urban function detection plays a significant role in urban complex system recognition and smart city construction. The location big data obtained from human activities, which is cohesive with urban functions, provides valuable insights into human mobility patterns. However, as urban functions become highly mixed, existing feature representation structures struggle to explicitly depict the latent human activity features, limiting their applicability for detecting mixed urban functions in a supervised manner. To close the gap, this study analogizes the latent human activity features to the shape, texture, and color semantics of images, with a contrastive learning framework being introduced to extract image-based crowd mobility features for detecting mixed urban functions. Firstly, by translating human activity features into image semantics, a novel feature representation structure termed the Trajectory Temporal Image (TTI) is proposed to explicitly represent human activity features. 
Secondly, the Vision Transformer (ViT) model is employed to extract image-based semantics in a self-supervised manner. Lastly, based on urban dynamics, a mathematical model is developed to represent mixed urban functions, and the decomposition of mixed urban functions is achieved using the theory of fuzzy sets. A case study is conducted using taxi trajectory data in three cities in China. Experimental results indicate the high discriminability of our proposed method, especially in areas with weak activity intensity, and reveal the relationship between the mixture index and the trip distance. The proposed method is promising to establish a solid scientific foundation for comprehending the urban complex system.</p></div>\",\"PeriodicalId\":48241,\"journal\":{\"name\":\"Computers Environment and Urban Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.1000,\"publicationDate\":\"2024-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers Environment and Urban Systems\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0198971524000425\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENVIRONMENTAL STUDIES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers Environment and Urban Systems","FirstCategoryId":"89","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0198971524000425","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENVIRONMENTAL STUDIES","Score":null,"Total":0}
A self-supervised detection method for mixed urban functions based on trajectory temporal image
Urban function detection plays a significant role in recognizing the urban complex system and building smart cities. Location big data generated by human activities is closely coupled with urban functions and provides valuable insight into human mobility patterns. However, as urban functions become highly mixed, existing feature representation structures struggle to depict latent human activity features explicitly, which limits their applicability for detecting mixed urban functions in a supervised manner. To close this gap, this study analogizes latent human activity features to the shape, texture, and color semantics of images and introduces a contrastive learning framework to extract image-based crowd mobility features for detecting mixed urban functions. First, by translating human activity features into image semantics, a novel feature representation structure termed the Trajectory Temporal Image (TTI) is proposed to represent human activity features explicitly. Second, a Vision Transformer (ViT) model is employed to extract image-based semantics in a self-supervised manner. Finally, a mathematical model grounded in urban dynamics is developed to represent mixed urban functions, and their decomposition is achieved using fuzzy set theory. A case study is conducted on taxi trajectory data from three Chinese cities. Experimental results show that the proposed method is highly discriminative, especially in areas with weak activity intensity, and reveal a relationship between the mixture index and trip distance. The proposed method promises to provide a solid scientific foundation for understanding the urban complex system.
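The abstract does not spell out how a Trajectory Temporal Image is constructed, so the following is only a minimal sketch of the general idea: rasterizing a zone's taxi activity over an hour-of-day by day-of-week grid, with event types stacked as channels the way an image stacks color channels. The function name `build_tti`, the 24 x 7 grid, and the pickup/dropoff channels are illustrative assumptions, not the authors' actual design.

```python
import numpy as np
import pandas as pd

def build_tti(trips: pd.DataFrame) -> np.ndarray:
    """Build a hypothetical Trajectory Temporal Image for one analysis zone.

    `trips` is assumed to hold taxi records for a single zone with columns:
      - 'timestamp' : event time (datetime64)
      - 'event'     : one of {'pickup', 'dropoff'}
    The output is a (2, 24, 7) array: two activity channels (pickups,
    dropoffs) rasterized over hour-of-day x day-of-week, loosely mirroring
    how an image stacks color channels over a spatial grid.
    """
    channels = {"pickup": 0, "dropoff": 1}
    tti = np.zeros((len(channels), 24, 7), dtype=np.float32)

    hours = trips["timestamp"].dt.hour.to_numpy()
    weekdays = trips["timestamp"].dt.weekday.to_numpy()
    for event, ch in channels.items():
        mask = (trips["event"] == event).to_numpy()
        # Accumulate event counts into the hour x weekday grid of this channel.
        np.add.at(tti[ch], (hours[mask], weekdays[mask]), 1.0)

    # Per-channel max normalization, so intensity plays the role of "color".
    peak = tti.max(axis=(1, 2), keepdims=True)
    return tti / np.where(peak > 0, peak, 1.0)

# Toy usage: 500 synthetic events for one zone.
times = pd.date_range("2024-01-01", periods=500, freq="17min")
toy = pd.DataFrame({
    "timestamp": times,
    "event": np.random.default_rng(0).choice(["pickup", "dropoff"], size=500),
})
image = build_tti(toy)  # shape (2, 24, 7)
```

Such an image-like tensor is what a self-supervised vision backbone (the ViT in the paper) could consume; the exact channel set and grid resolution used by the authors are not given in the abstract.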
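The fuzzy-set decomposition of mixed functions is likewise described only at a high level. The sketch below shows one standard way such a soft decomposition could look: fuzzy c-means style membership degrees of a zone embedding against several function prototypes, plus a normalized-entropy "mixture index". Both the prototype-based formulation and the entropy-based index are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def fuzzy_memberships(z: np.ndarray, prototypes: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Soft membership of one zone embedding `z` to each function prototype.

    Uses the standard fuzzy c-means membership formula; the prototypes would
    come from clustering the self-supervised embeddings (an assumption here).
    """
    d = np.linalg.norm(prototypes - z, axis=1)      # distance to each prototype
    d = np.maximum(d, 1e-12)                        # guard against division by zero
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)                  # memberships sum to 1

def mixture_index(u: np.ndarray) -> float:
    """Normalized entropy of the membership vector as a toy 'mixture index'.

    0 means one dominant function, 1 means fully mixed; the paper's actual
    index may be defined differently.
    """
    u = np.clip(u, 1e-12, 1.0)
    return float(-(u * np.log(u)).sum() / np.log(len(u)))

# Example: a 16-d zone embedding scored against 4 hypothetical function prototypes.
rng = np.random.default_rng(0)
zone = rng.normal(size=16)
protos = rng.normal(size=(4, 16))
u = fuzzy_memberships(zone, protos)
print(u, mixture_index(u))
```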
About the journal:
Computers, Environment and Urban Systems is an interdisciplinary journal publishing cutting-edge and innovative computer-based research on environmental and urban systems that privileges the geospatial perspective. The journal welcomes original, high-quality scholarship of a theoretical, applied, or technological nature, and provides a stimulating presentation of perspectives, research developments, overviews of important new technologies, and uses of major computational, information-based, and visualization innovations. Applied and theoretical contributions demonstrate the scope of computer-based analysis in fostering a better understanding of environmental and urban systems, their spatial scope, and their dynamics.