Multi-level urban street representation with street-view imagery and hybrid semantic graph

ISPRS Journal of Photogrammetry and Remote Sensing · IF 10.6 · CAS Tier 1 (Earth Science) · JCR Q1 (Geography, Physical) · Volume 218, Pages 19-32 · Published: 2024-10-18 · DOI: 10.1016/j.isprsjprs.2024.09.032
Yan Zhang , Yong Li , Fan Zhang
Citation count: 0

Abstract

Street-view imagery now densely covers cities, offering a close-up perspective on the urban physical environment and enabling comprehensive perception and understanding of cities. Considerable effort has gone into representing the urban physical environment from street-view imagery, and such representations have been used to study the relationships among the physical environment, human dynamics, and socioeconomic conditions. However, two key challenges remain in representing the urban physical environment of streets from street-view images for downstream tasks. First, current research focuses mainly on the proportions of visual elements within a scene, neglecting the spatial adjacency between them. Second, the spatial dependency and spatial interaction between streets have not been adequately accounted for. These limitations hinder effective representation and understanding of urban streets. To address these challenges, we propose a dynamic graph representation framework based on dual spatial semantics. At the intra-street level, we consider the spatial adjacency relationships of visual elements: our method dynamically parses visual elements within the scene, achieving context-specific representations. At the inter-street level, we construct two spatial weight matrices that integrate spatial dependency and spatial interaction relationships. This accounts comprehensively for the hybrid spatial relationships between streets, enhancing the model's ability to represent human dynamics and socioeconomic status. Beyond these two modules, we also provide a spatial interpretability analysis tool for downstream tasks. A case study of our research framework shows that our method improves vehicle speed and flow estimation by 2.4% and 6.4%, respectively.
This not only indicates that street-view imagery provides rich information about urban transportation but also offers a more accurate and reliable data-driven framework for urban studies. The code is available at: https://github.com/yemanzhongting/HybridGraph.
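To make the two levels in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation (see their repository for that): at the intra-street level it counts spatial adjacencies between semantic classes in a toy segmentation map, and at the inter-street level it mixes an inverse-distance "spatial dependency" matrix with a symmetrized origin-destination "spatial interaction" matrix into one hybrid weight matrix. All function names, parameters, and the toy data are hypothetical.

```python
import numpy as np

def element_adjacency(label_map, n_classes):
    """Count 4-neighbour adjacencies between semantic classes in a label map."""
    lm = np.asarray(label_map)
    adj = np.zeros((n_classes, n_classes), dtype=int)
    # horizontal neighbours
    for a, b in zip(lm[:, :-1].ravel(), lm[:, 1:].ravel()):
        if a != b:
            adj[a, b] += 1
            adj[b, a] += 1
    # vertical neighbours
    for a, b in zip(lm[:-1, :].ravel(), lm[1:, :].ravel()):
        if a != b:
            adj[a, b] += 1
            adj[b, a] += 1
    return adj

def hybrid_weights(coords, flows, alpha=0.5):
    """Row-normalised mix of inverse-distance dependency and OD-flow interaction."""
    coords = np.asarray(coords, float)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    w_dep = np.exp(-dist)                 # distance-decay dependency weights
    np.fill_diagonal(w_dep, 0.0)
    f = np.asarray(flows, float)
    w_int = (f + f.T) / 2.0               # symmetrised interaction weights
    w_dep /= max(w_dep.sum(), 1e-12)      # put both matrices on a common scale
    w_int /= max(w_int.sum(), 1e-12)
    w = alpha * w_dep + (1 - alpha) * w_int
    row_sums = w.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return w / row_sums

# toy 3-class street scene: 0 = sky, 1 = building, 2 = road
scene = [[0, 0, 1],
         [1, 1, 1],
         [2, 2, 2]]
A = element_adjacency(scene, 3)  # building touches both sky and road; sky never touches road

# three streets: centroid coordinates plus a toy origin-destination flow matrix
W = hybrid_weights(coords=[(0, 0), (1, 0), (0, 2)],
                   flows=[[0, 10, 2], [8, 0, 1], [3, 0, 0]])
```

The resulting row-normalized `W` can serve directly as the weight matrix of a spatial lag term or as the (dense) adjacency of a graph neural network layer; `alpha` trades off geographic proximity against observed interaction strength.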
Source journal: ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 21.00 · Self-citation rate: 6.30% · Articles per year: 273 · Review time: 40 days
Journal introduction: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.
Latest articles from this journal:
Optimizing hybrid models for canopy nitrogen mapping from Sentinel-2 in Google Earth Engine
A unique dielectric constant estimation for lunar surface through PolSAR model-based decomposition
Unwrap-Net: A deep neural network-based InSAR phase unwrapping method assisted by airborne LiDAR data
METNet: A mesh exploring approach for segmenting 3D textured urban scenes
On-orbit geometric calibration of MERSI whiskbroom scanner