Urban structure can be better understood by analyzing its cores. Geospatial big data facilitate the identification of urban centers with high accuracy and accessibility. However, previous studies have seldom leveraged multi-source geospatial big data to identify urban centers from a topological perspective. This study identifies urban centers through the spatial integration of multi-source geospatial big data, including nighttime light imagery (NTL), building footprints (BFP), and street nodes from OpenStreetMap (OSM). We use a novel topological approach to construct complex networks from intra-urban hotspots based on Christopher Alexander's theory of centers. We compute the degree of wholeness of each hotspot as its centric index, and the overlapping hotspots with the highest centric indices are regarded as urban centers. The identified urban centers in New York, Los Angeles, and Houston are consistent with their downtown areas, with an overall accuracy of 90.23%. In Chicago, a new urban center is identified when a larger spatial extent is considered. The proposed approach effectively and objectively excludes hotspots that have high intensity values but few neighbors from the result. This study contributes a topological approach to urban center identification and a bottom-up perspective for sustainable urban design.
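For illustration, the topological ranking step could be sketched as follows. This is a simplified proxy and not the paper's exact degree-of-wholeness computation: it assumes the hotspots are available as polygons in a GeoDataFrame, and PageRank stands in for the recursive center-of-centers measure.

```python
# Hypothetical sketch: build a hotspot network from topological adjacency and
# rank hotspots by a centrality-based "centric index". PageRank is used here
# only as a proxy for the degree-of-wholeness measure described in the paper.
import geopandas as gpd
import networkx as nx

def rank_hotspots(hotspots: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """hotspots: polygons extracted from NTL, BFP, or OSM street-node data."""
    G = nx.Graph()
    G.add_nodes_from(hotspots.index)
    sindex = hotspots.sindex
    # Connect hotspots whose geometries touch or overlap (topological adjacency).
    for i, geom in hotspots.geometry.items():
        for pos in sindex.query(geom, predicate="intersects"):
            j = hotspots.index[pos]
            if i != j:
                G.add_edge(i, j)
    # Recursive importance as a stand-in for the degree of wholeness.
    hotspots["centric_index"] = hotspots.index.map(nx.pagerank(G))
    return hotspots.sort_values("centric_index", ascending=False)
```

Because the score depends on a hotspot's position in the network rather than its raw intensity, isolated high-intensity hotspots with few neighbors naturally rank low, which mirrors the exclusion behavior described above.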
Street-level imagery has emerged as a valuable tool for observing large-scale urban spaces in unprecedented detail. However, previous studies have been limited to analyzing individual street-level images, an approach that falls short of representing the characteristics of a spatial unit, such as a street or grid cell, which may contain anywhere from several to hundreds of street-level images. A more comprehensive and representative approach is therefore required to capture the complexity and diversity of urban environments at different spatial scales. To address this issue, this study proposes a deep learning-based module called Vision-LSTM, which can effectively obtain a vector representation from a varying number of street-level images within a spatial unit. The effectiveness of the module is validated through experiments on recognizing urban villages, achieving reliable results (overall accuracy: 91.6%) through multimodal learning that combines street-level imagery with remote sensing imagery and social sensing data. Compared to existing image fusion methods, Vision-LSTM is significantly more effective at capturing associations between street-level images. The proposed module provides a more comprehensive understanding of urban spaces, enhancing the research value of street-level imagery and facilitating multimodal learning-based urban research. Our models are available at https://github.com/yingjinghuang/Vision-LSTM.
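As a rough sketch of the aggregation idea, the hypothetical PyTorch module below encodes each street-level image with a CNN backbone and feeds the resulting sequence into an LSTM, returning one vector per spatial unit. The released implementation at the repository above may differ in architecture and details; the backbone choice and dimensions here are assumptions.

```python
# Minimal sketch of an LSTM-based aggregator for a variable number of
# street-level images per spatial unit (not the released Vision-LSTM code;
# see https://github.com/yingjinghuang/Vision-LSTM for the authors' version).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VisionLSTMSketch(nn.Module):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d feature per image
        self.encoder = backbone
        self.lstm = nn.LSTM(512, hidden_dim, batch_first=True)

    def forward(self, images: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
        """images: (B, T, 3, H, W), zero-padded; lengths: true image count per unit."""
        b, t = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1)).view(b, t, -1)
        packed = nn.utils.rnn.pack_padded_sequence(
            feats, lengths.cpu(), batch_first=True, enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)
        return h_n[-1]                       # one vector per spatial unit

# Example: two units with 5 and 3 images respectively (padded to 5).
# unit_vec = VisionLSTMSketch()(torch.randn(2, 5, 3, 224, 224), torch.tensor([5, 3]))
```

The resulting unit-level vector can then be concatenated with remote sensing and social sensing features for multimodal classification, as in the urban-village experiment.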
360-degree video is an immersive technology used in research across academic disciplines. This paper provides the first comprehensive review of the use of 360-degree video for virtual place-based research, highlighting its use in experimental, experiential, and environmental observation studies. Five key research domains for 360-degree video are described: tourism and cultural heritage; built environment and land use; natural environment; health and wellbeing; and transportation and safety. 360-degree video offers considerable advantages over unidirectional video, computer-generated virtual reality, and map-based geographic representation. Benefits include ease of use, low cost, interactivity, a sense of immersive realism, remote accessibility, and the ability to capture and analyze places in a fully panoramic field of view. Limitations include the additional costs associated with virtual reality viewing technologies, simulation sickness and discomfort, and viewer distraction due to the technology's novelty and immersive affordances. This paper also outlines a future research agenda, including the possibility of moving beyond the ‘testing and trialling’ of 360-degree video, since it provides novel research opportunities distinct from either ‘real’ experience or conventional forms of visual and spatial representation. Overall, this paper provides detailed evidence for researchers interested in using 360-degree video for virtual research on built, social, and natural environments and human-environment interactions.
Urban forests are increasingly important for human well-being, as they provide ecosystem services that contribute to the well-being of city dwellers and to addressing climate change. However, despite their importance, there is an information gap for most of the world's urban forests due to the high cost and complexity of conducting standard forest inventories in urban environments. New technologies based on artificial intelligence can represent a smart and efficient alternative to costly traditional inventories. In this paper, we present an approach based on deep learning algorithms for the detection, counting, and geopositioning of trees using a combination of ground-level and aerial/satellite imagery. We tested several convolutional networks, exploring different combinations of hyperparameters and adjusting the query distance between ground-level images, the detection radius, and the resolution of satellite and aerial images. Our methodology detects and accurately locates 79% of urban street trees with a positional accuracy of 60 cm to the center of the canopy. Additionally, the approach determines the availability of photographs of urban trees, indicating from which Google Street View image each tree is visible. Our research provides a scalable and replicable solution to the worldwide scarcity of urban tree data, demonstrating the potential of artificial intelligence to revolutionize the way we inventory and monitor urban forests.
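One component of such a pipeline, merging candidate tree positions projected from multiple street-level detections into unique tree locations and counts, could look like the illustrative sketch below. The clustering threshold and coordinate handling are assumptions for illustration, not the paper's exact parameters.

```python
# Illustrative sketch (not the paper's exact pipeline): fuse candidate tree
# positions derived from several street-level detections into unique trees by
# clustering nearby candidates; eps is set near the reported ~60 cm positional
# accuracy. Coordinates are assumed to be in a metric (projected) CRS.
import numpy as np
from sklearn.cluster import DBSCAN

def merge_tree_candidates(points_xy: np.ndarray, eps_m: float = 0.6):
    """points_xy: (N, 2) candidate tree positions in meters, pooled over views."""
    labels = DBSCAN(eps=eps_m, min_samples=1).fit_predict(points_xy)
    trees = np.stack([points_xy[labels == k].mean(axis=0)
                      for k in np.unique(labels)])
    return trees, len(trees)   # estimated tree locations and tree count
```

Keeping track of which Street View panorama contributed each candidate also yields, as a by-product, the list of images from which every merged tree is visible.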
Rapid urbanization drives dynamic changes in the external urban landscape and forms different urban growth patterns (UGPs), which in turn affect the development level of internal urban functions. However, few studies have quantitatively examined the spatial stratified heterogeneity (SSH) and driving mechanisms of the urban development level (UDL) under different UGPs. Based on multi-source geographic data for 368 Chinese cities, this study identified UGPs at the patch scale from 2010 to 2020 and quantified the UDL of newly added construction land. To reveal the SSH pattern, motivating factors, and interaction mechanisms of the UDL under different UGPs, this study applied the optimal parameter-based geographic detector (OPGD) model, which accounts for the modifiable areal unit problem (MAUP). The results indicate that: 1) there are significant spatial differences in the UDL among different UGPs: the infilling pattern exhibits the highest UDL, followed by the edge pattern, while the outlying pattern has the lowest UDL; 2) the SSH of the UDL is shaped by the interaction of multiple factors, and different UGPs share some motivating factors while differing in others, which affects the spatial distribution of the UDL. GDP density and road network density are the two strongest driving factors for all UGPs. Specifically, the UDL of infilling-expansion areas is more sensitive to industrial structure and infrastructure conditions, factors such as residential density and socio-economic activity matter more to the UDL of edge-expansion areas, while population, topography, and location factors have a stronger influence on the UDL of outlying-expansion areas; 3) changing the spatial scale alters the influence of the motivating factors within each UGP. Overall, the systematic comparison of the SSH and driving mechanisms of the UDL under different UGPs helps to explore high-quality and sustainable urbanization paths, and provides a theoretical basis for urban planners and managers to rationally regulate external urban forms and optimize internal structural layouts.
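The core of the geographical detector underlying the OPGD model is the factor-detector q-statistic; a minimal sketch is given below, with the optimization of discretization parameters (the "optimal parameter" step) and the interaction detector omitted.

```python
# Minimal sketch of the geographical detector's factor-detector q-statistic:
#   q = 1 - sum_h(N_h * var_h) / (N * var)
# where strata h are the discretized classes of a candidate driving factor.
# The full OPGD workflow also searches over discretization methods and class
# numbers, which is not shown here.
import numpy as np

def q_statistic(udl: np.ndarray, strata: np.ndarray) -> float:
    """udl: urban development level per spatial unit; strata: factor class labels."""
    n, var = len(udl), udl.var()
    ssw = sum((strata == h).sum() * udl[strata == h].var()
              for h in np.unique(strata))
    return 1.0 - ssw / (n * var)

# A q value close to 1 indicates that the factor's stratification explains most
# of the spatial variance of the UDL; close to 0, almost none.
```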
To stay within 1.5 °C of global warming, reducing energy-related emissions in the building sector is essential. Rather than generic climate recommendations, this requires tailored, low-carbon urban planning solutions and spatially explicit methods that can inform policy measures at the urban, street, and building scale. Here, we propose a scalable method that predicts building age in different European countries using only open urban morphology data. We find that spatially cross-validated regression models are sufficiently robust to generalize and predict building age in unseen cities, with a mean absolute error (MAE) between 15.3 years (Netherlands) and 19.9 years (Spain). Our experiments show that large-scale models improve generalization when predicting across cities, but are not needed to infer missing data within known cities; filling data gaps within known cities is possible with an MAE between 9.6 years (Netherlands) and 16.7 years (Spain). Overall, our results demonstrate the feasibility of generating missing age data in different contexts across Europe and informing climate mitigation policies such as large-scale energy retrofits. For the French residential building stock, we find that using age predictions to target retrofit efforts can increase energy savings by more than 50% compared to proceeding without age data. Finally, we highlight challenges posed by data inconsistencies and differences in urban form between countries that need to be addressed before such methods can be rolled out in practice.
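A minimal sketch of city-held-out (spatial) cross-validation for such an age-regression task is shown below; the regressor, feature matrix, and split settings are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of spatial cross-validation for building-age regression from
# urban morphology features: whole cities are held out per fold so the score
# reflects generalization to unseen cities, reported as MAE in years.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

def cross_city_mae(X: np.ndarray, age: np.ndarray, city_ids: np.ndarray) -> float:
    """X: morphology features per building; age: construction year; city_ids: group labels."""
    cv = GroupKFold(n_splits=5)
    scores = cross_val_score(
        RandomForestRegressor(n_estimators=300, random_state=0),
        X, age, groups=city_ids, cv=cv,
        scoring="neg_mean_absolute_error")
    return -scores.mean()   # mean absolute error in years across held-out cities
```

Grouping the folds by city is what distinguishes this spatial setup from random cross-validation, which would leak neighboring buildings of the same city into both training and test sets and overstate accuracy.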