Eduardo R. Corral-Soto, R. Tal, Langyue Wang, R. Persad, Luo Chao, C. Solomon, Bob Hou, G. Sohn, J. Elder
{"title":"3D城镇:自动城市意识项目","authors":"Eduardo R. Corral-Soto, R. Tal, Langyue Wang, R. Persad, Luo Chao, C. Solomon, Bob Hou, G. Sohn, J. Elder","doi":"10.1109/CRV.2012.64","DOIUrl":null,"url":null,"abstract":"The 3DTown project is focused on the development of a distributed system for sensing, interpreting and visualizing the real-time dynamics of urban life within the 3D context of a city. At the heart of this technology lies a core of algorithms that automatically integrate 3D urban models with data from pan/tilt video cameras, environmental sensors and other real-time information sources. A key challenge is the \"three-dimensionalization\" of pedestrians and vehicles tracked in 2D camera video, which requires automatic real-time computation of camera pose relative to the 3D urban environment. In this paper we report preliminary results from a prototype system we call 3DTown, which is composed of discrete modules connected through pre-determined communication protocols. Currently, these modules consist of: 1) A 3D modeling module that allows for the efficient reconstruction of building models and integration with indoor architectural plans, 2) A GeoWeb server that indexes a 3D urban database to render perspective views of both outdoor and indoor environments from any requested vantage, 3) Sensor modules that receive and distribute real-time data, 4) Tracking modules that detect and track pedestrians and vehicles in urban spaces and access highways, 5) Camera pose modules that automatically estimate camera pose relative to the urban environment, 6) Three-dimensionalization modules that receive information from the GeoWeb server, tracking and camera pose modules in order to back-project image tracks to geolocate pedestrians and vehicles within the 3D model, 7) An animation module that represents geo-located dynamic agents as sprites, and 8) A web-based visualization module that allows a user to explore the resulting dynamic 3D visualization in a number of 
interesting ways. To demonstrate our system we have used a blend of automatic and semi-automatic methods to construct a rich and accurate 3D model of a university campus, including both outdoor and indoor detail. The demonstration allows web-based 3D visualization of recorded patterns of pedestrian and vehicle traffic on streets and highways, estimations of vehicle speed, and real-time (live) visualization of pedestrian traffic and temperature data at a particular test site. Having demonstrated the system for hundreds of people, we report our informal observations on the user reaction, potential application areas and on the main challenges that must be addressed to bring the system closer to deployment.","PeriodicalId":372951,"journal":{"name":"2012 Ninth Conference on Computer and Robot Vision","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":"{\"title\":\"3D Town: The Automatic Urban Awareness Project\",\"authors\":\"Eduardo R. Corral-Soto, R. Tal, Langyue Wang, R. Persad, Luo Chao, C. Solomon, Bob Hou, G. Sohn, J. Elder\",\"doi\":\"10.1109/CRV.2012.64\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The 3DTown project is focused on the development of a distributed system for sensing, interpreting and visualizing the real-time dynamics of urban life within the 3D context of a city. At the heart of this technology lies a core of algorithms that automatically integrate 3D urban models with data from pan/tilt video cameras, environmental sensors and other real-time information sources. A key challenge is the \\\"three-dimensionalization\\\" of pedestrians and vehicles tracked in 2D camera video, which requires automatic real-time computation of camera pose relative to the 3D urban environment. 
In this paper we report preliminary results from a prototype system we call 3DTown, which is composed of discrete modules connected through pre-determined communication protocols. Currently, these modules consist of: 1) A 3D modeling module that allows for the efficient reconstruction of building models and integration with indoor architectural plans, 2) A GeoWeb server that indexes a 3D urban database to render perspective views of both outdoor and indoor environments from any requested vantage, 3) Sensor modules that receive and distribute real-time data, 4) Tracking modules that detect and track pedestrians and vehicles in urban spaces and access highways, 5) Camera pose modules that automatically estimate camera pose relative to the urban environment, 6) Three-dimensionalization modules that receive information from the GeoWeb server, tracking and camera pose modules in order to back-project image tracks to geolocate pedestrians and vehicles within the 3D model, 7) An animation module that represents geo-located dynamic agents as sprites, and 8) A web-based visualization module that allows a user to explore the resulting dynamic 3D visualization in a number of interesting ways. To demonstrate our system we have used a blend of automatic and semi-automatic methods to construct a rich and accurate 3D model of a university campus, including both outdoor and indoor detail. The demonstration allows web-based 3D visualization of recorded patterns of pedestrian and vehicle traffic on streets and highways, estimations of vehicle speed, and real-time (live) visualization of pedestrian traffic and temperature data at a particular test site. 
Having demonstrated the system for hundreds of people, we report our informal observations on the user reaction, potential application areas and on the main challenges that must be addressed to bring the system closer to deployment.\",\"PeriodicalId\":372951,\"journal\":{\"name\":\"2012 Ninth Conference on Computer and Robot Vision\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"17\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 Ninth Conference on Computer and Robot Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CRV.2012.64\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Ninth Conference on Computer and Robot Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV.2012.64","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The 3DTown project is focused on the development of a distributed system for sensing, interpreting and visualizing the real-time dynamics of urban life within the 3D context of a city. At the heart of this technology lies a core of algorithms that automatically integrate 3D urban models with data from pan/tilt video cameras, environmental sensors and other real-time information sources. A key challenge is the "three-dimensionalization" of pedestrians and vehicles tracked in 2D camera video, which requires automatic real-time computation of camera pose relative to the 3D urban environment.

In this paper we report preliminary results from a prototype system we call 3DTown, which is composed of discrete modules connected through pre-determined communication protocols. Currently, these modules consist of:

1) A 3D modeling module that allows for the efficient reconstruction of building models and integration with indoor architectural plans;
2) A GeoWeb server that indexes a 3D urban database to render perspective views of both outdoor and indoor environments from any requested vantage;
3) Sensor modules that receive and distribute real-time data;
4) Tracking modules that detect and track pedestrians and vehicles in urban spaces and access highways;
5) Camera pose modules that automatically estimate camera pose relative to the urban environment;
6) Three-dimensionalization modules that receive information from the GeoWeb server, tracking and camera pose modules in order to back-project image tracks to geolocate pedestrians and vehicles within the 3D model;
7) An animation module that represents geo-located dynamic agents as sprites; and
8) A web-based visualization module that allows a user to explore the resulting dynamic 3D visualization in a number of interesting ways.

To demonstrate our system we have used a blend of automatic and semi-automatic methods to construct a rich and accurate 3D model of a university campus, including both outdoor and indoor detail. The demonstration allows web-based 3D visualization of recorded patterns of pedestrian and vehicle traffic on streets and highways, estimations of vehicle speed, and real-time (live) visualization of pedestrian traffic and temperature data at a particular test site. Having demonstrated the system for hundreds of people, we report our informal observations on user reaction, potential application areas, and the main challenges that must be addressed to bring the system closer to deployment.
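The "three-dimensionalization" step described above — back-projecting 2D image tracks through a known camera pose to geolocate agents in the 3D model — can be illustrated with a standard pinhole-camera ground-plane intersection. The sketch below is not the authors' implementation; it is a minimal, hypothetical example assuming a calibrated camera (intrinsics K, world-to-camera pose R, t) and agents moving on a flat ground plane z = 0, which is the common simplification for this kind of geolocation.

```python
import numpy as np

def backproject_to_ground(p_px, K, R, t, ground_z=0.0):
    """Back-project pixel p_px to the world ground plane z = ground_z.

    Assumes the pinhole model p ~ K (R X + t), with (R, t) the
    world-to-camera pose. Returns the 3D world point where the
    viewing ray through the pixel meets the ground plane.
    """
    C = -R.T @ t  # camera centre in world coordinates
    # Direction of the viewing ray in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([p_px[0], p_px[1], 1.0])
    if abs(d[2]) < 1e-9:
        raise ValueError("viewing ray is parallel to the ground plane")
    s = (ground_z - C[2]) / d[2]  # ray parameter at the plane
    if s <= 0:
        raise ValueError("ground point is behind the camera")
    return C + s * d

# Example: camera 10 m above the origin, looking straight down.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])      # camera z-axis points along world -z
t = -R @ np.array([0.0, 0.0, 10.0])  # camera centre at (0, 0, 10)

# A track point 100 px right of the principal point maps to (1, 0, 0):
X = backproject_to_ground((740.0, 360.0), K, R, t)
```

In the full system the pose (R, t) would come from the camera pose module and the ground surface from the GeoWeb server's 3D urban database rather than a flat plane, but the geometry of the back-projection is the same.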