Noise Reduction Automation of LiDAR Point Clouds for Modeling and Representation of High Voltage Lines in a 3D Virtual Globe
J. Santana, Sebastián Ortega, J. M. Santana, A. Trujillo, J. P. Suárez
Spanish Computer Graphics Conference, 2018-06-27. DOI: 10.2312/CEIG.20181160
Given the importance of the electricity supply, electric companies must inspect their infrastructure to guarantee the reliability of the service. Many companies use LiDAR technology to model power line corridors and to detect possible anomalies and risks. This process is expensive in terms of cost and human dependency, so maximizing its automation is critical. This paper presents a method for reducing turbulence noise in airborne LiDAR point clouds prior to visualizing a power line corridor in a virtual 3D globe. An analysis of a set of point clouds indicates that most of the noise forms a mass that follows the helicopter trajectory, so the method integrates a noise reduction process that uses the distance between each point and the helicopter as the cleaning criterion. To validate the effectiveness of the automation, a proposed variation of a classification method applied to a manually filtered point cloud is compared against the same variation integrating the presented noise reduction method. Finally, the resulting model is displayed in a virtual 3D globe, easing analytical tasks.
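The cleaning criterion described above lends itself to a simple geometric filter. A minimal sketch of the idea, assuming the helicopter trajectory is available as a polyline of sampled positions (function and parameter names are hypothetical, not the paper's implementation):

```python
import numpy as np

def filter_trajectory_noise(points, trajectory, min_dist):
    """Remove points closer than min_dist to any trajectory sample.

    points:     (N, 3) array of LiDAR returns.
    trajectory: (M, 3) array of sampled helicopter positions.
    min_dist:   distance threshold below which a point is treated as noise.
    """
    # Pairwise distances from every point to every trajectory sample: (N, M).
    d = np.linalg.norm(points[:, None, :] - trajectory[None, :, :], axis=2)
    # Keep only points whose nearest trajectory sample is far enough away.
    keep = d.min(axis=1) >= min_dist
    return points[keep]
```

A real pipeline would match points to the trajectory by timestamp rather than testing all samples, but the acceptance test is the same.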
On the Design of a Mixed-Reality Annotations Tool for the Inspection of Pre-fab Buildings
Inma García-Pereira, J. Gimeno, C. Portalés, María Vidal-González, P. Morillo
Spanish Computer Graphics Conference, 2018-06-27. DOI: 10.2312/CEIG.20181157
Introducing Augmented Reality (AR) and Virtual Reality (VR) into the inspection work carried out during the construction of prefabricated buildings can enable the early detection and elimination of deviations in quality and energy efficiency. These new tools let inspectors move from traditional note taking on paper to an AR application that supports rich annotations. Later revision of the collected information, on site or in the office, as well as remote communication while an inspection is in progress, can speed up and optimize error detection and quality maintenance through the use of AR and VR. This paper presents work in progress carried out within the SIRAE project, which aims to let AR annotations be visualized and modified in real time or later, either in situ (AR) or from any other location (VR). The first laboratory results are promising: the usability of the system, still in development, suggests an easy adaptation of the workers to the new tool and a substantial streamlining of the inspection processes.
Extending Industrial Digital Twins with Optical Object Tracking
A. Tammaro, Álvaro Segura, A. Moreno, Jairo R. Sánchez
Spanish Computer Graphics Conference, 2017-06-28. DOI: 10.2312/CEIG.20171204
In recent years, the concepts of Industry 4.0 and smart factories have gained increasing importance. One of the central aspects of this innovation is the coupling of physical systems with a corresponding virtual representation, known as the Digital Twin. This technology enables powerful new applications, such as real-time production optimization or advanced cloud services. Ensuring real-virtual equivalence requires multimodal data acquisition frameworks for each production system, exploiting its sensing capabilities, as well as appropriate communication and control architectures. This paper extends the concept of the digital twin of a production system by adding a virtual representation of its operational environment. It describes a proof of concept using an industrial robot, in which the objects inside its working volume are captured by an optical tracking system. Detected objects are added to the digital twin model of the cell along with the robot, yielding a synchronized virtual representation of the complete system that is updated in real time. The paper describes this tracking system as well as the integration of the digital twin into a Web3D-based virtual environment that can be accessed from any compatible device, such as PCs, tablets and smartphones.
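The synchronization between tracker and twin can be pictured as a small data structure that mirrors the tracker's detections into the virtual cell model. A minimal sketch under the assumption of a tracker that reports (name, pose) pairs; all names are hypothetical, not the authors' implementation:

```python
import time

class DigitalTwinCell:
    """Minimal twin of a robot cell: named objects with poses,
    refreshed from an external optical tracker (illustrative sketch)."""

    def __init__(self):
        self.objects = {}  # name -> (pose, last-update timestamp)

    def update_from_tracker(self, detections):
        """detections: iterable of (name, pose) pairs from the tracker.
        Each call overwrites the stored pose, keeping the twin in sync."""
        now = time.time()
        for name, pose in detections:
            self.objects[name] = (pose, now)

    def pose_of(self, name):
        pose, _ = self.objects[name]
        return pose
```

A Web3D front end would poll or subscribe to such a model and redraw the tracked objects alongside the robot.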
Direct Volume Rendering of Stack-Based Terrains
A. Graciano, Antonio J. Rueda Ruiz, F. Feito-Higueruela
Spanish Computer Graphics Conference, 2017-06-28. DOI: 10.2312/ceig.20171207
Traditionally, the rendering of volumetric terrain data, like that of many other scientific 3D datasets, has been carried out by applying direct volume rendering techniques to voxel-based representations. A main problem with this kind of representation is its large memory footprint. Several solutions have emerged to reduce memory consumption and improve rendering performance, such as hierarchical data structures for space division based on octrees. Although these representations have produced excellent results, especially for binary datasets, their use on data containing internal structures organized in layers, as in the case of surface-subsurface terrain, still leads to high memory usage. This paper proposes a compact stack-based representation for 3D terrain data that allows real-time rendering with classic volume rendering procedures. In contrast with previous work that used this representation only to assist rendering, it is used here as the main data structure, keeping the whole dataset on the GPU in a compact form. Furthermore, several visual operations common in geoscientific applications, such as borehole visualization, attenuation of material layers and cross sections, have been implemented.
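A stack-based representation stores, for each cell of the 2D grid, an ordered list of homogeneous material intervals rather than individual voxels, which is what makes layered surface-subsurface data compact. A minimal sketch of such a structure (names hypothetical, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    material: int  # material id of this homogeneous layer
    top: float     # height at which the layer ends

# One stack per (x, y) grid cell: intervals ordered from the bottom up.
def material_at(stack, z):
    """Return the material id at height z, or None above the surface."""
    bottom = 0.0
    for layer in stack:
        if bottom <= z < layer.top:
            return layer.material
        bottom = layer.top
    return None

# Example cell: 10 m of rock (id 1) under 2 m of soil (id 2).
cell = [Interval(material=1, top=10.0), Interval(material=2, top=12.0)]
```

A ray marcher can sample such stacks directly, which is how classic volume rendering procedures remain applicable without expanding the data to voxels.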
A Web 3D-Scene Editor of Virtual Reality Exhibitions and 360-degree Videos
Vicente Martínez, Alba M. Ríos, F. J. Melero
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161308
This work consists of the development of a Web-based system that allows museum managers to generate their own virtual 3D exhibitions using a two-dimensional graphical user interface. These virtual 3D exhibitions can be displayed interactively on the museum's website, or as an immersive experience in a virtual headset (e.g. Google Cardboard) via a mobile app. The SVG specification is used to edit the exhibition, handling 3D models of the rooms, sculptures and pictures. The user can also add a list of cameras with their respective control points to the scene, and then generate several routes through it. The scene is rendered in the browser using standard technologies such as WebGL (Three.js) and X3D (X3DOM), and the mobile app is generated with Unity3D. In addition, the X3D and MPEG-4 compression standards allow the scene and its camera routes to be transformed into 360° videos, in which the user can manipulate the camera orientation while the track plays. Audio tracks can be added to each route and hence inserted into the 360° video.
Controlador de aforo
Francisco R. Feito-Higueruela, José Negrillo-Cárdenas, R. J. Segura, C. J. Ogáyar, J. M. Fuertes, M. J. Lucena
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161325
The Center for Advanced Studies in Information and Communication Technologies (CEATIC) of the Universidad de Jaén is part of the university's strategy of supporting research and teaching excellence and of promoting knowledge transfer. The objective is to develop a non-profit center that brings together research groups, resources and instrumental means enabling the advancement of knowledge, development and innovation in the field of information and communication technologies, through education, scientific research and technological development of excellence.
Knight Lore 20xx: Bringing a Classic Game to Modern Technology
Ricard Galvany, G. Patow
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161324
This paper reports on the experience, the problems encountered and the solutions found in developing Knight Lore 20XX, an experiment in using computer graphics techniques to bring a classic game from the 80s to modern technology.
Ordering Triangles in Triangulated Terrains Over Regular Grids
J. Alonso, R. Joan-Arinyo
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161313
This work reports on a set of rules for visiting the triangles of triangulated height fields defined over regular grids in back-to-front order with respect to an arbitrary viewpoint. An axis-aligned local reference frame is associated with the viewpoint. Projections of the local axes onto the XY plane, together with the bisector of the first and third quadrants, define six sectors, and specific visiting rules are defined for the collections of triangles that project onto each sector. The experiments conducted show that a simple algorithm based on these visiting rules allows real-time interaction while the viewing position moves along an arbitrary 3D path.
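The underlying principle, visiting cells farther from the viewpoint before nearer ones, can be illustrated without the six-sector machinery. A simplified sketch of an axis-aligned back-to-front sweep of a regular grid, which the paper's per-sector rules refine down to individual triangles (names hypothetical):

```python
def back_to_front_cells(nx, ny, vx, vy):
    """Yield (i, j) cell indices of an nx-by-ny grid in back-to-front
    order for a viewpoint whose XY projection lies at (vx, vy):
    along each axis, cells on the far side of the viewpoint come first,
    then cells approaching it from the other side, then its own row/column."""
    cx = min(max(int(vx), 0), nx - 1)  # clamp viewpoint cell to the grid
    cy = min(max(int(vy), 0), ny - 1)
    xs = list(range(nx - 1, cx, -1)) + list(range(0, cx)) + [cx]
    ys = list(range(ny - 1, cy, -1)) + list(range(0, cy)) + [cy]
    for j in ys:
        for i in xs:
            yield (i, j)
```

Rendering cells in this order lets nearer geometry overwrite farther geometry, the classic painter's-algorithm property the visiting rules guarantee.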
Procedural Modeling of Terrain from GPS Routes
C. Gasch, M. Chover, I. Remolar, Cristina Rebollo
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161321
Procedural modeling techniques provide an easy way to synthesize realistic terrain. Many methods exist for this purpose, but few of them provide the constraints necessary to control the final result, and the current controlling methods are slow and inaccurate. This paper presents a new procedural method for synthesizing realistic terrain that satisfies the constraint of passing through a set of consecutive coordinates from GPS routes. Random terrain generation is based on the Perlin noise algorithm. Instead of drawing a random value at every point of each octave, the method introduces the novelty of solving for the values each point needs so that, when the Perlin algorithm is applied, the coordinates of the GPS route are satisfied. Points not included in the constraint keep the randomness of the original Perlin method. The results show that the method can integrate paths of varying complexity with high accuracy and at low computational cost, while preserving the natural appearance of the procedural generation.
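As background for the method, multi-octave Perlin-style noise sums lattice noise at doubling frequencies and halving amplitudes. A generic 1D value-noise sketch of that construction follows; the paper's actual contribution, solving the per-octave lattice values at the GPS route points so the sum interpolates the route, is not reproduced here:

```python
import math, random

def fbm_height(x, octaves=4, seed=0):
    """Fractal height at x: sum of smoothly interpolated lattice noise,
    frequency doubling and amplitude halving per octave."""
    def lattice(i, o):
        # Deterministic pseudo-random value in (-1, 1) per lattice point.
        return random.Random(hash((seed, i, o))).uniform(-1, 1)

    h, amp, freq = 0.0, 1.0, 1.0
    for o in range(octaves):
        xf = x * freq
        i = math.floor(xf)
        t = xf - i
        s = t * t * (3 - 2 * t)  # smoothstep interpolation weight
        h += amp * ((1 - s) * lattice(i, o) + s * lattice(i + 1, o))
        amp *= 0.5
        freq *= 2.0
    return h
```

Constraining such a sum means choosing the `lattice` values at route points so the octaves add up to the measured heights, while all other lattice points stay random.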
Spatial Opacity Maps for Direct Volume Rendering of Regions of Interest
A. Aguilera, Alejandro León
Spanish Computer Graphics Conference, 2016-09-13. DOI: 10.2312/CEIG.20161310
Despite the mature state of the volume rendering field, its adoption in medical applications is hindered by its complex parametrization and control, and 2D slice-based tools are still preferred in clinical workflows. This paper introduces the concept of spatial opacity maps as an interactive tool for exploring volumetric data while focusing the rendering on features of interest. In a region-growing fashion, the maps are dynamically created from user input on the 2D slices, taking into account not only the density values of the structure but also its topology. Using this approach, an inexperienced user can generate meaningful 3D renderings without tweaking non-intuitive visualization parameters. The spatial opacity maps are independent of the current visualization parameters; they can easily be plugged into the volume rendering integral and combined with other approaches for region-of-interest (ROI) visualization. The approach is combined with a simple automatic transfer function generation algorithm to improve the visualization of the contextual data.
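The region-growing step can be sketched as a flood fill over the voxel grid seeded from the user's click on a slice. A simplified binary version, assuming a plain density tolerance as the acceptance test (the paper also accounts for topology and produces graded, not binary, opacities):

```python
from collections import deque
import numpy as np

def grow_opacity_map(volume, seed, tol):
    """Flood-fill from a seed voxel, accepting 6-connected neighbours
    whose density is within tol of the seed's density. Returns a
    boolean mask usable as a (binary) spatial opacity map."""
    mask = np.zeros(volume.shape, dtype=bool)
    ref = volume[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if (all(0 <= n[k] < volume.shape[k] for k in range(3))
                    and not mask[n] and abs(volume[n] - ref) <= tol):
                mask[n] = True
                q.append(n)
    return mask
```

During ray casting, the mask modulates the opacity contributed by each sample, so only the grown region (plus any chosen context) is rendered.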