Patterns of Creativity in Design
E. Bilotta, Pietro S. Pantano
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/209-212
{"title":"Patterns of Creativity in Design","authors":"E. Bilotta, Pietro S. Pantano","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/209-212","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/209-212","url":null,"abstract":"","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125693800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fractal Compression Approach for Efficient Interactive Terrain Rendering on the GPU
U. Erra, V. Scarano, D. Guida
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/081-087
This paper describes an efficient technique for the rendering of large terrain surfaces. The technique is based on a simple ring structure: a sequence of concentric rings at different resolutions, centered on the viewer's position. Each ring is represented by a set of patches at identical resolution. Rings near the viewer have a finer resolution than rings further from the viewer. At runtime, the patches within the rings change resolution based on the viewer's position. The GPU decodes in real time height maps encoded by a fractal compressor, from which it samples the height component of the terrain. Since adjacent patches of different rings can disagree on the resolution of a common edge, the GPU stitches the meshes in order to avoid any cracks or degenerate triangles. The rendered meshes thus ensure the absence of cracks that would cause visual artifacts. In addition, a tile manager is evaluated in order to keep terrain datasets on disk storage, avoiding a costly load of the entire dataset into memory. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture and Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.7 [Computer Graphics]: Fractals
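A minimal sketch of the ring idea described above: each patch picks its grid resolution from the concentric ring it falls into, with resolution halving per ring. The ring radii, patch size and base resolution below are made-up values for illustration, not parameters from the paper.

```python
# Illustrative sketch (not the authors' implementation): choose a per-patch
# mesh resolution from concentric rings centred on the viewer.
import math

BASE_RESOLUTION = 64                         # finest patch grid (innermost ring), assumed
RING_RADII = [64.0, 128.0, 256.0, 512.0]     # hypothetical ring boundaries in world units

def ring_index(patch_center, viewer_pos):
    """Return the ring a patch belongs to, based on distance to the viewer."""
    d = math.dist(patch_center, viewer_pos)
    for i, radius in enumerate(RING_RADII):
        if d <= radius:
            return i
    return len(RING_RADII)                   # beyond the last ring: coarsest level

def patch_resolution(patch_center, viewer_pos):
    """Halve the grid resolution for each ring farther from the viewer."""
    return max(1, BASE_RESOLUTION >> ring_index(patch_center, viewer_pos))

# As the viewer moves, patches migrate between rings and change resolution.
viewer = (10.0, 0.0)
for center in [(20.0, 0.0), (100.0, 50.0), (400.0, 300.0)]:
    res = patch_resolution(center, viewer)
    print(f"patch at {center}: {res} x {res} grid")
```

In the paper's pipeline this selection would feed the GPU decoding and stitching stages; here it only shows how a distance-based ring lookup yields the per-patch level of detail.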
{"title":"Fractal Compression Approach for Efficient Interactive Terrain Rendering on the GPU","authors":"U. Erra, V. Scarano, D. Guida","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/081-087","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/081-087","url":null,"abstract":"This paper describes an efficient technique for the renderingof large terrain surfaces. The technique is based on a simple rings structure:a sequenceof concentricrings at different resolutionsand centeredon the viewer’s position. Each ring is represented by a set of patches at identical resolutions. Rings near the viewer have a finer resolution than the rings further from the viewer. At runtime, the patches within the rings change resolution based on the viewer’s position. The GPU decodes in real time height maps encoded by a fractal compressorfrom which sample the height component of the terrain. Since adjacent patches of different rings can disagree on the resolution of common edge GPU stitches the meshes in order to avoid any cracks or degenerate triangles. The renderedmeshes ensurethe absenceof cracks that may cause the appearanceof visual artifacts. In addition, a tile managersupport is evaluated in order to maintain terrain datasets on disk storage avoiding a costly load of the entire datasets into the memory. Categories and Subject Descriptors (according to ACM CCS) : I.3.3 [Computer Graphics]: Picture and Image Gener-ation I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism I.3.7 [Computer Graphics]: Fractals","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131621025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shape Reconstruction with Uncertainty
Laura Papaleo, E. Puppo
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/053-059
This paper presents a general surface reconstruction framework which encapsulates the uncertainty of the sampled data, making no assumption on the shape of the surface to be reconstructed. Starting from the input points (either point clouds or multiple range images), an Estimated Existence Function (EEF) is built which models the space in which the desired surface could exist and, by the extraction of EEF critical points, the surface is reconstructed. The final goal is the development of a generic framework able to adapt the result to different kinds of additional information coming from multiple sensors. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Shape Modeling, Uncertain Data, Multi-sensor Data Fusion. 1. Introduction: 3D scanning devices are becoming more and more available and affordable. Thanks to modern acquisition technologies, heterogeneous data can be acquired from multiple acquisition sensors, which often incorporate information about the uncertainty of the data sampling process. Surface reconstruction techniques designed around a specific sensor often take uncertainty into account during the reconstruction process, but they are limited to working with a single device. On the contrary, general techniques that can process data coming from different sensors usually disregard much of the sensor-specific information, and seldom take uncertainty into account. The basic concept of our approach is
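The abstract does not give the EEF's formula, so the sketch below is only one plausible reading of the idea: a scalar field where each uncertain sample contributes a Gaussian whose spread reflects its measurement uncertainty, and regions of high field value mark where the surface is likely to exist. All names and values are illustrative assumptions, not the authors' formulation.

```python
# Illustrative interpretation only: an "existence" field built from uncertain
# samples coming from sensors with different confidence.
import numpy as np

def existence_field(query_points, samples, sigmas):
    """Sum of isotropic Gaussians centred on the samples.

    query_points : (M, 3) points where the field is evaluated
    samples      : (N, 3) measured surface points
    sigmas       : (N,)   per-sample uncertainty (std. dev., world units)
    """
    diff = query_points[:, None, :] - samples[None, :, :]    # (M, N, 3)
    sq_dist = np.sum(diff * diff, axis=-1)                    # (M, N)
    weights = np.exp(-sq_dist / (2.0 * sigmas[None, :] ** 2))
    return weights.sum(axis=1)

# Toy example: two scans of the plane z = 0 with different confidence.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.2, 0.2, size=(200, 2))
scan_a = np.column_stack([xy, rng.normal(0.0, 0.01, 200)])    # precise scan
scan_b = scan_a + rng.normal(0.0, 0.03, scan_a.shape)         # noisier scan
samples = np.vstack([scan_a, scan_b])
sigmas = np.concatenate([np.full(200, 0.05), np.full(200, 0.10)])

probe = np.array([[0.0, 0.0, z] for z in np.linspace(-0.3, 0.3, 7)])
print(existence_field(probe, samples, sigmas).round(1))       # values peak near z = 0
```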
{"title":"Shape Reconstruction with Uncertainty","authors":"Laura Papaleo, E. Puppo","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/053-059","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/053-059","url":null,"abstract":"Abstract This paper presents a general Surface Reconstruction framework which encapsulates the uncertainty of the sam-pled data, making no assumption on the shape of the surface to be reconstructed. Starting from the input points(either points clouds or multiple range images), an Estimated Existence Function (EEF) is built which modelsthe space in which the desired surface could exist and, by the extraction of EEF critical points, the surface isreconstructed. The nal goal is the development of a generic framework able to adapt the result to different kindsof additional information coming from multiple sensors. Categories and Subject Descriptors (according to ACM CCS) : I.3.3 [Computer Graphics]: Shape Modeling, Uncer-tain data, Multi-sensor Data Fusion 1. Introduction 3D scanning devices are becoming more and more availableand affordable. Thanks to modern acquisition technologies,heterogeneous data can be acquired from multiple acquisi-tion sensors, which often incorporate information about un-certainty of the data sampling process. Surface reconstruc-tion techniques designed over a specic sensor often takeinto account uncertainty during the reconstruction process,but they are limited to work with a single device. On thecontrary, general techniques that can process data comingfrom different sensors usually disregard much part of sensor-specic information, and seldom take into account uncer-tainty.The basic concept of our approach is","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121837012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crime Scene Interpretation Through an Augmented Reality Environment
Andrea Casanova, M. De Marsico, S. Ricciardi
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2011/029-033
Despite its potential advantages, gesture-based interface usage is currently rather limited due to operational and practical issues, and most proposals aim at replacing mouse and keyboard functionality in medical/surgical applications. This paper presents a crime scene interpretation framework which combines an augmented reality visual paradigm with gesture-based interaction to provide a new generation of detectives with interactive visualization and manipulation of virtual exhibits while seeing the real environment. The idea is to augment the exploration of the crime scene by means of a see-through head-mounted display, exploiting a small set of simple (user-wise) gestures and the visual interface to enable a wider set of commands and functionalities, improving both the efficacy and the accuracy of user-system interaction. The proposed system allows the user to freely position virtual replicas of real objects to interactively build visual hypotheses about the crime under investigation, or even to set virtual landmarks which can be used to take distance/angular measurements. All these actions can be performed without mouse and keyboard, simply through intuitive gestures. Interaction techniques.
{"title":"Crime Scene Interpretation Through an Augmented Reality Environment","authors":"Andrea Casanova, M. De Marsico, S. Ricciardi","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2011/029-033","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2011/029-033","url":null,"abstract":"Despite its potential advantages, gesture based interface usage is currently rather limited due to operational and practical issues, while most proposals aim at replacing mouse and keyboard functionalities for medical/surgical applications. This paper presents a crime scene interpretation framework which combines augmented reality visual paradigm and gesture based interaction to provide a new generation of detectives with interactive visualization and manipulation of virtual exhibits while seeing the real environment. The idea is to augment the exploration of the crime scene by means of a see-through head mounted display, exploiting a small set of simple (user-wise) gestures and the visual interface to enable a wider set of commands and functionalities, improving both the efficacy and the accuracy of user-system interaction. The proposed system allow the user to freely position virtual replicas of real object to interactively build visual hypothesis about the crime under investigation, or even to set virtual landmarks which can be used to take distance/angular measurements. All these action can be performed without mouse and keyboard but simply through intuitive gestures. Interaction techniques.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132554274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tools and Applications for Teaching and Research in Computer Graphics
F. Guggeri, Marco Livesu, R. Scateni
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/147-152
In this paper we present work in progress, along with some preliminary research results, in the field of Computational Geometry and Mesh Processing obtained by the Computer Graphics Group of the University of Cagliari, Italy. We focus on the work in mesh analysis by introducing the development of a lightweight visualization and processing tool that helped expand the aims of the group by letting students from the University take their first steps in Computer Graphics. We show some results obtained by the group, with a focus on the usefulness of a common framework of reference.
{"title":"Tools and Applications for Teaching and Research in Computer Graphics","authors":"F. Guggeri, Marco Livesu, R. Scateni","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/147-152","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/147-152","url":null,"abstract":"In this paper we present the work in progress along with some preliminary research results in the field of Computational Geometry and Mesh Processing obtained by the Computer Graphics Group of the University of Cagliari, Italy. We focus on the work in mesh analysis by introducing the development of a lightweight visualization and processing tool that helped expanding the aims of the group by letting the students from the University move their first steps in Computer Graphics. We show some results obtained by the group with the focus on the usefulness of a common framework of reference.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"31 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133208251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ImaginationTOOLS (TM) - A 3D Environment for Learning and Playing Music
E. Bilotta, Pietro S. Pantano, F. Bertacchini, L. Gabriele, G. Longo, Vincenzo Mazzeo, C. Rizzuti, A. Talarico, G. Tocci, S. Vena
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/139-144
In this paper, we present ImaginationTOOLS, a software environment which uses mathematical models to generate, analyse and synthesize sound and music. The user learns basic concepts about mathematical models and music in an imaginary 3D city by interacting with pedagogical agents; composes in a laboratory room using a manipulatory, agent-like interface; listens to music in an immersive room; creates interactive music with ad hoc designed instruments; and performs and shares music on the net. It represents a real advance over the existing, similar scientifically oriented products available on the market and can be considered a fusion of different music and multimedia technologies, with a strong developmental trend toward physical interaction design. It is the first software to use 3D interaction in a 3D environment to produce sound and music, extending the potential of musicians by experimenting in the psycho-acoustical domain of sound and overcoming the problems of musical education with a simple interface.
{"title":"ImaginationTOOLS (TM)- A 3D Environment for Learning and Playing Music","authors":"E. Bilotta, Pietro S. Pantano, F. Bertacchini, L. Gabriele, G. Longo, Vincenzo Mazzeo, C. Rizzuti, A. Talarico, G. Tocci, S. Vena","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/139-144","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/139-144","url":null,"abstract":"In this paper, we present ImaginationTOOLS, a software which uses mathematical models to generate, analyse and synthesize sound and music. User learns in an imaginary 3D-city basic concepts on mathematical models and music by interacting with pedagogical agents; composes in a laboratory room using a manipulatory agent-like interface; listens to music in an immersive room and creates interactive music with ad-hoc designed instruments and performs and shares music on the net. It represents a real advancement on the existing similar scientificallyoriented products available on the market and it is considered a fusion of different music and multimedia technologies, with a strong developmental trend into physical interaction design. It is the first software using 3D interaction in a 3D environment to produce sound and music, extending the potential of musicians by experimenting in the psycho-acoustical domain of sound and overcoming the problems of musical education, with simple interface.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133965152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time Cataract Surgery Simulation for Training
Marco Agus, E. Gobbetti, G. Pintore, G. Zanetti, Antonio Zorcolo
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/183-187
A cataract is a clouding of the eye's natural lens, normally due to natural aging changes, affecting at least half of the population over 65 years of age. Cataract extraction is the only solution for restoring clear vision, and nowadays is probably the most frequently performed surgical procedure. This paper describes a novel virtual reality simulation system for cataract surgery training, covering the capsulorhexis and phacoemulsification tasks. The simulator runs on a multiprocessing PC platform and provides realistic, physically-based visual simulation of tool interactions. The current setup employs SensAble PHANToM devices for the interaction and a binocular display for presenting images to the user.
{"title":"Real-time Cataract Surgery Simulation for Training","authors":"Marco Agus, E. Gobbetti, G. Pintore, G. Zanetti, Antonio Zorcolo","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/183-187","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/183-187","url":null,"abstract":"Cataract is a clouding of the eye’s natural lens, normally due to natural aging changes, and involving at least half of the population over 65 years. Cataract extraction is the only solution for restoring a clear vision, and nowadays is probably the most frequently practiced surgical procedure. This paper describes a novel virtual reality simulation system for cataract surgery training, involving the capsulorhexis and phacoemulsication tasks. The simulator runs on a multiprocessing PC platform and provides realistic physically-based visual simulations of tools interactions. The current setup employs SensAble PHANToM for simulating the interaction devices, and a binocular display for presenting images to the user.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132013460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Frame Rate Up-conversion with Motion Extraction from 3D Space for 3D Pipelines
M. Falchetto, M. Barone, D. Pau
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/247-253
{"title":"Adaptive Frame Rate Up-conversion with Motion Extraction from 3D Space for 3D Pipelines","authors":"M. Falchetto, M. Barone, D. Pau","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/247-253","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2006/247-253","url":null,"abstract":"","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132879779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Algorithm for Adaptive Segmentation and Tessellation with Pixel Precision
Alessandro Martinelli
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/015-022
We propose a new algorithm to obtain a representation of a curved surface with the precision of the image pixel. This technique uses some results from scan-line algorithms, but it also considers the new functionality offered by graphics hardware and takes advantage of it. We explain the general method, with principles common to every kind of surface; then we illustrate how these principles can be applied to quadratic and cubic Bézier triangles, showing formulas and some algorithm details.
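For context, the sketch below shows the standard evaluation of a quadratic Bézier triangle and a simplistic pixel-precision criterion (subdivide the parameter domain until every projected edge is shorter than one pixel). It is not the paper's algorithm; the orthographic projection and the scale constant are assumptions made purely for the example.

```python
# Hedged sketch: quadratic Bézier triangle evaluation plus naive adaptive
# subdivision of the barycentric domain down to sub-pixel triangles.
import numpy as np

def eval_quadratic_bezier_triangle(cp, u, v, w):
    """cp holds the six control points b200, b020, b002, b110, b101, b011."""
    b200, b020, b002, b110, b101, b011 = cp
    return (u*u*b200 + v*v*b020 + w*w*b002
            + 2*u*v*b110 + 2*u*w*b101 + 2*v*w*b011)

PIXELS_PER_UNIT = 64.0            # hypothetical orthographic screen mapping

def project(p):                    # 3D point -> 2D pixel coordinates (assumed)
    return p[:2] * PIXELS_PER_UNIT

def tessellate(cp, tri=((1, 0, 0), (0, 1, 0), (0, 0, 1)), out=None):
    """Recursively split the barycentric domain until edges are sub-pixel."""
    if out is None:
        out = []
    corners = [eval_quadratic_bezier_triangle(cp, *bc) for bc in tri]
    pix = [project(p) for p in corners]
    longest = max(np.linalg.norm(pix[i] - pix[(i + 1) % 3]) for i in range(3))
    if longest <= 1.0:             # pixel precision reached: emit the triangle
        out.append(corners)
        return out
    a, b, c = (np.array(bc, float) for bc in tri)
    ab, bc_, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    for sub in ((a, ab, ca), (ab, b, bc_), (ca, bc_, c), (ab, bc_, ca)):
        tessellate(cp, tuple(map(tuple, sub)), out)
    return out

# Example: a gently curved patch over the unit triangle.
control_points = [np.array(p, float) for p in
                  [(0, 0, 0), (1, 0, 0), (0, 1, 0), (.5, 0, .2), (0, .5, .2), (.5, .5, .2)]]
print(len(tessellate(control_points)), "triangles at pixel precision")
```

A real pixel-precision scheme would drive the split with a screen-space error bound rather than plain edge length, and would be done on the GPU as the abstract suggests; the sketch only fixes the ideas of barycentric evaluation and adaptive refinement.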
{"title":"An Efficient Algorithm for Adaptive Segmentation and Tessellation with Pixel Precision","authors":"Alessandro Martinelli","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/015-022","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2007/015-022","url":null,"abstract":"We propose a new algorithm to get a representation of a curved surface with the precision of the image pixel. This technique uses some results from Scan-line algorithms, but it considers also the new functionalities from graphics hardware and takes advantages from it. We explain the general method, with principles common to every kind of surface: then we illustrate how these principles can be applied to quadratic and cubic beziér triangles, showing formulas and some algorithm details.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117345900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Wide-Area Camera Localization for Mixed Reality
V. Garro, Andrea Fusiello
DOI: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/117-122
This paper describes work in progress towards the implementation of a complete system that provides tourists with relevant visual information related to cultural heritage sites. Thanks to the diffusion of high-end mobile devices and recent improvements in computer vision research on 3D structure-and-motion reconstruction, it is now possible to develop mobile mixed reality applications that can interact with spots of historical interest in the city. In particular, we present an accurate localization of the mobile device that leverages a pre-computed 3D structure to obtain image-model correspondences. Preliminary experiments with a calibrated camera, both indoor and outdoor, demonstrate sufficient accuracy to support mixed reality.
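The abstract's localization step amounts to recovering the camera pose from image-model correspondences with a calibrated camera. The sketch below shows the generic way to do this with OpenCV's robust PnP solver; the data are synthetic and the pipeline is not the authors', only an illustration of the kind of computation involved.

```python
# Minimal sketch: camera pose from 2D-3D correspondences with a known
# calibration matrix, using PnP inside a RANSAC loop to reject bad matches.
import numpy as np
import cv2

# Hypothetical data: 3D points from a pre-computed structure model and their
# matched 2D detections in the current camera frame (here simulated by
# projecting with a known ground-truth pose).
object_points = np.random.rand(50, 3) * 10.0
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.5, -0.3, 15.0])
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec, K, None)

# Robust pose estimation from the image-model correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, distCoeffs=None,
    reprojectionError=2.0, iterationsCount=100)
if ok:
    print("estimated rotation (Rodrigues):", rvec.ravel())
    print("estimated translation:", tvec.ravel())
    print("inliers:", 0 if inliers is None else len(inliers))
```

With real imagery the correspondences would come from matching query-image features against the pre-computed 3D structure; the pose recovered by the solver is what anchors the mixed reality overlay.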
{"title":"Toward Wide-Area Camera Localization for Mixed Reality","authors":"V. Garro, Andrea Fusiello","doi":"10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/117-122","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2010/117-122","url":null,"abstract":"This paper describes a work in progress towards the implementation of a complete system that provides tourists with relevant visual information related to cultural heritage sites. Thanks to the diffusion of high-end mobile devices and the recent improvement in computer vision research on 3D Structure and Motion reconstruction, it is now possible to develop mobile mixed reality applications that can interact with spots of historical interest in the city. In particular we present an accurate localization of the mobile device that leverages on a pre-computed 3D structure to obtain image-model correspondences. Preliminary experiments with a calibrated camera – indoor and outdoor – demonstrate sufficient accuracy to support mixed reality.","PeriodicalId":405486,"journal":{"name":"European Interdisciplinary Cybersecurity Conference","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128591790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}