A laser scanning campaign to capture the geometry of a large heritage site can produce thousands of high-resolution range scans, which must be cleaned to remove noise and artefacts. To accelerate the cleaning task, we can i) reduce the time required for batch-processing tasks, ii) reduce user interaction time, or iii) replace interactive tasks with more efficient automated algorithms. We present a point cloud cleaning framework that attempts to improve each of these aspects. First, we present a novel system architecture targeted at point cloud segmentation. This architecture represents 'layers' of related points in a way that greatly reduces memory consumption and provides efficient set operations between layers. These set operations (union, difference, intersection) allow the creation of new layers which aid the segmentation task. Next, we introduce roll-corrected 3D camera navigation that allows a user to look around freely while reducing disorientation. A user study showed that this camera mode significantly reduces a user's navigation time between locations in a large point cloud, thus accelerating point selection operations. Finally, we show how boosted random forests can be trained interactively, per scan, to assist users in a point cleaning task. To achieve interactivity, we sub-sample the training data on the fly and use efficient features adapted to the properties of range scans. Training and classification required 8--9 s for point clouds of up to 11 million points. Tests showed that a simple user-selected seed allowed walls to be recovered from tree and bush overgrowth with up to 92% accuracy (F-score). A preliminary user study showed that overall task time was improved; with 19 users, however, the study could not confirm this result as statistically significant.
These results are, however, promising and suggest that even larger performance improvements are likely with more sophisticated features or the use of colour range images, which are now commonplace.
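The abstract does not detail the layer representation; as an illustrative sketch (all names hypothetical, not the authors' implementation), each layer can be stored as a boolean mask over a shared point index space, so that set operations reduce to vectorised bitwise operations and no point data is ever duplicated:

```python
import numpy as np

class Layer:
    """A set of points in a scan, stored as a boolean mask over a shared
    index space. A mask costs one byte per point (one bit if packed with
    np.packbits), so layers stay cheap even for multi-million-point scans."""

    def __init__(self, n_points, indices=()):
        self.mask = np.zeros(n_points, dtype=bool)
        self.mask[list(indices)] = True

    @classmethod
    def from_mask(cls, mask):
        layer = cls(len(mask))
        layer.mask = mask
        return layer

    def union(self, other):
        return Layer.from_mask(self.mask | other.mask)

    def difference(self, other):
        return Layer.from_mask(self.mask & ~other.mask)

    def intersection(self, other):
        return Layer.from_mask(self.mask & other.mask)

    def indices(self):
        # Recover the point indices belonging to this layer.
        return np.flatnonzero(self.mask)
```

A "clean" layer could then be derived as `scan.difference(vegetation)` without copying any coordinates, which is the kind of cheap layer creation the abstract describes.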
"Accelerating Point Cloud Cleaning" by Rickert L. Mulder and P. Marais. Eurographics Workshop on Graphics and Cultural Heritage, 2016. doi:10.2312/gch.20161410
Nicola Lercari, Jurgen Shulze, W. Wendrich, Benjamin W. Porter, Margie Burton, T. Levy
Recent events have dramatically highlighted the vulnerability of the world's material cultural heritage. The 3-D Digital Preservation of At-Risk Global Cultural Heritage project, led by Thomas Levy at UC San Diego, catalyzes a collaborative research effort by four University of California campuses (San Diego, Berkeley, Los Angeles and Merced) to use cyber-archaeology and computer graphics for cultural heritage to document and virtually safeguard some of the most at-risk heritage objects and places. Faculty and students involved in this project are conducting path-breaking archaeological research - covering more than 10,000 years of culture and architecture - in Cyprus, Greece, Egypt, Ethiopia, Israel, Jordan, Morocco, Turkey, and the United States. This project uses the 3-D archaeological data collected in numerous at-risk heritage places to study, forecast, and model the effects of human conflict, climate change, natural disasters and technological and cultural changes on these sites and landscapes. The greater challenge undertaken by this project is to integrate archaeological heritage data and digital heritage data using the recently announced Pacific Research Platform (PRP) and its 10--100 Gb/s network as well as virtual reality kiosks installed at each participating UC campus. Our aim is to link UC San Diego and the San Diego Supercomputer Center to labs, libraries and museums at the other UC campuses to form a highly networked collaborative platform for curation, analysis, and visualization of 3D archaeological heritage data.
"3-D Digital Preservation of At-Risk Global Cultural Heritage". Eurographics Workshop on Graphics and Cultural Heritage, 2016. doi:10.2312/gch.20161395
Robert Gregor, P. Mavridis, A. Wiltsche, T. Schreck
Recent improvements in 3D acquisition and shape processing methods have led to increased digitization of 3D Cultural Heritage (CH) objects. Beyond the mere digital archival of CH artifacts, there is an emerging research area dedicated to the digital restoration of 3D Cultural Heritage artifacts. In particular, several methods have been published recently that, from a digitized set of fragments, enable their reassembly or even the synthesis of missing or eroded parts. Usually the result of such methods is a set of aligned but disconnected parts. However, it is often desirable to produce a single, watertight mesh that can easily be 3D printed. We propose a method based on a volumetric soft union operation that combines such sets of aligned fragments into a single manifold mesh while producing smooth and plausible geometry at the seams. We assess its visual quality and efficiency in comparison to an adaptation of the well-known Poisson Reconstruction method. Finally, we provide practical insights on printing the results produced by our method on digitized fragments of real CH objects.
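The abstract does not specify the soft union operator; as a sketch of the general idea, aligned fragments can be converted to signed distance fields on a shared grid and blended with a smooth minimum, which behaves like a hard union away from the seams but rounds the surface where fragments meet. The polynomial smooth-min below is a standard graphics formulation, not necessarily the authors' operator:

```python
import numpy as np

def soft_union(sdf_a, sdf_b, k=0.05):
    """Smooth minimum of two signed distance fields sampled on the same grid.

    Where |sdf_a - sdf_b| >= k this equals min(sdf_a, sdf_b), i.e. an
    ordinary (hard) union of the two solids; within a band of width k it
    dips slightly below the hard minimum, which rounds the zero isosurface
    at the seam. A watertight mesh can then be extracted from the combined
    field, e.g. with marching cubes."""
    h = np.clip(0.5 + 0.5 * (sdf_b - sdf_a) / k, 0.0, 1.0)
    return sdf_b * (1.0 - h) + sdf_a * h - k * h * (1.0 - h)
```

Larger `k` widens the blend region, trading geometric fidelity near the seam for smoothness.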
"A Soft Union based Method for Virtual Restoration and 3D Printing of Cultural Heritage Objects". Eurographics Workshop on Graphics and Cultural Heritage, 2016. doi:10.2312/gch.20161381
Robert Gregor, I. Sipiran, Georgios Papaioannou, T. Schreck, Anthousis Andreadis, P. Mavridis
Due to recent improvements in 3D acquisition and shape processing technology, the digitization of Cultural Heritage (CH) artifacts is gaining increased application in the context of archival and archaeological research. This increasing availability of acquisition technologies also implies a need for intelligent processing methods that can cope with imperfect object scans. For Cultural Heritage objects specifically, besides imperfections introduced by the digitization process, the original artifacts themselves may be imperfect due to deterioration or fragmentation. Currently, the reconstruction of previously digitized CH artifacts is mostly performed manually by expert users who reassemble fragment parts and complete imperfect objects by modeling. However, more automatic methods for CH object repair and completion are needed to cope with the increasingly large amounts of data becoming available.

In this conceptual paper, we first provide a brief survey of typical imperfections in CH artifact scan data and in turn motivate the need for corresponding repair methods. We survey and classify a selection of existing reconstruction methods with respect to their applicability to CH objects, and then discuss how these approaches can be extended and combined to address the various types of physical defects encountered in CH artifacts, proposing a flexible repair workflow for 3D digitizations of CH objects. The workflow accommodates an automatic reassembly step which can deal with fragmented input data. It also includes the similarity-based retrieval of appropriate complementary object data, which is used to repair local and global object defects. Finally, we discuss options for evaluating the effectiveness of such a CH repair workflow.
"Towards Automated 3D Reconstruction of Defective Cultural Heritage Objects". Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141311
This work is concerned with MultiSpectral Imaging (MSI) and image processing of ancient manuscripts. The imaged writings are partly in poor condition: some are faded, others have been erased and overwritten. This hampers their transcription by the philologists on our project team. In order to increase legibility, the manuscripts investigated were imaged with a portable MSI system. While imaging in selected narrow spectral ranges already improves legibility, post-processing techniques can be applied to the MSI data to gain further contrast enhancement. For this purpose, three different dimension reduction techniques are applied to the manuscripts. A qualitative analysis shows that these techniques are capable of increasing the legibility of the ancient writings compared to unprocessed multispectral images.
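The three dimension reduction techniques are not named in the abstract; Principal Component Analysis over the spectral bands is a typical example of such a technique for this task. A minimal numpy sketch, assuming an H×W×B multispectral cube, could look like this:

```python
import numpy as np

def pca_bands(stack, n_components=3):
    """Project an (H, W, B) multispectral cube onto its n_components
    leading principal components across the B spectral bands, yielding
    (H, W, n_components) images. The first components concentrate most of
    the variance and often separate ink from parchment background."""
    H, W, B = stack.shape
    X = stack.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                       # center each band
    cov = X.T @ X / (X.shape[0] - 1)          # B x B band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return (X @ eigvecs[:, order]).reshape(H, W, n_components)
```

Component images beyond the first (which mostly captures overall brightness) are often where erased or overwritten text becomes visible.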
"Enhancement of MultiSpectral Images of Ancient Manuscripts" by Fabian Hollaus and Robert Sablatnig. Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141303
A. Adami, I. Cerato, E. d’Annibale, E. Demetrescu, D. Ferdani
In recent years, digital photogrammetry has enjoyed renewed popularity in the field of Cultural Heritage. This is due both to the relative cheapness of the instruments (a high-resolution camera, possibly a reflex with good lenses) and to new algorithms and software that have simplified its use, perhaps at the expense of the necessary knowledge of its principles. The 3D survey of the Mausoleum of Romulus, along the Via Appia Antica, within the European project 3DICONS, provided the opportunity to test different photogrammetric techniques, with the aim of verifying the results and evaluating their positive and negative aspects. In particular, two different approaches were applied: spherical photogrammetry and dense image matching. The first technique is based on traditional photogrammetric principles, applied to panoramic images instead of frame images. The second, more recent and very widespread, is inspired by traditional photogrammetry and computer vision. To allow a meaningful and correct comparison, a topographic survey network was set up for the Mausoleum so that all surveyed data share a single local reference system. The comparison was made using the point cloud acquired by laser scanner as a reference. In this paper, after a description of the funeral monument and its complexity, the two techniques are described in order to investigate their pros and cons, their algorithms and their application fields. The acquisition and processing stages are described in order to give all the elements necessary for the final judgement. At the end of the restitution and modelling process, the comparison takes into account many parameters: the scheme of image acquisition, the time required (on-site and in the laboratory), the hardware (for data acquisition and post-processing), the results that can be obtained (2D and 3D representations with texture) and the metric accuracy achieved.
Finally, we give some hints about different applications of these methods, above all concerning data visualization. For example, the Mausoleum can be explored by navigating the bubbles obtained by spherical photogrammetry.
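Comparisons against a laser scan reference of this kind are typically expressed as cloud-to-cloud distances. A brute-force numpy sketch of that metric (a KD-tree would replace the full distance matrix at real scan sizes; the function names are illustrative):

```python
import numpy as np

def cloud_to_cloud(eval_pts, ref_pts):
    """Distance from each evaluated point (e.g. a photogrammetric point)
    to its nearest neighbour in the reference cloud (e.g. the laser scan).
    Brute force: O(N*M) memory, fine only for small clouds."""
    d = np.linalg.norm(eval_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1)

def accuracy_report(eval_pts, ref_pts):
    """Summarise the deviation of a reconstruction from the reference."""
    d = cloud_to_cloud(eval_pts, ref_pts)
    return {"mean": d.mean(), "rms": np.sqrt((d ** 2).mean()), "max": d.max()}
```

Such per-point distances are what allows the metric accuracy of the two photogrammetric approaches to be ranked against each other.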
"Different Photogrammetric Approaches to 3D Survey of the Mausoleum of Romulus in Rome". Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141300
This positioning paper evaluates how well the current state of interactive storytelling, content recommendation, and Linked Data can increase the efficacy of knowledge transfer in the context of cultural heritage. It considers the design scope of various interactive storytelling systems and investigates how the Semantic Web fosters user satisfaction during explorative browsing by providing recommendations and related concepts. In conclusion, interactive storytelling systems have significant room for improvement in at least two aspects: 1. by telling a story that includes exhibits and employs their similarities and differences to describe the plot; and 2. by adapting not only the content but also genre-typical patterns to the individual user's taste. Furthermore, the background and world knowledge required for interactive storytelling is retrievable from the Linked Data Cloud.
"The Design Scope of Adaptive Storytelling in Virtual Museums" by Tilman Deuschel, Timm Heuss and Christian Broomfield. Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141308
Site-specific art is a concept that goes back to the beginnings of the human race: works of art were often created by artists taking into account not only their shape and appearance, but also the context in which they would be placed. For this reason, moving an artifact from its original placement (or changing what surrounds it) tends to diminish its impact and may weaken its potential. Site-specific art is a very powerful concept for contemporary artists as well. This paper focuses on the analysis of L.O.V.E., a sculpture by the controversial artist Maurizio Cattelan. Cattelan donated the sculpture to Milano under the condition that it should not be moved from its original place (in front of the Milano Stock Exchange). The aim of the paper is to use 3D reconstruction techniques to show and analyse the monument, stressing its relation with the context around it. A multi-view stereo matching campaign was performed to obtain an accurate reconstruction of the context; the photos provided by the community were then integrated into the reconstruction to show the "point of view" of the people. These data provide interesting indications about the aims of the artist, and they provide additional material for the interpretation of the work of art.
"Site-specific Art and 3D: an Example of Spatial Analysis and Reconstruction" by M. Dellepiane and M. Matteis. Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141301
We present MVE, the Multi-View Environment. MVE is an end-to-end multi-view geometry reconstruction system which takes photos of a scene as input and produces a surface triangle mesh as a result. The system covers structure-from-motion, multi-view stereo reconstruction, generation of extremely dense point clouds, and reconstruction of surfaces from point clouds. In contrast to most image-based geometry reconstruction approaches, our system focuses on the reconstruction of multi-scale scenes, an important aspect in many areas such as cultural heritage. It allows the reconstruction of large datasets containing some detailed regions at much higher resolution than the rest of the scene. Our system provides a graphical user interface for structure-from-motion reconstruction, visual inspection of images, depth maps, and rendering of scenes and meshes.
"MVE - A Multi-View Reconstruction Environment" by Simon Fuhrmann, Fabian Langguth and M. Goesele. Eurographics Workshop on Graphics and Cultural Heritage, 2014. doi:10.2312/gch.20141299
Large-scale digitization campaigns are simplifying access to a rapidly increasing number of images from cultural heritage. However, digitization alone is not sufficient to effectively open up these valuable resources. Retrieval and analysis within these datasets currently rely mainly on manual annotation and laborious preprocessing. This is not only a tedious task that rapidly becomes infeasible given the enormous data load; we also risk being biased toward seeing only what an annotator has previously focused on. Thus a lot of potential is wasted. One of the most prevalent tasks is discovering similar objects in a dataset to find relations within it. The majority of existing systems for this task detect similar objects using visual feature keypoints. While fast to process, these methods are limited to detecting only close duplicates because of their keypoint-based representation. In this work we propose a search method that can detect similar objects even if they exhibit considerable variability. Our procedure learns models of the appearance of objects and trains a classifier to find related instances. We address a central problem of such learning-based methods: the need for appropriate negative and positive training samples. To avoid a highly complicated hard-negative mining stage, we propose a pooling procedure for gathering generic negatives. Moreover, a bootstrap approach is presented to aggregate positive training samples. Comparison with existing search methods on cultural heritage benchmark problems demonstrates that our approach yields significantly improved detection performance. Moreover, we show examples of searching across different types of datasets, e.g., drafts and photographs.
"An Approach to Large Scale Interactive Retrieval of Cultural Heritage". Masato Takami, Peter Bell, B. Ommer. Eurographics Workshop on Graphics and Cultural Heritage, 2014-10-06. DOI: 10.2312/gch.20141307
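The combination of pooled generic negatives and bootstrapped positives described above can be sketched as a small ranking loop. This is a toy illustration only: it stands in a simple nearest-mean discriminant for the paper's actual classifier, assumes candidates are already represented as fixed-length feature vectors, and all function names are hypothetical.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mean(vectors):
    n = float(len(vectors))
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(positives, negatives):
    # Nearest-mean discriminant: score by projection onto mean(pos) - mean(neg).
    mp, mn = mean(positives), mean(negatives)
    return [p - q for p, q in zip(mp, mn)]

def bootstrap_search(seed, candidates, generic_negatives, rounds=2, top_k=2):
    """Rank candidate feature vectors by similarity to a seed object.

    Generic negatives are pooled once up front (no hard-negative mining);
    positives are aggregated by bootstrapping: after each round the
    top-scoring candidates join the positive set and the classifier
    is retrained.
    """
    positives = [seed]
    for _ in range(rounds):
        w = train(positives, generic_negatives)
        ranked = sorted(candidates, key=lambda c: dot(w, c), reverse=True)
        positives = [seed] + ranked[:top_k]   # bootstrap new positive samples
    w = train(positives, generic_negatives)
    return sorted(candidates, key=lambda c: dot(w, c), reverse=True)
```

Each bootstrap round broadens the positive set beyond the single seed, which is what lets the learned model tolerate more appearance variability than keypoint matching against the seed alone.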