Paper stitching technology can reconstruct a whole paper page from two sub-images scanned separately by a camera with a limited field of view. Traditional technology usually chooses a single globally optimal seam and stitches the two sub-images along it. These methods perform well on rigid objects, but when distortion is caused by the uneven placement of the paper, local contents of the two sub-images may be upside-down and their positions misaligned. Although some methods choose a matching seam on each sub-image, they rely on either local patch similarity or a global consistency constraint to obtain the two matching seams. However, local matching alone may lead to stitching failure when a wrong match occurs at a local patch, while the global constraint alone usually yields an inaccurate stitching result. After the two seams are obtained, traditional methods usually construct the whole image through a global transformation along the seams, and image deformation usually occurs at this stage. In this paper, we propose a robust estimation algorithm to obtain the matched seams in the sub-images, and stitch the sub-images with a maximum tolerance to overcome image deformation. The result is a whole image with a smooth stitching seam and minimal deformation. Experimental results show that this new paper stitching method produces better results than state-of-the-art methods, even under challenging scenarios such as large distortion and large contrast difference.
{"title":"Paper stitching using maximum tolerant seam under local distortions","authors":"Wei-liang Fan, Jun Sun, S. Naoi","doi":"10.1145/2644866.2644873","DOIUrl":"https://doi.org/10.1145/2644866.2644873","url":null,"abstract":"Paper stitching technology can reconstruct a whole paper page from two sub-images separately scanned from a camera with limited vision field.\u0000 Traditional technology usually chooses a global optimal seam, and the two sub-images are stitched along it. These methods perform well on the rigid object, but when distortion exists caused by the uneven placement of paper, local contents of two sub-images may be upside-down and their positions are misaligned. Although some methods choose two matching seams on each sub-image, they use either the local patch similarity or the global consistent constraint to get two matching seams. However, only the local matching may lead to stitching failure when wrong matching occurs at the local patch, while only the global constraint usually suffers from inaccuracy of the stitching result. After the two seams are obtained, the traditional methods usually construct the whole image through global transformation along the seams, and image deformation usually occurs in this stage.\u0000 In this paper, we proposed a robust estimation algorithm to get the matched seams in the sub-images, and stitched the sub-images with a maximum tolerance to conquer the image deformation. Finally a whole image with a smooth stitching seam and the minimum deformation is generated. Experimental results show that this new paper stitching method can produce better results than state-of-arts methods even under challenging scenarios such as large distortion and large contrast difference.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. 
ACM Symposium on Document Engineering","volume":"24 1","pages":"35-44"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90371942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modern interactive documents are complex applications that give the user the experience of editing a document as it will look in its final visual form. Sections of the document can be either editable or read-only, and artifacts such as images can be dynamically conformed to specific users. The components underlying interactive documents are dynamically bound variables and a complex rule engine that adapts the document as the user edits. Web interactive documents deliver this dynamic editing experience through the web by deploying the editor in a web browser. Document editors built into the web browser as native applications provide a higher-quality editing experience because the editor's look and feel is consistent with the browser's innate controls and navigation. The majority of traditional interactive documents have been developed using proprietary formats that are not compatible with today's web browser implementations, because they were originally intended as desktop applications. As a consequence, traditional interactive documents are not inherently web applications. This talk will provide an overview of the technical challenges faced in developing a web-intrinsic interactive document solution that simultaneously addresses the need for simple yet rich user editing features combined with the scalability and ease of deployment demanded by enterprises today. By way of example, I will introduce and demonstrate a new interactive document representation and deployment model. A prerequisite for such representations is that they enable documents to account for traditional document roles while still behaving as intrinsic web content for document interaction. Another is that they support conventional enterprise workflows and complex processes, e.g. approvals, audit, versioning, storage and archival.
{"title":"Web-intrinsic interactive documents","authors":"A. Wiley","doi":"10.1145/2644866.2644901","DOIUrl":"https://doi.org/10.1145/2644866.2644901","url":null,"abstract":"Modern interactive documents are complex applications that give the user the editing experience of editing a document as it will look in its final visual form. Sections of the document can be either editable, or read-only, and can dynamically conform artifacts like images to specific users. The components underlying interactive documents are dynamically bound variables and a complex rule engine for adapting the document as the user edits.\u0000 Web interactive documents deliver the dynamic editing experience through the web by using a web browser for deploying the editor. Document editors built-in the web browser as a native application provide a higher quality editing experience because the editor's look and feel is consistent with the web browser's innate controls and navigation.\u0000 The majority of traditional interactive documents have been developed using proprietary formats which are not compatible with today's web browser implementations because they were originally intended as desk-top applications. As a consequence, traditional interactive documents are not inherently web applications.\u0000 This talk will provide an overview of the technical challenges faced in developing a web-intrinsic interactive document solution that simultaneously addresses the need for simple, yet rich, user editing features combined with the scalability, and ease of deployment, demanded by enterprises today.\u0000 By way of example, I will introduce, and demonstrate, a new interactive document representation and deployment model. A prerequisite for such representations is that they enable documents to account for traditional document roles and still behave as intrinsic web content for document interaction. 
Another is that they are also able to support conventional enterprise workflows and complex processes, e.g. approvals, audit, versioning, storage and archival.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"13 1","pages":"85-86"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91013128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Ferilli, D. Grieco, Domenico Redavid, F. Esposito
Detecting the reading order among the layout components of a document page is fundamental to the effectiveness, or even the applicability, of subsequent content extraction steps. While in single-column documents the reading flow can be determined straightforwardly, in more complex documents the task may become very hard. This paper proposes an automatic strategy, based on abstract argumentation, for identifying the correct reading order of a document page's components. The technique is unsupervised and works on any kind of document, relying only on general assumptions about how humans behave when reading documents. Experimental results show that it is effective in complex cases and requires less background knowledge than previous solutions proposed in the literature.
{"title":"Abstract argumentation for reading order detection","authors":"S. Ferilli, D. Grieco, Domenico Redavid, F. Esposito","doi":"10.1145/2644866.2644883","DOIUrl":"https://doi.org/10.1145/2644866.2644883","url":null,"abstract":"Detecting the reading order among the layout components of a document's page is fundamental to ensure effectiveness or even applicability of subsequent content extraction steps. While in single-column documents the reading flow can be straightforwardly determined, in more complex documents the task may become very hard. This paper proposes an automatic strategy for identifying the correct reading order of a document page's components based on abstract argumentation. The technique is unsupervised, and works on any kind of document based only on general assumptions about how humans behave when reading documents. Experimental results show that it is effective in more complex cases, and requires less background knowledge, than previous solutions that have been proposed in the literature.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"47 1","pages":"45-48"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85315155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software documents are used to capture and communicate knowledge in software projects. It is important that this knowledge can be retrieved efficiently and effectively, to prevent wasted time and errors that negatively affect the quality of software. In this paper we investigate how software professionals search for knowledge in documentation. We studied the search behaviour of professionals in industry. Prior knowledge helps professionals to search software documents efficiently and effectively. However, it can also misguide professionals to an incomplete search.
{"title":"The impact of prior knowledge on searching in software documentation","authors":"K. A. D. Graaf, Peng Liang, A. Tang, H. Vliet","doi":"10.1145/2644866.2644878","DOIUrl":"https://doi.org/10.1145/2644866.2644878","url":null,"abstract":"Software documents are used to capture and communicate knowledge in software projects. It is important that this knowledge can be retrieved efficiently and effectively, to prevent wasted time and errors that negatively affect the quality of software. In this paper we investigate how software professionals search for knowledge in documentation. We studied the search behaviour of professionals in industry. Prior knowledge helps professionals to search software documents efficiently and effectively. However, it can also misguide professionals to an incomplete search.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"1 1","pages":"189-198"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89341401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A large number of document management problems would benefit from having the semantics of documents explicitly represented. However, manually assigning semantic descriptions to documents is labour intensive and error prone. At the same time, the manual generation of domain-specific taxonomies is not only labour intensive, but must also be repeated often as the domains themselves and their key concepts shift with time. In this workshop we focus on document content analysis and semantic enrichment to generate a layer of semantic description of documents that is useful for document management tasks, such as semantic information retrieval, conceptual organization and clustering of document collections for sense making, semantic expert profiling, and document recommender systems. The aim of the workshop is to bring together researchers and practitioners to discuss different perspectives on the problems, the challenges encountered in various application scenarios, and potential solutions. We have invited submissions in all areas of semantic analysis and enrichment of documents, such as automatic tagging, named entity disambiguation, semantic linking, interactive classification and clustering of documents, document summarization, curation and validation of the analysis process, generation of visualizations of document, author, and document collection semantics, user engagement in the semantic analysis process via suitable annotation and correction tools, and study of the trade-off between accuracy of the results and user effort. Submissions aimed at solving practical problems in specific application domains, including but not limited to digital libraries, legal document management, personalized online learning systems, and news media, are especially welcome. The workshop is timely and relevant to the Document Engineering community, as its focus is on semantically enriching documents and document collections to make them more accessible to their readers. The task is nontrivial due to the volume of text data and the rate at which text data is accumulated by companies, governments, and individuals.
{"title":"Semantic analysis of documents workshop (SemADoc): extended abstract","authors":"E. Milios, C. Domeniconi","doi":"10.1145/2644866.2644897","DOIUrl":"https://doi.org/10.1145/2644866.2644897","url":null,"abstract":"A large number of document management problems would benefit from having the semantics of documents explicitly represented. However, manually assigning semantic descriptions to documents is labour intensive and error prone. At the same time, the manual generation of domain specific taxonomies is not only labour intensive, but it also needs to be repeated often as the domains themselves and their key concepts shift with time. In this workshop we focus on document content analysis and semantic enrichment to generate a layer of semantic description of documents that is useful for document management tasks, such as semantic information retrieval, conceptual organization and clustering of document collections for sense making, semantic expert profiling, and document recommender systems. The aim of the workshop is to bring together researchers and practitioners, and discuss different perspectives on the problems, challenges encountered in various application scenarios, and potential solutions. We have invited submissions in all areas of semantic analysis and enrichment of documents, such as automatic tagging, named entity disambiguation, semantic linking, interactive classification and clustering of documents, document summarization, curation and validation of the analysis process, generation of visualizations of document, author and document collection semantics, user engagement in the semantic analysis process via suitable annotation and correction tools, and study of the trade off between accuracy of the results and user effort. 
Submissions aimed at solving practical problems in specific application domains, including but not limited to digital libraries, legal document management, personalized online learning systems, news media, are especially welcome. The workshop is timely and relevant to the Document Engineering community, as its focus is on semantically enriching documents and document collections, to make them more accessible to their readers. The task is nontrivial due to the volume of text data and the rate at which text data is accumulated by companies, government, and individuals.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"73 3 1","pages":"209-210"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83367378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general two-dimensional coding method is presented that allows recovery of data from only a cropped portion of the code, without knowledge of the carrier image. A description of both an encoding and a recovery system is provided. Our solution involves repeating a payload with a fixed number of bits, assigning one bit to every symbol in the image - whether that symbol is data-carrying or non-data-carrying - with the goal of guaranteeing recovery of all the bits in the payload. Because the technique is applied to images, for aesthetic reasons we use neither fiducials nor end-of-payload symbols. The beginning of the payload is determined by a phase code that is interleaved between groups of payload rows. The recovery system finds the phase row by evaluating candidate rows and ranks confidence based on the sample variance. The target application is data-bearing clustered-dot halftones, so special consideration is given to the resulting checkerboard subsampling. This application is examined via exhaustive simulations to quantify the likelihood of unrecoverable bits, and the bit redundancy, as a function of offset, crop window size, and phase code spacing.
{"title":"Circular coding with interleaving phase","authors":"R. Ulichney, Matthew Gaubatz, S. Simske","doi":"10.1145/2644866.2644888","DOIUrl":"https://doi.org/10.1145/2644866.2644888","url":null,"abstract":"A general two-dimensional coding method is presented that allows recovery of data based on only a cropped portion of the code, and without knowledge of the carrier image. A description of both an encoding and recovery system is provided. Our solution involves repeating a payload with a fixed number of bits, assigning one bit to every symbol in the image - whether that symbol is data carrying or non-data carrying - with the goal of guaranteeing recovery of all the bits in the payload. Because the technique is applied to images, for aesthetic reasons we do not use fiducials, and do not employ any end-of-payload symbols. The beginning of the payload is determined by a phase code that is interleaved between groups of payload rows. The recovery system finds the phase row by evaluating candidate rows, and ranks confidence based on the sample variance. The target application is data-bearing clustered-dot halftones, so special consideration is given to the resulting checkerboard subsampling. This particular application is examined via exhaustive simulations to quantify the likelihood of unrecoverable bits and bit redundancy as a function of offset, crop window size, and phase code spacing.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. 
ACM Symposium on Document Engineering","volume":"111 1","pages":"21-24"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88797234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods for authoring Web-based multimedia presentations have advanced considerably with the improvements provided by HTML5. However, authors of these multimedia presentations still lack expressive, declarative language constructs to encode synchronized multimedia scenarios. The SMIL Timesheets language is a serious contender to tackle this problem as it provides alternatives to associate a declarative timing specification to an HTML document. However, in its current form, the SMIL Timesheets language does not meet important requirements observed in Web-based multimedia applications. In order to tackle this problem, this paper presents the ActiveTimesheets engine, which extends the SMIL Timesheets language by providing dynamic client-side modifications, temporal linking and reuse of temporal constructs in fine granularity. All these contributions are demonstrated in the context of a Web-based annotation and extension tool for multimedia documents.
{"title":"ActiveTimesheets: extending web-based multimedia documents with dynamic modification and reuse features","authors":"D. Martins, M. G. Pimentel","doi":"10.1145/2644866.2644877","DOIUrl":"https://doi.org/10.1145/2644866.2644877","url":null,"abstract":"Methods for authoring Web-based multimedia presentations have advanced considerably with the improvements provided by HTML5. However, authors of these multimedia presentations still lack expressive, declarative language constructs to encode synchronized multimedia scenarios. The SMIL Timesheets language is a serious contender to tackle this problem as it provides alternatives to associate a declarative timing specification to an HTML document. However, in its current form, the SMIL Timesheets language does not meet important requirements observed in Web-based multimedia applications. In order to tackle this problem, this paper presents the ActiveTimesheets engine, which extends the SMIL Timesheets language by providing dynamic client-side modifications, temporal linking and reuse of temporal constructs in fine granularity. All these contributions are demonstrated in the context of a Web-based annotation and extension tool for multimedia documents.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"63 1","pages":"3-12"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86033359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One common use for interactive whiteboards (IWBs) is to mark up content provided from a connected laptop. Typically a marking layer is provided that is independent of the laptop content, which leads to problems when the laptop content changes while the strokes in the markup layer do not. The LiveStroke prototype described in this document uses computer vision techniques to associate the marks with the image of the underlying laptop content. For instance, if marks are made on the first page of a document, those marks disappear when the laptop user scrolls to a different page, and reappear in the right location on the page when the user returns to the first page. While we have integrated these techniques with interactive whiteboards, they are also applicable to screen sharing with mobile touch devices and projectors.
{"title":"Connecting content and annotations with livestroke","authors":"M. Gormish, J. Barrus","doi":"10.1145/2644866.2644884","DOIUrl":"https://doi.org/10.1145/2644866.2644884","url":null,"abstract":"One common use for interactive whiteboards (IWBs) is to mark up content provided from a connected laptop. Typically a marking layer is provided which is independent of the laptop content. This leads to problems when the laptop content changes while the strokes in the mark up layer do not. The LiveStroke prototype described in this document uses computer vision techniques to associate the marks with the image of the underlying content from the laptop. For instance, if marks are made on the first page of a document, those marks disappear when the laptop user scrolls to a different page. The marks reappear in the right location on the page when the user returns to the first page. While we have integrated these techniques with interactive whiteboards the techniques are also applicable to screen sharing with mobile touch devices and projectors.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"107 1","pages":"131-134"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81485781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe Berkeley Prosopography Services (BPS), a new set of tools for prosopography - the identification of individuals and the study of their interactions - in support of humanities research. Prosopography is an example of "big data" in the humanities, characterized not by the size of the datasets, but by the way that computational and data-driven methods can transform scholarly workflows. BPS is based upon re-usable infrastructure, supporting generalized web services for corpus management, social network analysis, and visualization. The BPS disambiguation model is a formal implementation of the traditional heuristics used by humanists, and supports plug-in rules for adaptation to a wide range of domain corpora. A workspace model supports exploratory research and collaboration. We contrast the BPS model of configurable heuristic rules with other approaches to automated text analysis, and explain how our model facilitates interpretation by humanist researchers. We describe the significance of the BPS assertion model, in which researchers assert conclusions or possibilities, allowing them to override automated inference, to explore ideas in what-if scenarios, and to formally publish and subscribe to asserted annotations among colleagues and/or with students. We present an initial evaluation of researchers' experience using the tools to study corpora of cuneiform tablets, and describe plans to expand the application of the tools to a broader range of corpora.
{"title":"Humanist-centric tools for big data: berkeley prosopography services","authors":"P. Schmitz, L. Pearce","doi":"10.1145/2644866.2644870","DOIUrl":"https://doi.org/10.1145/2644866.2644870","url":null,"abstract":"In this paper, we describe Berkeley Prosopography Services (BPS), a new set of tools for prosopography - the identification of individuals and study of their interactions - in support of humanities research. Prosopography is an example of \"big data\" in the humanities, characterized not by the size of the datasets, but by the way that computational and data-driven methods can transform scholarly workflows. BPS is based upon re-usable infrastructure, supporting generalized web services for corpus management, social network analysis, and visualization. The BPS disambiguation model is a formal implementation of the traditional heuristics used by humanists, and supports plug-in rules for adaptation to a wide range of domain corpora. A workspace model supports exploratory research and collaboration. We contrast the BPS model of configurable heuristic rules to other approaches for automated text analysis, and explain how our model facilitates interpretation by humanist researchers. We describe the significance of the BPS assertion model in which researchers assert conclusions or possibilities, allowing them to override automated inference, to explore ideas in what-if scenarios, and to formally publish and subscribe-to asserted annotations among colleagues, and/or with students. We present an initial evaluation of researchers' experience using the tools to study corpora of cuneiform tablets, and describe plans to expand the application of the tools to a broader range of corpora.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. 
ACM Symposium on Document Engineering","volume":"78 1 1","pages":"179-188"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78290101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gioele Barabucci, Uwe M. Borghoff, A. Iorio, Sonja Maier, E. Munson
With collaborative tools becoming more and more widespread, users have grown accustomed to features like automatic versioning of their documents and the visualization of changes made by other users. The research community, however, sees the current state of these tools as seriously lackluster. The second edition of the DChanges workshop focuses on these shortcomings, introducing new ways to produce version-aware documents and to merge changes from multiple sources. Other aspects - in particular, the standardization of formats for tracking changes - are discussed as well. The gathering is also an occasion to follow up on the projects discussed or presented at DChanges 2013, and to foster new collaborations among researchers.
{"title":"Document changes: modeling, detection, storage and visualization (DChanges 2014)","authors":"Gioele Barabucci, Uwe M. Borghoff, A. Iorio, Sonja Maier, E. Munson","doi":"10.1145/2644866.2644896","DOIUrl":"https://doi.org/10.1145/2644866.2644896","url":null,"abstract":"With collaborative tools getting more and more widespread, users have started to become accustomized to features like automatic versioning of their documents or the visualization of changes made by other users. The research community, however, sees that the state of the current tools is seriously lack lusting. The second edition of the DChanges workshop focuses on these shortcomings, introducing new ways to produce version-aware documents and merge changes from multiple sources. Other aspects - in particular, the standardization of formats for tracking changes - are discussed, too.\u0000 The gathering is also an occasion to follow up on the projects that were discussed or presented during DChanges 2013, and to foster new collaborations among researchers.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"256 1","pages":"207-208"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73317657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}