Transfer Learning for the Visual Arts: The Multi-modal Retrieval of Iconclass Codes
Pub Date: 2023-06-24, DOI: https://dl.acm.org/doi/10.1145/3575865
Nikolay Banar, Walter Daelemans, Mike Kestemont
Iconclass is an iconographic thesaurus that is widely used in the digital heritage domain to describe the subjects depicted in artworks. Each subject is assigned a unique descriptive code, which has a corresponding textual definition. The assignment of Iconclass codes is a challenging task for computational systems, due to the large number of available labels compared to the limited amount of training data. Transfer learning has become a common strategy to overcome such a data shortage. In deep learning, transfer learning consists of fine-tuning the weights of a deep neural network for a downstream task. In this work, we present a deep retrieval framework that can be fully fine-tuned for the task under consideration. Our work is based on a recent approach to this task, which already yielded state-of-the-art performance, although it could not yet be fully fine-tuned. This approach exploits the multi-linguality and multi-modality that are inherent to digital heritage data. Our framework jointly processes multiple input modalities, namely textual and visual features. We extract the textual features from the artwork titles in multiple languages, whereas the visual features are derived from photographic reproductions of the artworks. The definitions of the Iconclass codes, which contain useful textual information, are used as target labels instead of the codes themselves. As our main contribution, we demonstrate that our approach outperforms the state of the art by a large margin. In addition, our approach is superior to the M3P feature extractor and outperforms the multi-lingual CLIP in most experiments, owing to the better quality of the visual features. Our out-of-domain and zero-shot experiments show poor results and demonstrate that Iconclass retrieval remains a challenging task. We make our source code and models publicly available to support heritage institutions in the further enrichment of their digital collections.
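As an illustration of the kind of retrieval setup the abstract describes, the sketch below fuses pre-extracted multi-lingual title features and visual features into a joint embedding space and scores them against encoded Iconclass definition texts. It is a minimal sketch under stated assumptions, not the authors' implementation: the feature dimensions, the fusion MLP, the contrastive objective, and the random placeholder tensors are all illustrative choices.

```python
"""Minimal sketch of multi-modal retrieval of Iconclass codes.

Hypothetical stand-ins: feature dimensions, the fusion MLP, and the in-batch
contrastive objective are illustrative choices, not the authors' exact setup.
"""
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalRetriever(nn.Module):
    """Projects fused (title, image) features and code definitions into a shared space."""

    def __init__(self, text_dim=768, image_dim=2048, label_dim=768, joint_dim=256):
        super().__init__()
        # Fuse multi-lingual title features with visual features from the reproduction.
        self.query_proj = nn.Sequential(
            nn.Linear(text_dim + image_dim, 512), nn.ReLU(), nn.Linear(512, joint_dim)
        )
        # Encode the textual definition of each Iconclass code as the retrieval target.
        self.label_proj = nn.Linear(label_dim, joint_dim)

    def forward(self, title_feats, image_feats, label_feats):
        query = self.query_proj(torch.cat([title_feats, image_feats], dim=-1))
        labels = self.label_proj(label_feats)
        # Cosine similarity between each artwork and each candidate code definition.
        return F.normalize(query, dim=-1) @ F.normalize(labels, dim=-1).T


if __name__ == "__main__":
    torch.manual_seed(0)
    batch, n_codes = 8, 100
    model = MultiModalRetriever()
    sims = model(torch.randn(batch, 768), torch.randn(batch, 2048), torch.randn(n_codes, 768))
    # Toy in-batch supervision: artwork i is paired with code definition i.
    loss = F.cross_entropy(sims / 0.07, torch.arange(batch))
    print(sims.shape, float(loss))
```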
{"title":"Transfer Learning for the Visual Arts: The Multi-modal Retrieval of Iconclass Codes","authors":"Nikolay Banar, Walter Daelemans, Mike Kestemont","doi":"https://dl.acm.org/doi/10.1145/3575865","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3575865","url":null,"abstract":"<p>Iconclass is an iconographic thesaurus, which is widely used in the digital heritage domain to describe subjects depicted in artworks. Each subject is assigned a unique descriptive code, which has a corresponding textual definition. The assignment of Iconclass codes is a challenging task for computational systems, due to the large number of available labels in comparison to the limited amount of training data available. Transfer learning has become a common strategy to overcome such a data shortage. In deep learning, transfer learning consists in fine-tuning the weights of a deep neural network for a downstream task. In this work, we present a deep retrieval framework, which can be fully fine-tuned for the task under consideration. Our work is based on a recent approach to this task, which already yielded state-of-the-art performance, although it could not be fully fine-tuned yet. This approach exploits the multi-linguality and multi-modality that is inherent to digital heritage data. Our framework jointly processes multiple input modalities, namely, textual and visual features. We extract the textual features from the artwork titles in multiple languages, whereas the visual features are derived from photographic reproductions of the artworks. The definitions of the Iconclass codes, containing useful textual information, are used as target labels instead of the codes themselves. As our main contribution, we demonstrate that our approach outperforms the state-of-the-art by a large margin. In addition, our approach is superior to the M<sup>3</sup>P feature extractor and outperforms the multi-lingual CLIP in most experiments due to the better quality of the visual features. Our out-of-domain and zero-shot experiments show poor results and demonstrate that the Iconclass retrieval remains a challenging task. We make our source code and models publicly available to support heritage institutions in the further enrichment of their digital collections.</p>","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"23 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a Dual-level Facial Expression Evaluation System for Performers Using Geometric Features and Petri Nets
Pub Date: 2023-06-24, DOI: https://dl.acm.org/doi/10.1145/3583557
Manjeeta R. Kale, Priti P. Rege, Radhika D. Joshi
The existing methods of Facial Expression Recognition (FER) primarily analyze six basic expressions, namely surprise, happiness, anger, sadness, fear, and disgust. The Indian performing arts use three more well-defined expressions: peaceful, proud, and erotic. This study proposes an intelligent dual-level expression evaluation system that classifies performance-specific expressions into nine classes, assigns an intensity level to the expression, and suggests modifications to the user for the precise exhibition of an expression. At decision level-1 of the dual-level system, an 11-state model is designed to classify the nine expressions. The model is verified using a Colored Petri Net, which helps analyze the rules used for the classification. Decision level-1 is also implemented using an input feature database and an SVM classifier, which yields 95.77% accuracy. Further, at decision level-2, an SVM is used to assign an intensity level to the correctly classified images. In the case of incorrectly exhibited expressions, feedback is provided to the user about the incorrect facial component state. An application-specific image dataset is used for the present study, and a qualitative comparison with other FER approaches is also carried out. With the increasing popularity of Indian classical dance in Western and Asian countries, the dual-level system enables learners of the performing arts to practice, evaluate, and improve their expression skills.
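A minimal two-stage sketch of the decision-level idea is given below: one SVM classifies the expression from geometric features, and a second SVM assigns an intensity level to the correctly classified samples. The synthetic features, labels, and intensity levels are placeholders, and the paper's 11-state model and Petri-net verification are not reproduced.

```python
"""Two-stage SVM sketch mirroring the dual-level idea: level-1 classifies the
expression, level-2 assigns an intensity level. All data here is synthetic."""
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 12))          # e.g., 12 geometric distances/angles per face
y_expr = rng.integers(0, 9, size=900)   # nine expression classes
y_int = rng.integers(0, 3, size=900)    # e.g., low / medium / high intensity

X_tr, X_te, ye_tr, ye_te, yi_tr, yi_te = train_test_split(X, y_expr, y_int, random_state=0)

# Decision level-1: expression class from geometric features.
level1 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, ye_tr)
pred_expr = level1.predict(X_te)

# Decision level-2: intensity, assigned only to correctly classified images.
level2 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, yi_tr)
correct = pred_expr == ye_te
print("level-1 accuracy:", correct.mean())
print("level-2 intensity predictions for correct cases:", level2.predict(X_te[correct])[:5])
```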
{"title":"Designing a Dual-level Facial Expression Evaluation System for Performers Using Geometric Features and Petri Nets","authors":"Manjeeta R. Kale, Priti P. Rege, Radhika D. Joshi","doi":"https://dl.acm.org/doi/10.1145/3583557","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3583557","url":null,"abstract":"<p>The existing methods of Facial Expression Recognition (FER) primarily analyze six basic expressions, namely, surprise, happiness, anger, sadness, fear, and disgust. The Indian performing arts use three more well-defined expressions—peaceful, proud, and erotic. This study proposes an intelligent dual-level expression evaluation system that classifies performance-specific expressions into nine classes, assigns intensity level to the expression, and suggests modifications to the user for precise exhibition of an expression. At decision level-1 of a dual-level system, an 11-state model is designed to classify the nine expressions. The model is verified using the Colored Petri Net that helps analyze the rules used for the classification. Decision level-1 is also implemented using input feature database and SVM classifier, which yields 95.77% accuracy. Further, at decision level-2, SVM is used to assign an intensity level to the correctly classified images. In case of incorrectly exhibited expressions, feedback is provided to the user about the incorrect facial component state. The application-specific image dataset is used for the present study. The qualitative comparison with the other FER approaches is also carried out. With the increasing popularity of Indian classical dance in Western and Asian countries, the dual-level system enables learners of performing arts to practice, evaluate, and improvise their expression skills.</p>","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"73 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Temporal Image Analysis for Preventive Conservation of Historical Musical Instruments
Pub Date: 2023-06-24, DOI: https://dl.acm.org/doi/10.1145/3575866
Alireza Rezaei, Emanuel Aldea, Piercarlo Dondi, Sylvie Le Hégarat-Mascle, Marco Malagodi
Artworks need to be constantly monitored to check their state of conservation and to quickly spot the possible presence of alterations or damage. Preventive conservation is the set of practices employed to reach this goal. Unfortunately, it generally results in a cumbersome process involving multiple analytical techniques. Consequently, methods able to provide a quick preliminary examination of the artworks (e.g., optical monitoring) seem very promising for streamlining preventive conservation procedures. We are especially interested in the study of historical wooden musical instruments, a kind of artwork particularly subject to mechanical wear, since they are both held in museums and occasionally played in concerts. Our primary goal is to detect possible altered regions on the surface of the instruments early, and thus provide experts with precise indications of where to apply more in-depth examinations to check for potential damage. In this work, we propose an optical monitoring method based on the a-contrario probabilistic framework. Tests were conducted on the “Violins UVIFL imagery” dataset, a collection of UV-induced fluorescence image sequences of artificially altered wood samples and violins. The obtained results show the robustness of the proposed method and its capability to properly detect altered regions while rejecting noise.
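The a-contrario framework flags a detection as meaningful when the observed evidence would be very unlikely under a pure-noise background model, usually expressed as a Number of False Alarms (NFA). The sketch below applies that idea to a toy binary change map between two acquisitions; the window size, noise probability, and NFA threshold are illustrative assumptions rather than the parameters used in the paper.

```python
"""Minimal a-contrario style detection sketch: flag a window as 'altered' when the
number of changed pixels inside it is very unlikely under a background noise model."""
import numpy as np
from scipy.stats import binom

def nfa(k, n, p, n_tests):
    """Number of False Alarms: n_tests * P[Bin(n, p) >= k]."""
    return n_tests * binom.sf(k - 1, n, p)

rng = np.random.default_rng(0)
p_noise = 0.05                                   # expected change rate under H0 (no alteration)
diff = rng.random((64, 64)) < p_noise            # binary change map between two acquisitions
diff[20:28, 30:38] = True                        # simulate a genuinely altered region

win = 8
windows = [(i, j) for i in range(0, 64, win) for j in range(0, 64, win)]
for i, j in windows:
    k = int(diff[i:i + win, j:j + win].sum())
    score = nfa(k, win * win, p_noise, len(windows))
    if score < 1.0:                              # meaningful detection: < 1 false alarm expected
        print(f"altered window at ({i},{j}): k={k}, NFA={score:.2e}")
```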
{"title":"Multi-Temporal Image Analysis for Preventive Conservation of Historical Musical Instruments","authors":"Alireza Rezaei, Emanuel Aldea, Piercarlo Dondi, Sylvie Le Hégarat-Mascle, Marco Malagodi","doi":"https://dl.acm.org/doi/10.1145/3575866","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3575866","url":null,"abstract":"<p>Artworks need to be constantly monitored to check their state of conservation and to quickly spot the eventual presence of alterations or damages. Preventive conservation is the set of practices employed to reach this goal. Unfortunately, this results generally in a cumbersome process involving multiple analytical techniques. Consequently, methods able to provide a quick preliminary examination of the artworks (e.g., optical monitoring) seem very promising to streamline preventive conservation procedures. We are especially interested in the study of historical wood musical instruments, a kind of artwork particularly subject to mechanical wear since they are both held in museums and also occasionally played in concerts. Our primary goal is to detect possible altered regions on the surface of the instruments early and thus provide the experts some precise indications on where to apply more in-depth examinations to check for potential damages. In this work, we propose an optical monitoring method based on the a-contrario probabilistic framework. Tests were conducted on the “Violins UVIFL imagery” dataset, a collection of UV-induced fluorescence image sequences of artificially altered wood samples and violins. Obtained results showed the robustness of the proposed method and its capability to properly detect altered regions while rejecting noise.</p>","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"1 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Robust Monitoring Platform for Rural Cultural and Natural Heritage
Pub Date: 2023-06-24, DOI: https://dl.acm.org/doi/10.1145/3593430
Francisco Barrientos, Aitziber Egusquiza, Claudia De Luca, Simona Tondelli, Pedro Martín-Lerones, David Olmedo, John Martin, Irina Pavlova, Jaime Gómez-García-Bermejo, Eduardo Zalama Casanova
Rural areas in Europe represent outstanding examples of Cultural and Natural Heritage (CNH) that could be used as a valuable asset for social and economic development. This article describes the process of developing a monitoring platform based on Key Performance Indicators (KPIs) and implemented in six rural areas around Europe. The goal of this monitoring system is to provide evidence of the role of CNH in rural areas as a driver for sustainable growth. Several data collection procedures are described, including regular, non-regular, and co-monitoring. To combine the selected cross-thematic and multi-scale KPIs, weights have been assigned to the indicators according to the knowledge provided by domain experts, using group decision-making techniques. A detailed description of the dashboards developed for the monitoring platform, and of all the information gathered, is included. Several dashboards have been designed, focusing on KPI values and their evolution.
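As a simple illustration of combining expert-weighted KPIs into a composite score, the sketch below averages the weight judgments of several experts (a crude stand-in for the group decision-making technique mentioned above) and applies the result to normalized indicator values. The KPI names, values, and weights are hypothetical.

```python
"""Sketch of combining cross-thematic KPIs into a composite score. Averaging the
experts' weight judgments is a simple stand-in for group decision-making."""
import numpy as np

kpis = ["visitors", "local_employment", "heritage_condition"]   # hypothetical indicators
# Each row: one expert's weight judgment over the KPIs.
expert_weights = np.array([
    [0.50, 0.30, 0.20],
    [0.40, 0.40, 0.20],
    [0.45, 0.25, 0.30],
])
weights = expert_weights.mean(axis=0)
weights /= weights.sum()                       # keep the combined weights normalized

# KPI values already normalized to [0, 1] for one pilot area and one period.
values = np.array([0.62, 0.48, 0.80])
print(dict(zip(kpis, weights.round(3))), "composite:", float(values @ weights))
```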
{"title":"A Robust Monitoring Platform for Rural Cultural and Natural Heritage","authors":"Francisco Barrientos, Aitziber Egusquiza, Claudia De Luca, Simona Tondelli, Pedro Martín-Lerones, David Olmedo, John Martin, Irina Pavlova, Jaime Gómez-García-Bermejo, Eduardo Zalama Casanova","doi":"https://dl.acm.org/doi/10.1145/3593430","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3593430","url":null,"abstract":"<p>Rural areas in Europe represent outstanding examples of Cultural and Natural Heritage (CNH) that could be used as a valuable asset for social and economic development. This article describes the process for developing a monitoring platform based on Key Performance Indicators (KPI) and implemented in six rural areas around Europe. The goal of this monitoring system is to provide evidence of the role of CNH in rural areas as a driver for sustainable growth. Several data collection procedures are described, including regular, non-regular, and co-monitoring. To combine the selected cross-thematic and multi-scale KPIs, weights have been assigned to indicators, according to the knowledge provided by domain experts and using group decision-making techniques. A detailed description of the dashboards developed for the monitoring platform, and all the information gathered is included. Several dashboards have been designed focusing on KPI values and their evolution.</p>","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"2 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrinsic shape analysis in archaeology: A case study on ancient sundials
Pub Date: 2023-05-30, DOI: https://doi.org/10.1145/3606698
M. Hanik, B. Ducke, H. Hege, Friederike Fless, C. V. Tycowicz
The fact that the physical shapes of man-made objects are subject to overlapping influences—such as technological, economic, geographic, and stylistic progressions—holds great information potential. On the other hand, it is also a major analytical challenge to uncover these overlapping trends and to disentangle them in an unbiased way. This paper explores a novel mathematical approach to extracting archaeological insights from ensembles of similar artifact shapes. We show that by considering all shape information in a find collection, it is possible to identify shape patterns that would be difficult to discern by considering the artifacts individually or by classifying shapes into predefined archaeological types and analyzing the associated distinguishing characteristics. Recently, series of high-resolution digital representations of artifacts have become available. Such data sets enable the application of extremely sensitive and flexible methods of shape analysis. We explore this potential on a set of 3D models of ancient Greek and Roman sundials, with the aim of providing alternatives to the traditional archaeological method of “trend extraction by ordination” (typology). In the proposed approach, each 3D shape is represented as a point in a shape space—a high-dimensional, curved, non-Euclidean space. Proper consideration of its mathematical properties reduces bias in data analysis and thus improves analytical power. By performing regression in shape space, we find that for Roman sundials, the bend of the shadow-receiving surface changes with the latitude of the installation location. This suggests that, apart from the inscribed hour lines, a sundial’s shape was also adjusted to the place of installation. As an example of more advanced inference, we use the identified trend to infer the latitude at which a sundial, whose location of installation is unknown, was placed. We also derive a novel method for differentiated morphological trend assertion, building upon and extending the theory of geometric statistics and shape analysis. Specifically, we present a regression-based method for statistical normalization of shapes that serves as a means of disentangling parameter-dependent effects (trends) and unexplained variability. In addition, we show that this approach is robust to noise in the digital reconstructions of the artifact shapes.
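The latitude inference mentioned above can be illustrated with a deliberately simplified, Euclidean stand-in: fit a regression from known installation latitudes to a scalar shape descriptor (here, a synthetic "bend" value) and invert the fitted trend for an unprovenanced sundial. The real method regresses intrinsically in a curved shape space; the data and descriptor below are synthetic placeholders.

```python
"""Simplified Euclidean stand-in for regression-based latitude inference from shape.
The descriptor, trend, and noise level are synthetic, not the paper's shape-space model."""
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
latitude = rng.uniform(30, 48, size=40)                       # known installation latitudes
bend = 0.02 * latitude + 0.1 + rng.normal(0, 0.02, size=40)   # synthetic shape descriptor

# Fit the trend: shape descriptor as a function of latitude.
trend = LinearRegression().fit(latitude.reshape(-1, 1), bend)

# Invert the fitted trend to estimate the latitude of an unprovenanced sundial.
observed_bend = 0.95
slope, intercept = trend.coef_[0], trend.intercept_
estimated_latitude = (observed_bend - intercept) / slope
print(f"estimated latitude: {estimated_latitude:.1f} degrees")
```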
{"title":"Intrinsic shape analysis in archaeology: A case study on ancient sundials","authors":"M. Hanik, B. Ducke, H. Hege, Friederike Fless, C. V. Tycowicz","doi":"10.1145/3606698","DOIUrl":"https://doi.org/10.1145/3606698","url":null,"abstract":"The fact that the physical shapes of man-made objects are subject to overlapping influences—such as technological, economic, geographic, and stylistic progressions—holds great information potential. On the other hand, it is also a major analytical challenge to uncover these overlapping trends and to disentagle them in an unbiased way. This paper explores a novel mathematical approach to extract archaeological insights from ensembles of similar artifact shapes. We show that by considering all shape information in a find collection, it is possible to identify shape patterns that would be difficult to discern by considering the artifacts individually or by classifying shapes into predefined archaeological types and analyzing the associated distinguishing characteristics. Recently, series of high-resolution digital representations of artifacts have become available. Such data sets enable the application of extremely sensitive and flexible methods of shape analysis. We explore this potential on a set of 3D models of ancient Greek and Roman sundials, with the aim of providing alternatives to the traditional archaeological method of “trend extraction by ordination” (typology). In the proposed approach, each 3D shape is represented as a point in a shape space—a high-dimensional, curved, non-Euclidean space. Proper consideration of its mathematical properties reduces bias in data analysis and thus improves analytical power. By performing regression in shape space, we find that for Roman sundials, the bend of the shadow-receiving surface of the sundials changes with the latitude of the location. This suggests that, apart from the inscribed hour lines, also a sundial’s shape was adjusted to the place of installation. As an example of more advanced inference, we use the identified trend to infer the latitude at which a sundial, whose location of installation is unknown, was placed. We also derive a novel method for differentiated morphological trend assertion, building upon and extending the theory of geometric statistics and shape analysis. Specifically, we present a regression-based method for statistical normalization of shapes that serves as a means of disentangling parameter-dependent effects (trends) and unexplained variability. In addition, we show that this approach is robust to noise in the digital reconstructions of the artifact shapes.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"30 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82998628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Cultural Items in Space: The Impact of Contextual Information on Presenting Digital Cultural Items
Pub Date: 2023-05-25, DOI: https://doi.org/10.1145/3594725
Christopher Ferraris, Tom Davis, C. Gatzidis, C. Hargood
Cultural heritage practitioners continue to engage with ever-changing technological opportunities, and digital cultural items (DCIs) offer the potential for engaging interactive experiences. As DCIs become more prevalent, we are motivated to seek new presentation opportunities from the medium and to understand its affordances with regard to contextual information. In this publication, through a series of Speak Aloud tasks with participants (n = 15), we explore how contextual information can improve user experiences with DCIs. The study’s results demonstrate that the inclusion of contextual information when presenting a DCI can improve a visitor’s understanding of the DCI’s size and scale, as well as its perceived realism. Moreover, we observe that contextual information, and its recommended addition, supports the generation of a narrative by the visitor audience. In conclusion, we advise on how contextual information can improve the relationship between a visitor and a DCI, towards interacting with a DCI in a manner very similar to interacting with its analogue counterpart.
{"title":"Digital Cultural Items in Space: The Impact of Contextual Information on Presenting Digital Cultural Items","authors":"Christopher Ferraris, Tom Davis, C. Gatzidis, C. Hargood","doi":"10.1145/3594725","DOIUrl":"https://doi.org/10.1145/3594725","url":null,"abstract":"Cultural heritage practitioners continue to engage with ever-changing technological opportunities and digital cultural items (DCIs) offer the potential for engaging interactive experiences. As DCIs become more prevalent, we are motivated to seek new presentation opportunities from the medium and understand its affordances with regards to contextual information. In this publication, through a series of Speak Aloud tasks with (n=15) participants, we explore how contextual information can improve user experiences with DCIs. The aforementioned study’s results demonstrate that the inclusion of contextual information when presenting a DCI can, in fact, improve a visitor’s understanding of a DCI’s size and scale plus also the perceived realism of a DCI. Moreover, we observe that contextual information, and its recommended addition, supports the generation of a narrative by the visitor audience. In conclusion, we advise on how contextual information can improve the relationship between a visitor and a DCI, towards interacting with a DCI in a manner very similar to that of its analogue counterpart.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"1 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82334231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MaDiH (): A Transnational Approach to Building Digital Cultural Heritage Capacity
Pub Date: 2023-05-18, DOI: https://doi.org/10.1145/3513261
J. Smithies, Pascal Flohr, F. Bala'awi, Sahar Idwan, Carol Palmer, Alessandra Esposito, Shatha Mubaideen, Shaher Moh'd Rababeh
Approaches used to design, build, and maintain digital cultural heritage communities and infrastructure in Europe, North America, and Australasia need to be tailored to regional contexts such as the Middle East and North Africa. Cultural and political differences, inherited issues with technical infrastructure and funding, and the need to build trusting and healthy working relationships across national boundaries make this challenging. The framework and roadmap used during the MaDiH (): Mapping Digital Cultural Heritage in Jordan project (2019–2021) provide one of several possible models for such work, as well as highlighting its myriad challenges and opportunities.
{"title":"MaDiH (): A Transnational Approach to Building Digital Cultural Heritage Capacity","authors":"J. Smithies, Pascal Flohr, F. Bala'awi, Sahar Idwan, Carol Palmer, Alessandra Esposito, Shatha Mubaideen, Shaher Moh'd Rababeh","doi":"10.1145/3513261","DOIUrl":"https://doi.org/10.1145/3513261","url":null,"abstract":"Approaches used to design, build, and maintain digital cultural heritage communities and infrastructure in Europe, North America, and Australasia need to be tailored to regional contexts such as the Middle East and North Africa. Cultural and political differences, inherited issues with technical infrastructure and funding, and the need to build trusting and healthy working relationships across national boundaries makes this challenging. The framework and roadmap used during the MaDiH (): Mapping Digital Cultural Heritage in Jordan project (2019–2021) provides one of several possible models for such work, as well as highlighting its myriad challenges and opportunities.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"166 1","pages":"1 - 14"},"PeriodicalIF":2.4,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85004160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
(Mis)matching Metadata: Improving Accessibility in Digital Visual Archives through the EyCon Project
Pub Date: 2023-05-18, DOI: https://doi.org/10.1145/3594726
Katherine Aske, Marina Giardinetti
Discussing the current AHRC/LABEX-funded EyCon (Early Conflict Photography 1890-1918 and Visual AI) project, this article considers potentially problematic metadata and how it affects the accessibility of digital visual archives. The authors discuss how metadata creation and enrichment could be improved through Artificial Intelligence (AI) tools and explore the practical applications of AI-reliant tools to analyse a large corpus of photographs and create or enrich metadata. The volume of visual data created by digitisation efforts is not always matched by the creation of contextual metadata, which is a major problem for archival institutions and their users, as metadata directly affects the accessibility of digitised records. Moreover, the scale of digitisation efforts means it is often beyond the scope of archivists and other record managers to individually assess problematic or sensitive images and their metadata. Additionally, existing metadata for photographic and visual records present issues in terms of outdated descriptions or inconsistent contextual information. As more attention is given to the creation of accessible digital content within archival institutions, we argue that too little is being given to the enrichment of record data. In this article, the authors ask how new tools can address incomplete or inaccurate metadata and improve the transparency and accessibility of digital visual records.
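To make the idea of AI-assisted metadata enrichment concrete, the sketch below runs a generic pretrained image classifier over a folder of digitised photographs and writes candidate tags to a CSV for archivist review. This is a hypothetical workflow, not the EyCon pipeline: the model, folder path, and output format are assumptions, and a general-purpose classifier would at best supply rough candidate labels for historical conflict photography.

```python
"""Hypothetical sketch of AI-assisted metadata enrichment: propose candidate tags for
digitised photographs with a pretrained classifier, to be vetted by an archivist."""
import csv
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import ResNet50_Weights, resnet50

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

rows = []
for path in sorted(Path("scans").glob("*.jpg")):             # hypothetical folder of scans
    with torch.no_grad():
        logits = model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
    top = torch.topk(logits.softmax(dim=-1), k=3)
    tags = [(categories[i], float(p)) for p, i in zip(top.values[0], top.indices[0])]
    rows.append({"file": path.name, "candidate_tags": tags})

with open("candidate_metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "candidate_tags"])
    writer.writeheader()
    writer.writerows(rows)                                    # for review, not auto-publication
```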
{"title":"(Mis)matching Metadata: Improving Accessibility in Digital Visual Archives through the EyCon Project","authors":"Katherine Aske, Marina Giardinetti","doi":"10.1145/3594726","DOIUrl":"https://doi.org/10.1145/3594726","url":null,"abstract":"Discussing the current AHRC/LABEX-funded EyCon (Early Conflict Photography 1890-1918 and Visual AI) project, this article considers potentially problematic metadata and how it affects the accessibility of digital visual archives. The authors deliberate how metadata creation and enrichment could be improved through Artificial Intelligence (AI) tools and explore the practical applications of AI-reliant tools to analyse a large corpus of photographs and create or enrich metadata. The amount of visual data created by digitisation efforts is not always followed by the creation of contextual metadata, which is a major problem for archival institutions and their users, as metadata directly affects the accessibility of digitised records. Moreover, the scale of digitisation efforts means it is often beyond the scope of archivists and other record managers to individually assess problematic or sensitive images and their metadata. Additionally, existing metadata for photographic and visual records are presenting issues in terms of out-dated descriptions or inconsistent contextual information. As more attention is given to the creation of accessible digital content within archival institutions, we argue that too little is being given to the enrichment of record data. In this article, the authors ask how new tools can address incomplete or inaccurate metadata and improve the transparency and accessibility of digital visual records.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"16 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90005767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Wikibase Model for Premodern Manuscript Metadata Harmonization, Linked Data Integration, and Discovery
Pub Date: 2023-05-16, DOI: https://doi.org/10.1145/3594723
M. Koho, L. Coladangelo, Lynn Ransom, Doug Emery
To facilitate the discovery of premodern manuscripts in US memory institutions, Digital Scriptorium, a growing consortium of over thirty-five institutional members representing American libraries, museums, and other cultural heritage institutions, has developed a digital platform for an online national union catalog. The platform will allow low-barrier, efficient collection, aggregation, and enrichment of member metadata and will sustainably publish it as Linked Open Data. This article describes the methods and principles behind the data model development and the decision to use Wikibase. The results of the prototype implementation and testing phase demonstrate the practicality and sustainability of Digital Scriptorium’s approach to building an online national union catalog based on Linked Open Data technologies and practices.
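Since the catalog is published as Linked Open Data on a Wikibase instance, it can be consumed through a SPARQL endpoint. The sketch below shows a generic Python query against such an endpoint; the endpoint URL and the property/item identifiers are placeholders, not Digital Scriptorium's actual data model.

```python
"""Hypothetical sketch of consuming Linked Open Data from a Wikibase SPARQL endpoint.
The endpoint URL and the property/class identifiers below are placeholders."""
import requests

ENDPOINT = "https://example-wikibase.org/query/sparql"   # placeholder endpoint
QUERY = """
SELECT ?manuscript ?label WHERE {
  ?manuscript wdt:P1 wd:Q1 .          # placeholder: 'instance of' a manuscript record
  ?manuscript rdfs:label ?label .
  FILTER(LANG(?label) = "en")
} LIMIT 10
"""

response = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"}, timeout=30)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["manuscript"]["value"], "-", binding["label"]["value"])
```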
{"title":"A Wikibase Model for Premodern Manuscript Metadata Harmonization, Linked Data Integration, and Discovery","authors":"M. Koho, L. Coladangelo, Lynn Ransom, Doug Emery","doi":"10.1145/3594723","DOIUrl":"https://doi.org/10.1145/3594723","url":null,"abstract":"To facilitate discovery of premodern manuscripts in US memory institutions, Digital Scriptorium, a growing consortium of over thirty-five institutional members representing American libraries, museums, and other cultural heritage institutions, has developed a digital platform for an online national union catalog. The platform will allow low-barrier and efficient collection, aggregation, and enrichment of member metadata and sustainably publish it as Linked Open Data. This article describes the methods and principles behind the data model development and the decision to use Wikibase. The results of the prototype implementation and testing phase demonstrate the practicality and sustainability of Digital Scriptorium’s approach to building an online national union catalog based on Linked Open Data technologies and practices.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"111 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79618382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}