Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847045
Hani K. M. Abd El-Salam
The extravagance of product electronic-channel media and digital data is no longer merely a product-big-data facet. A "True Satisfaction" understanding of Electronic Marketing (e-M) data-information-knowledge management, from the strategic to the operational level, provokes and shares the multidimensional, interdisciplinary, and cross-disciplinary satisfaction implications (resource data, experience information, and consequent knowledge) of all participating actors in a product e-M satisfaction process environment. The "True Satisfaction" logic argues that operational qualitative information research facilitates and illustrates strategic quantitative data research, and that quantitative research follows the same route; both approaches shape the available User Satisfaction (US) functional data-information suitability and Product Satisfaction (PS) strategic-to-operational completeness and interoperability, within an accumulative e-M Environment (e-ME) knowledge capability, to form an intentional, logical satisfaction perspective. The framework semantics convey that "the e-M implementation methodology is the satisfaction projection, and the projection inferences, of US qualitative strategic physiological requirements upon the PS quantitative operational physiological requirements, for an entity investigation of theoretical, intentional-perspective, and philosophical satisfaction backgrounds." Consequently, the implementation technology utilizes both quantitative and qualitative satisfaction research approaches: they are strategically aligned, accompanied by integrity development, and implemented using transformation-logic evaluation, in parallel with satisfaction web analytics covering knowledge production, validation, and integration perspectives; e-ME web-analytics semantic-mechanism evaluation is nonetheless obtainable within one methodology.
{"title":"True Satisfaction in Product e-Marketing Data-Information Knowledge Acquisition Evaluation Ontology; with focus on User Satisfaction Implementation Semantic Mechanism Methodology","authors":"Hani K. M. Abd El-Salam","doi":"10.1109/ICDIM.2018.8847045","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847045","url":null,"abstract":"The extravagance of the product electronic channel media and/or digital data is no longer a (product-big-data) facet, where “True Satisfaction” understanding of an Electronic Marketing (e-M) (data-information-knowledge strategic-to-operational) management; is provoking and sharing the satisfaction multidimensional inter-disciplinary and cross-disciplinary, resources data, experience information, consequences knowledge implications of all participant actors in a product e-M satisfaction true process environment. The “True Satisfaction” logic argues; that operational qualitative information research facilitates and illustrates strategic quantitative data research, and quantitative research do the same route, where both approach shape the available User Satisfaction (US) functional data-information suitability and PS context strategic-to-operational completeness interoperability, in a accumulative e-M Environment (e-ME) knowledge ability, to shape an intentional logical satisfaction perspective knowledge. While the framework semantic; convoys that, “ e-M implementation methodology is; the satisfaction projection and projection inferences of, US qualitative strategic physiological requirements, upon the Product Satisfaction (PS) quantitative operational physiological requirements, for an entity investigation of theoretical, intentional perspective and philosophical satisfaction backgrounds.”. 
Consequentially, the implementation technology; utilizes both quantitative and qualitative satisfaction research approaches, where they are strategically alignment, convoyed with integrity development, and implemented consuming transformation logic evaluation; in parallel, with satisfaction web analytics; knowledge production, validation process and integration perspectives’; nonetheless, e-ME web analytics semantic mechanisms evaluation is obtainable in one methodology.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127725134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847004
Pingpeng Yuan, Lijian Fan, Hai Jin
The volume of RDF data has continued to grow over the past decade, and many well-known RDF datasets now contain billions of triples. A grand challenge in managing such huge RDF data is how to access it efficiently. A popular approach to the problem is to build a full set of permutations of (S, P, O) indexes. Although this approach has been shown to accelerate joins by orders of magnitude, its large space overhead limits its scalability and makes it heavyweight. In this paper, we present TripleBit+, a fast and compact system for updating RDF data. The design of TripleBit+ has two salient features. First, its efficient maintenance strategies reduce the overhead of updating both data and indexes. Second, it proposes effective maintenance techniques for handling online updates over RDF repositories. Our experiments show that TripleBit+ outperforms RDF-3X, MonetDB, and BitMat on LUBM, UniProt, and BTC 2012 benchmark queries, offering orders-of-magnitude performance improvements for some complex join queries. Our design also yields task rates as high as 660,000 per second and average task response times faster than those of x-RDF-3X and PostgreSQL.
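The full-permutation indexing scheme the paper takes as its baseline can be illustrated with a toy store (a simplified sketch of the general technique; TripleBit+'s actual compact storage and update structures are not reproduced here):

```python
from collections import defaultdict
from itertools import permutations

class TripleStore:
    """Toy RDF store keeping all six (S, P, O) permutation indexes.

    Illustrates the space/speed trade-off discussed above: every triple
    is stored six times so that any triple pattern with bound terms can
    be answered by a single prefix lookup in the matching permutation.
    """
    ORDERS = [''.join(p) for p in permutations('SPO')]  # SPO, SOP, PSO, ...

    def __init__(self):
        # one nested index per permutation: first term -> second -> {third}
        self.idx = {o: defaultdict(lambda: defaultdict(set)) for o in self.ORDERS}

    def add(self, s, p, o):
        t = {'S': s, 'P': p, 'O': o}
        for order in self.ORDERS:
            a, b, c = (t[k] for k in order)
            self.idx[order][a][b].add(c)

    def match(self, s=None, p=None, o=None):
        """Answer a triple pattern; None means a wildcard."""
        bound = {'S': s, 'P': p, 'O': o}
        # pick the permutation whose prefix covers the bound terms
        order = min(self.ORDERS,
                    key=lambda ord_: [bound[k] is None for k in ord_])
        results = []
        first = self.idx[order]
        keys1 = [bound[order[0]]] if bound[order[0]] is not None else list(first)
        for k1 in keys1:
            second = first.get(k1, {})
            keys2 = [bound[order[1]]] if bound[order[1]] is not None else list(second)
            for k2 in keys2:
                for k3 in second.get(k2, set()):
                    if bound[order[2]] in (None, k3):
                        triple = dict(zip(order, (k1, k2, k3)))
                        results.append((triple['S'], triple['P'], triple['O']))
        return results
```

The sixfold replication is exactly the space overhead the abstract identifies as the scalability bottleneck that compact designs such as TripleBit+ aim to avoid.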
{"title":"High Performance RDF Updates with TripleBit +","authors":"Pingpeng Yuan, Lijian Fan, Hai Jin","doi":"10.1109/ICDIM.2018.8847004","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847004","url":null,"abstract":"The volume of RDF data continues to grow over the past decade and many known RDF datasets have billions of triples. A grant challenge of managing this huge RDF data is how to access this big RDF data efficiently. A popular approach to addressing the problem is to build a full set of permutations of (S, P, O) indexes. Although this approach has shown to accelerate joins by orders of magnitude, the large space overhead limits the scalability of this approach and makes it heavyweight. In this paper, we present TripleBit +, a fast and compact system for updating RDF data. The design of TripleBit + has two salient features. First, the efficient maintenance strategies of TripleBit + reduces both the overhead to update data and indexes. Second, effective maintenance technologies to handle online updates over RDF repositories are proposed. Our experiments show that TripleBit + outperforms RDF-3X, MonetDB, BitMat on LUBM, UniProt, and BTC 2012 benchmark queries and it offers orders of mangnitude performance improvement for some complex join queries. 
Our design also yields high task rates as high as 660,000 per second and fast average response time of task which is faster than x-RDF-3X and PostgreSQL.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133891852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/icdim.2018.8847098
{"title":"ICDIM 2018 Message from the Chairs","authors":"","doi":"10.1109/icdim.2018.8847098","DOIUrl":"https://doi.org/10.1109/icdim.2018.8847098","url":null,"abstract":"","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129725687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/icdim.2018.8846994
{"title":"ICDIM 2018 Author Index","authors":"","doi":"10.1109/icdim.2018.8846994","DOIUrl":"https://doi.org/10.1109/icdim.2018.8846994","url":null,"abstract":"","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124436447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847061
Xuemei Bai
An improved text classification method combining long short-term memory (LSTM) units and an attention mechanism is proposed in this paper. First, preliminary features are extracted by a convolution layer. Then, the LSTM stores context history information through three gate structures: input gates, forget gates, and output gates. The attention mechanism generates a semantic code containing the attention probability distribution and highlights the effect of each input on the output. This hybrid model optimizes traditional models to represent features more accurately. Simulations show that the proposed algorithm outperforms the RNN and CNN algorithms, which suffer from long-distance dependency problems. The results also show that the proposed algorithm works better than a plain LSTM by highlighting the impact of critical inputs on the model.
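The attention step described above (scoring each LSTM hidden state, normalising the scores into a probability distribution, and forming a weighted semantic code) can be sketched with NumPy; the scoring vector `w` stands in for learned parameters and is purely illustrative:

```python
import numpy as np

def attention_pool(hidden_states, w):
    """Attention pooling over a sequence of (e.g. LSTM) hidden states.

    hidden_states: (T, d) array, one vector per time step.
    w:             (d,) scoring vector (a stand-in for learned weights).
    Returns the attention probability distribution and the weighted
    context vector ("semantic code"), mirroring the idea of
    highlighting critical inputs.
    """
    scores = hidden_states @ w                     # (T,) relevance scores
    scores = scores - scores.max()                 # stabilise the softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention probabilities
    context = alpha @ hidden_states                # (d,) semantic code
    return alpha, context
```

Time steps with high scores dominate the context vector, which is how the mechanism counteracts the long-distance dependency problem of plain recurrent pooling.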
{"title":"Text classification based on LSTM and attention","authors":"Xuemei Bai","doi":"10.1109/ICDIM.2018.8847061","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847061","url":null,"abstract":"An improved text classification method combining long short-term memory (LSTM) units and attention mechanism is proposed in this paper. First, the preliminary features are extracted from the convolution layer. Then, LSTM stores context history information with three gate structures - input gates, forget gates, and output gates. Attention mechanism generates semantic code containing the attention probability distribution and highlights the effect of input on the output. This mixed system model optimizes traditional models to represent features more accurately. The simulation shows that the proposed algorithm in this paper outperformed the RNN algorithm and the CNN algorithm which have long-distance dependency problem. Besides, the results also prove that the proposed algorithm works better than the LSTM algorithm by highlighting the impact of critical input in LSTM on the model.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132665981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847079
E. Sultanow, André Ullrich, Stefan Konopik, Gergana Vladova
Machine Learning is often associated with predictive analytics, for example with the prediction of buying and termination behavior, maintenance times, or the lifespan of parts, tools, or products. However, Machine Learning can also serve other purposes, such as identifying potential errors in a mission-critical large-scale IT process of the public sector. Delayed troubleshooting can be expensive depending on the error's severity; a hotfix may become essential. This paper examines an approach particularly suitable for Static Code Analysis in such a critical environment. To this end, we utilize a specially developed Machine Learning based approach, including a prototype, that finds hidden potential for failure that classical Static Code Analysis does not detect.
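As a rough illustration of the feature-extraction side of such a pipeline, the sketch below computes a few structural metrics from Python source using the standard `ast` module. The paper does not disclose its actual feature set, so these metrics are hypothetical stand-ins for the inputs a classifier could learn failure patterns from:

```python
import ast

def code_metrics(source):
    """Extract simple structural features from Python source code.

    Returns lines of code, the number of branching statements, and the
    maximum AST nesting depth -- illustrative features only, not the
    ones used by the paper's prototype.
    """
    tree = ast.parse(source)
    branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                   for n in ast.walk(tree))

    def depth(node, d=0):
        # deepest path from this node down through its children
        return max([depth(c, d + 1) for c in ast.iter_child_nodes(node)],
                   default=d)

    return {
        'loc': len(source.splitlines()),
        'branches': branches,
        'max_depth': depth(tree),
    }
```

Feature vectors like these, labeled with known past failures, are the kind of training data a supervised model could use to flag code that rule-based analysis passes over.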
{"title":"Machine Learning based Static Code Analysis for Software Quality Assurance","authors":"E. Sultanow, André Ullrich, Stefan Konopik, Gergana Vladova","doi":"10.1109/ICDIM.2018.8847079","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847079","url":null,"abstract":"Machine Learning is often associated with predictive analytics, for example with the prediction of buying and termination behavior, with maintenance times or the lifespan of parts, tools or products. However, Machine Learning can also serve other purposes such as identifying potential errors in a mission-critical large-scale IT process of the public sector. A delay of troubleshooting can be expensive depending on the error's severity- a hotfix may become essential. This paper examines an approach, which is particularly suitable for Static Code Analysis in such a critical environment. For this, we utilize a specially developed Machine Learning based approach including a prototype that finds hidden potential for failure that classical Static Code Analysis does not detect.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128142762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847010
P. K. Wamuyu, J. R. Ndiege
Devolved governments, such as the county and regional governments around the world, have a constitutional responsibility to find sustainable ways to meet the material, social, and economic responsibilities of improving the quality of their citizens' lives by providing high-quality services and decent work for their employees. Kenya's 2014-2017 Council of Governors strategic plan postulated the enactment of a knowledge management strategy under which good practices and lessons learnt within any county government would be documented and disseminated in appropriate forums to other counties. However, the 2017-2022 strategic plan indicates that there is a lack of a structured mechanism for systematic knowledge sharing and organizational learning among the county governments, despite the council's efforts to share information through statutory annual reports, devolution conferences, and quarterly sectoral committee meetings; it nevertheless envisions a systematic mechanism for sharing experiences among the county governments. The intention of this study was to assess current knowledge management practices among the county governments in Kenya; to identify and articulate knowledge management concepts useful to the public services sector among devolved governments in developing countries; and to model these practices into a framework that can support continuous sharing of experiences, lessons, and innovations within and among the county governments in Kenya. Theoretical frameworks and models of knowledge management in governance, governments, and e-government were considered, and a conceptual framework for successful knowledge management initiatives among county and regional governments was formulated. The proposed conceptual framework was evaluated through a focus group discussion with participants drawn from the Council of Governors' Maarifa Center employees.
The study proposes a framework to facilitate effective sharing of experiences among county employees and between different county governments, and to manage and enhance knowledge management initiatives among the devolved governments. The results indicate sporadic, nascent knowledge management practices rather than well-planned initiatives within the counties. The study provides recommendations for the Council of Governors and other policy makers on how to manage knowledge management initiatives, and suggests future research directions for researchers with similar interests.
{"title":"Conceptualization of a Knowledge Management Framework for Governments: A case of Devolved County Governments in Kenya","authors":"P. K. Wamuyu, J. R. Ndiege","doi":"10.1109/ICDIM.2018.8847010","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847010","url":null,"abstract":"Devolved governments such as the county and regional governments around the world have a constitutional responsibility to find sustainable ways through which they can meet material, social, and economic responsibilities of improving the quality of the lives of their citizens by providing high-quality services and decent work for their employees. The 2014-2017 Kenya’s Council of Governors strategic plan postulated enactment of a knowledge management strategy where good practices and lessons learnt within any county government should be documented and disseminated in appropriate forums to other counties. However, the 2017-2022 strategic plan indicates that there is lack of a structured mechanism for systematic knowledge sharing and organizational learning among the county governments despite the council’s effort to share information through statutory annual reports, devolution conferences and quarterly sectoral committee meetings. But, the 2017-2022 strategic plan envisions a systematic mechanism for sharing experiences among the county governments. The intention of this study was to assess the current knowledge management practices among the county governments in Kenya; to identify, and articulate knowledge management concepts that are useful to the public services sector among devolved governments in developing countries; and to model these practices into a framework that can support continuous sharing of experiences, lessons and innovations within and among the county governments in Kenya. 
Theoretical frameworks and models of knowledge management in governance, governments and e-governments were considered and a conceptual framework for successful knowledge management initiatives among county and regional governments was formulated. The proposed conceptual framework was evaluated using a focus group discussion with participants drawn from the Council of Governors’ Maarifa Center employees. The study proposes a framework to facilitate effective sharing of experiences among county employees, between different county governments and to manage and enhance knowledge management initiatives among the devolved governments. The study results indicate some sporadic nascent knowledge management practices rather than well planned initiatives within the counties. The study provides recommendations for the Council of Governors and other policy makers on how to manage knowledge management initiatives, while suggestions for future research directions for researchers with similar interests are given.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133496086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847076
Ferry Astika Saputra, Muhammad Fajar Masputra, I. Syarif, K. Ramli
To date, malware driven by botnet activity is one of the most serious cybersecurity threats faced by internet communities. Researchers have proposed data-mining-based IDSs as an alternative to misuse-based and anomaly-based IDSs for detecting botnet activity. In this paper, we propose a new method that improves IDS performance in detecting botnets. Our method combines two statistical methods, namely a low variance filter and a Pearson correlation filter, in the feature-selection process. To show that our method can increase the performance of a data-mining-based IDS, we use accuracy and computational time as evaluation parameters. A benchmark intrusion dataset (ISCX2017) is used to evaluate our work. Our method reduces the number of features processed by the IDS from 77 to 15. Although the number of features decreases, accuracy does not change significantly, while the computational time drops from 71 seconds to 5.6 seconds.
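The two-stage filter can be sketched as follows; the threshold values are illustrative assumptions, since the abstract does not state the exact cut-offs used:

```python
import numpy as np

def select_features(X, var_threshold=0.01, corr_threshold=0.95):
    """Two-stage statistical feature selection: drop near-constant
    features, then drop one of each highly correlated pair.

    X: (n_samples, n_features) array.  Thresholds are illustrative.
    Returns the indices of the kept features.
    """
    # stage 1: low variance filter -- remove near-constant columns
    keep = [j for j in range(X.shape[1]) if X[:, j].var() > var_threshold]

    # stage 2: Pearson correlation filter on the survivors --
    # keep a feature only if it is not redundant with one already kept
    selected = []
    for j in keep:
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_threshold
            for k in selected)
        if not redundant:
            selected.append(j)
    return selected
```

Both filters are cheap, label-free passes over the data, which is why the combination can cut the feature count sharply (here, 77 to 15) without retraining-heavy wrapper methods.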
{"title":"Botnet Detection in Network System Through Hybrid Low Variance Filter, Correlation Filter and Supervised Mining Process","authors":"Ferry Astika Saputra, Muhammad Fajar Masputra, I. Syarif, K. Ramli","doi":"10.1109/ICDIM.2018.8847076","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847076","url":null,"abstract":"To date, malware caused by botnet activities is one of the most serious cybersecurity threats faced by internet communities. Researchers have proposed data-mining-based IDS as an alternative solution to misuse-based IDS and anomaly-based IDS to detect botnet activities. In this paper, we propose a new method that improves IDS performance to detect botnets. Our method combines two statistical methods, namely low variance filter and Pearson correlation filter, in the feature-selection process. To prove our method can increase the performance of a data-mining-based IDS, we use accuracy and computational time as parameters. A benchmark intrusion dataset (ISCX2017) is used to evaluate our work. Thus, our method reduces the number of features to be processed by the IDS from 77 to 15. Although the number of features decreases, it does not significantly change the accuracy. The computational time is decreased from 71 seconds to 5.6 seconds.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125616920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847169
J. Monteiro, João Barata, M. Veloso, Luis Veloso, J. Nunes
We present a model to implement digital twins in sustainable agriculture. Our two-year research project follows the design science research paradigm, aiming at the joint creation of physical and digital layers of IoT-enabled structures for vertical farming. The proposed model deploys IoT to (1) improve productivity, (2) allow self-configuration to environmental changes, (3) promote energy saving, (4) ensure self-protection with continuous structural monitoring, and (5) reach self-optimization learning from multiple data sources. Our model shows how digital twins can contribute to the agrofood lifecycle of planning, operation, monitoring, and optimization. Moreover, it clarifies the interconnections between goals, tasks, and resources of IoT-enabled structures for sustainable agriculture, which is one of the biggest human challenges of this century.
{"title":"Towards Sustainable Digital Twins for Vertical Farming","authors":"J. Monteiro, João Barata, M. Veloso, Luis Veloso, J. Nunes","doi":"10.1109/ICDIM.2018.8847169","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847169","url":null,"abstract":"We present a model to implement digital twins in sustainable agriculture. Our two-year research project follows the design science research paradigm, aiming at the joint creation of physical and digital layers of IoT-enabled structures for vertical farming. The proposed model deploys IoT to (1) improve productivity, (2) allow self-configuration to environmental changes, (3) promote energy saving, (4) ensure self-protection with continuous structural monitoring, and (5) reach self-optimization learning from multiple data sources. Our model shows how digital twins can contribute to the agrofood lifecycle of planning, operation, monitoring, and optimization. Moreover, it clarifies the interconnections between goals, tasks, and resources of IoT-enabled structures for sustainable agriculture, which is one of the biggest human challenges of this century.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124781913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-09-01DOI: 10.1109/ICDIM.2018.8847133
R. Berka, Bohus Ziskal, Z. Trávníček
Documenting and preserving multimedia performances in the digital domain poses a serious challenge, as many data types (e.g. video, audio, text, images, generic documents, and motion data) must be stored and searched while maintaining the proper relations among all performance components. Memory institutions express the need for data models and tools that preserve the complexity of a work together with all metadata already created in existing cataloguing systems. Additionally, the performance documentation should include a component describing the actors' movement on stage that can serve both for reconstruction and presentation; moreover, its specific segments need to be identified and documented or linked separately. In this paper, we discuss existing models, suggest an adequate approach informed by existing data-aggregation projects and standards, and evaluate methods for documenting motion, including search and segmentation algorithms. Based on actual needs, and using data from the Laterna Magica project aimed at national heritage preservation, we propose suitable data structures and an application for complex documentation management and presentation, intended both for professionals and for the general public.
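One simple way to segment motion-capture data of the kind discussed here is to split a trajectory at pauses, i.e. runs of frames with low frame-to-frame speed. The heuristic below is an illustrative assumption, not the specific segmentation algorithm the authors evaluate:

```python
import numpy as np

def segment_motion(positions, pause_speed=0.05, min_len=5):
    """Split a motion-capture trajectory into segments at pauses.

    positions:   (T, d) array of joint/root positions per frame.
    pause_speed: frames moving slower than this are treated as pauses.
    min_len:     segments shorter than this many frames are discarded.
    Returns a list of (start, end) frame index pairs.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    moving = speed >= pause_speed
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # a movement segment begins
        elif not m and start is not None:
            if i - start >= min_len:       # close it at the pause
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments
```

Segments produced this way could then be indexed and linked individually, matching the requirement that specific movement segments be documented separately.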
{"title":"Flexible Approach to Documenting and Presenting Multimedia Performances Using Motion Capture Data","authors":"R. Berka, Bohus Ziskal, Z. Trávníček","doi":"10.1109/ICDIM.2018.8847133","DOIUrl":"https://doi.org/10.1109/ICDIM.2018.8847133","url":null,"abstract":"Multimedia performance documentation and preservation processes in digital domain mean a serious challenge as there is a necessity to store and search through many data types (e.g. video, audio, text, images, generic documents and motion data) while maintaining proper relations among all performance components. Memory institutions express the need for appropriate data models and tools that allow for preserving complexity of a work preserved together with all metadata already created in existing cataloguing systems. Additionally, the performance documentation should include a component describing actor’s movement on stage that can serve both for its reconstruction and presentation, moreover, its specific segments need to be identified and documented/linked separately. In this paper, we discuss existing models, suggest an adequate approach informed by existing data aggregation projects and standards, and evaluate methods for documenting motion including search and segmentation algorithms. 
Based on actual needs and using the data from Laterna Magica project aimed at national heritage preservation, we propose suitable data structures and an application for the complex documentation management and presentation intended both for professionals and the general public.","PeriodicalId":120884,"journal":{"name":"2018 Thirteenth International Conference on Digital Information Management (ICDIM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125514937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}