The concept of persistent identification is increasingly important for research data management. Initially it was considered merely a persistent naming mechanism for research datasets, achieved by providing an abstraction over dataset addresses. However, recent developments in research data management have pushed persistent identification towards a concept that realizes a virtual global research data network. The basis for this is the ability of persistent identifiers to hold semantic information about the identified dataset itself. Hence, community-specific representations of research datasets are mapped into globally common data structures provided by persistent identifiers. This ultimately enables standardized data exchange between diverse scientific fields. For the immense number of research datasets, a robust and performant global resolution system is therefore essential. However, the number of resolution systems for persistent identifiers is extremely small compared with the number of DNS resolvers. For the Handle System, for instance, the most established persistent identifier system, only five globally distributed resolvers are currently available. The fundamental idea of this work is therefore to enable persistent identifier resolution over DNS traffic. On the one hand, this leads to faster resolution of persistent identifiers; on the other hand, it transforms the DNS into a data dissemination system.
{"title":"DNS as resolution infrastructure for persistent identifiers","authors":"Fatih Berber, R. Yahyapour","doi":"10.15439/2017F114","DOIUrl":"https://doi.org/10.15439/2017F114","url":null,"abstract":"The concept of persistent identification is increasingly important for research data management. At the beginnings it was only considered as a persistent naming mechanism for research datasets, which is achieved by providing an abstraction for addresses of research datasets. However, recent developments in research data management have led persistent identification to move towards a concept which realizes a virtual global research data network. The base for this is the ability of persistent identifiers of holding semantic information about the identified dataset itself. Hence, community-specific representations of research datasets are mapped into globally common data structures provided by persistent identifiers. This ultimately enables a standardized data exchange between diverse scientific fields. Therefore, for the immense amount of research datasets, a robust and performant global resolution system is essential. However, for persistent identifiers the number of resolution systems is in comparison to the count of DNS resolvers extremely small. For the Handle System for instance, which is the most established persistent identifier system, there are currently only five globally distributed resolvers available. The fundamental idea of this work is therefore to enable persistent identifier resolution over DNS traffic. On the one side, this leads to a faster resolution of persistent identifiers. 
On the other side, this approach transforms the DNS system to a data dissemination system.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"12 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120862259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
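The core step of resolving a persistent identifier over DNS is mapping the identifier into a DNS owner name that an ordinary resolver can query. The abstract does not give the paper's actual encoding, so the zone name `pid.example.` and the label-sanitizing rules below are purely illustrative assumptions:

```python
# Hypothetical sketch: encode a Handle-style persistent identifier
# ("prefix/suffix") as a DNS owner name so that a standard DNS query
# could carry the resolution request. The zone "pid.example." and the
# label encoding are assumptions, not the paper's actual scheme.

def handle_to_dns_name(handle: str, zone: str = "pid.example.") -> str:
    """Map 'prefix/suffix' to '<suffix>.<prefix>.<zone>', sanitizing
    labels for DNS (dots inside the prefix become hyphens)."""
    prefix, _, suffix = handle.partition("/")
    if not suffix:
        raise ValueError("expected '<prefix>/<suffix>'")
    safe_prefix = prefix.replace(".", "-").lower()
    safe_suffix = suffix.replace("/", "-").lower()
    return f"{safe_suffix}.{safe_prefix}.{zone}"

print(handle_to_dns_name("10.15439/2017F114"))
# 2017f114.10-15439.pid.example.
```

A resolver-side component could then answer such queries from, for example, TXT records holding the identifier's semantic payload; that part is omitted here.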
High Efficiency Video Coding (HEVC), a modern video compression standard, exceeds its predecessor H.264 in efficiency by 50%, but at the cost of increased complexity. It is one of the main research topics for FPGA engineers working on image compression algorithms. High-level synthesis tools, after a few years of waning interest from industry and academic research, have recently started to regain attention. This paper presents an FPGA implementation of the HEVC 2D inverse DCT transform on a Xilinx Virtex-6 using the Impulse C high-level language. The achieved results exceed 1080p@30fps with a relatively high FPGA clock frequency and moderate resource usage.
{"title":"H.265 inverse transform FPGA implementation in Impulse C","authors":"Slawomir Cichon, M. Gorgon","doi":"10.15439/2017F185","DOIUrl":"https://doi.org/10.15439/2017F185","url":null,"abstract":"High Efficiency Video Coding (HEVC), a modern video compression standard, exceeds the predecessor H.264 in efficiency by 50%, but with cost of increased complexity. It is one of main research topics for FPGA engineers working on image compression algorithms. On the other hand high-level synthesis tools after few years of lower interest from the industry and academic research, started to gain more of it recently. This paper presents FPGA implementation of HEVC 2D Inverse DCT transform implemented on Xilinx Virtex-6 using Impulse C high level language. Achieved results exceed 1080p@30fps with relatively high FPGA clock frequency and moderate resource usage.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127244493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
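The 2D inverse transform the paper implements is separable: the inverse is applied to the rows and then to the columns of the coefficient block. As a rough functional sketch (not the codec's integer arithmetic: HEVC uses a scaled integer approximation of the DCT basis, whereas this uses a float orthonormal DCT-II matrix for a 4x4 block):

```python
# Simplified separable 2D DCT/IDCT on a 4x4 block using a float
# orthonormal DCT-II basis. HEVC's actual transform is an integer
# approximation with shifts and clipping; this is only the idea.
import math

N = 4

# Orthonormal DCT-II basis matrix D[k][n].
D = [[(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)] for k in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def forward_2d(block):   # coefficients C = D * X * D^T
    return matmul(matmul(D, block), transpose(D))

def inverse_2d(coeffs):  # reconstruction X = D^T * C * D
    return matmul(matmul(transpose(D), coeffs), D)
```

Because `D` is orthonormal, `inverse_2d(forward_2d(X))` recovers `X` up to floating-point error; the hardware version replaces the two matrix products with pipelined butterfly stages.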
Przemysław Kucharski, Dawid Sielski, K. Grudzień, Wiktor Kozakiewicz, Michal Basiuras, Klaudia Greif, Jakub Santorek, L. Babout
The subject of this paper is a comparison of two multi-touch interactive surfaces of different modalities, based on both user experience and measurement results, in order to examine how their properties influence usefulness, specifically their fitness to act as a “coffee table”. Tests were conducted on the Microsoft PixelSense (also known as Surface) and a Samsung touch screen overlay, both over 40 inches diagonally. The study covers an analysis of the obtained measurements and a summary of user experience collected over a number of summits and experiments. While tests for both devices returned very similar results, with the overlay somewhat more favorable, neither device could truly fit the tested use case due to inconvenience, form factor and other issues.
{"title":"Comparative analysis of multitouch interactive surfaces","authors":"Przemysław Kucharski, Dawid Sielski, K. Grudzień, Wiktor Kozakiewicz, Michal Basiuras, Klaudia Greif, Jakub Santorek, L. Babout","doi":"10.15439/2017F393","DOIUrl":"https://doi.org/10.15439/2017F393","url":null,"abstract":"The subject of this paper is to compare two different modality multi-touch interactive surfaces based on both: user experience and results of measurements in order to examine how different properties influence usefulness, in specific, their fitness to act as a “coffee table”. Tests were conducted on the Microsoft PixelSense (AKA Surface) and a Samsung touch screen overlay both 40+ inches diagonally. The study covers analysis of obtained measurements and summary of user experience collected over a number of summits and experiments. While tests for both devices returned very similar results, with the overlay more favorable, neither device could truly fit the tested use case due to their inconvenience, form factor and other issues.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125796199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Component-based software development enables applications to be constructed from reusable components providing particular functionalities, and simplifies application evolution. To ensure the correct functioning of a given component-based application and its preservation across evolution steps, it is necessary to test not only the functional properties of the individual components but also the correctness of their mutual interactions and cooperation. This is complicated by the fact that third-party components often come without source code and/or documentation of functional and interaction properties. In this paper, we describe an approach for rigorous semi-automated testing of software components whose source code is unavailable. Using an automated analysis of the component interfaces, scenarios are created that invoke methods with generated parameter values. When these scenarios are performed on a stable application version and their runtime effects (component interactions) are recorded, the resulting scenarios with recorded effects can be used for accurate regression testing of newly installed versions of selected components.
{"title":"Interface-based semi-automated testing of software components","authors":"T. Potuzak, Richard Lipka, Přemek Brada","doi":"10.15439/2017F139","DOIUrl":"https://doi.org/10.15439/2017F139","url":null,"abstract":"The component-based software development enables to construct applications from reusable components providing particular functionalities and simplifies application evolution. To ensure the correct functioning of a given component-based application and its preservation across evolution steps, it is necessary to test not only the functional properties of the individual components but also the correctness of their mutual interactions and cooperation. This is complicated by the fact that third-party components often come without source code and/or documentation of functional and interaction properties. In this paper, we describe an approach for performing rigorous semi-automated testing of software components with unavailable source code. Utilizing an automated analysis of the component interfaces, scenarios invoking methods with generated parameter values are created. When they are performed on a stable application version and their runtime effects (component interactions) are recorded, the resulting scenarios with recorded effects can be used for accurate regression testing of newly installed versions of selected components. 
Our experiences with a prototype implementation show that the approach has acceptable demands on manual work and computational resources.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123567666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
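The interface-driven workflow described above can be illustrated in miniature: enumerate a component's public methods, generate calls with sample values for the declared parameter types, and record the results as a baseline for later regression comparison. The sample component and the type-to-value table are illustrative stand-ins, not the paper's tooling (which targets binary components without source):

```python
# Minimal sketch: derive test scenarios from a component's interface via
# reflection and record their effects as a regression baseline.
import inspect

SAMPLE_VALUES = {int: 7, str: "abc", float: 1.5, bool: True}

class Calculator:  # stand-in "component" whose internals we pretend not to know
    def add(self, a: int, b: int) -> int:
        return a + b
    def shout(self, text: str) -> str:
        return text.upper()

def generate_scenarios(component):
    """Build (method_name, args) pairs from the annotated public interface."""
    scenarios = []
    for name, method in inspect.getmembers(component, inspect.ismethod):
        if name.startswith("_"):
            continue
        sig = inspect.signature(method)
        args = [SAMPLE_VALUES[p.annotation] for p in sig.parameters.values()]
        scenarios.append((name, args))
    return scenarios

def record_baseline(component):
    """Run every generated scenario and record its observable effect."""
    return {name: getattr(component, name)(*args)
            for name, args in generate_scenarios(component)}

baseline = record_baseline(Calculator())
print(baseline)  # {'add': 14, 'shout': 'ABC'}
```

Regression testing of a new component version then amounts to re-running `record_baseline` and diffing the result against the stored one.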
An Enterprise Architecture (EA) is used for the design and realization of business processes, along with user roles, applications, data, and technical infrastructures. Over time, keeping an EA up to date may become a complex issue, let alone an organization-wide architecture and its related artifacts. EA practices provide much of the guidance required for the design and development of EAs. However, they do not offer a comprehensive method or solution for the re-engineering of EAs. In this paper, we propose an EA re-engineering model and present its potential contributions. The study is conducted according to the Design Science Research Method. The research contribution is classified as an “application of a new solution (process model) to a known problem (re-engineering EA)”. Future research efforts will focus on the implementation and evaluation of the model in case studies to gather empirical evidence.
{"title":"Re-engineering enterprise architectures","authors":"M. Uysal, Ali Halici, A. Mergen","doi":"10.15439/2017F17","DOIUrl":"https://doi.org/10.15439/2017F17","url":null,"abstract":"An Enterprise Architecture (EA) is used for the design and realization of the business processes, along with user roles, applications, data, and technical infrastructures. Over time, maintaining an EA update may become a complex issue, let alone an organization-wide architecture and its related artifacts. EA practices provide much of the required guidelines for the design and development of EAs. However, they cannot present a comprehensive method or solution for the re-engineering processes of EAs. In this paper, we propose an EA re-engineering model and present its potential contributions. The study is conducted according to the Design Science Research Method. The research contribution is classified as an “application of a new solution (process model) to a known problem (re-engineering EA)”. The future research efforts will focus on the implementation and evaluation of the model in case studies for gathering empirical evidences.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125307726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, the problem of determining the significance of data features is considered. For this purpose, an algorithm is proposed which, using the Sobol method, provides global sensitivity indices. On the basis of these indices, aggregated sensitivity coefficients are determined and used to indicate significant features. Using this information, the least significant features are removed. The results are verified by a probabilistic neural network in the classification of medical data sets by computing the model's quality. We show that it is possible to identify the least significant features, which can be removed from the input space while achieving higher classification performance.
{"title":"Determining the significance of features with the use of Sobol method in probabilistic neural network classification tasks","authors":"Piotr A. Kowalski, Maciej Kusy","doi":"10.15439/2017F225","DOIUrl":"https://doi.org/10.15439/2017F225","url":null,"abstract":"In this article, the problem of determining the significance of data features is considered. For this purpose the algorithm is proposed, which with the use of Sobol method, provides the global sensitivity indices. On the basis of these indices, the aggregated sensitivity coefficients are determined which are used to indicate significant features. Using such an information, the process of features' removal is performed. The results are verified by the probabilistic neural network in the classification of medical data sets by computing model's quality. We show that it is possible to point the least significant features which can be removed from the input space achieving higher classification performance.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126914644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
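The first-order Sobol index of feature i measures the fraction of output variance explained by that feature alone. A rough Monte Carlo sketch using the pick-freeze (Saltelli-style) estimator is below; the toy test function and sample size are illustrative, not the paper's PNN setup:

```python
# Monte Carlo estimate of first-order Sobol sensitivity indices via the
# pick-freeze estimator: S_i ~= E[f(B) * (f(A_B^i) - f(A))] / Var[f(A)].
import random

def sobol_first_order(f, dim, n=4000, seed=0):
    """Estimate the first-order Sobol index of each input of f on [0,1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # A_B^i: rows of A with column i replaced by the value from B.
        fABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        indices.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / (n * var))
    return indices

# Toy model: the first feature dominates, the second is nearly irrelevant,
# so its index is close to 0 and the feature is a removal candidate.
s = sobol_first_order(lambda x: 10 * x[0] + 0.1 * x[1], dim=2)
```

Features whose aggregated indices fall near zero contribute little output variance and are the candidates for removal before classification.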
Li Li, Bassem Abd-El-Atty, A. El-latif, A. Ghoneim
In this paper, a novel quantum encryption algorithm for color images is proposed, based on multiple discrete chaotic systems. The proposed quantum image encryption algorithm utilizes a quantum controlled-NOT image, generated by a chaotic logistic map, an asymmetric tent map and a logistic Chebyshev map, to control the XOR operation in the encryption process. Experimental results and analysis show that the proposed algorithm has high efficiency and is secure against differential and statistical attacks.
{"title":"Quantum color image encryption based on multiple discrete chaotic systems","authors":"Li Li, Bassem Abd-El-Atty, A. El-latif, A. Ghoneim","doi":"10.15439/2017F163","DOIUrl":"https://doi.org/10.15439/2017F163","url":null,"abstract":"In this paper, a novel quantum encryption algorithm for color image is proposed based on multiple discrete chaotic systems. The proposed quantum image encryption algorithm utilize the quantum controlled-NOT image generated by chaotic logistic map, asymmetric tent map and logistic Chebyshev map to control the XOR operation in the encryption process. Experiment results and analysis show that the proposed algorithm has high efficiency and security against differential and statistical attacks.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122365573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
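The chaotic ingredient of such schemes can be illustrated classically: a logistic-map trajectory is quantized into a keystream that is XORed with the image bytes. This toy omits the quantum controlled-NOT machinery and the tent/Chebyshev maps of the paper; the parameters below are illustrative only:

```python
# Classical toy analogue: logistic-map keystream XORed with image bytes.
# Not the paper's quantum scheme; x0 and r are arbitrary demo values
# (r near 4 keeps the logistic map in its chaotic regime).

def logistic_keystream(length, x0=0.3141592, r=3.99):
    """Iterate x -> r*x*(1-x) and quantize each state to one byte."""
    stream, x = [], x0
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_cipher(data: bytes, x0=0.3141592) -> bytes:
    """XOR data with the chaotic keystream; the same call decrypts."""
    ks = logistic_keystream(len(data), x0)
    return bytes(a ^ b for a, b in zip(data, ks))

pixels = bytes(range(16))            # stand-in "image" data
cipher = xor_cipher(pixels)
assert xor_cipher(cipher) == pixels  # XOR is an involution
```

Because the map is sensitive to the initial condition `x0`, a tiny key change yields a completely different keystream, which is the property the statistical-attack analysis relies on.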
Z. Kotulski, T. Nowak, Mariusz Sepczuk, M. Tunia, Rafal Artych, Krzysztof Bocianiak, Tomasz Osko, Jean-Philippe Wary
There are several reports and white papers which attempt to specify 5G architectural requirements, presenting them from different points of view, including techno-socio-economic impacts and technological constraints. Most of them treat network slicing as a central aspect, often strengthening slices with slice isolation. The goal of this paper is to present and examine isolation capabilities and selected approaches to realizing them in the network slicing context. As the 5G architecture is still evolving, specifying the operation and management of isolated slices brings new requirements that need to be addressed, especially in the context of End-to-End (E2E) security. Thus, an outline of recent trends in slice isolation and a set of challenges are proposed which, if properly addressed, could be a step towards E2E user security based on slice isolation.
{"title":"On end-to-end approach for slice isolation in 5G networks. Fundamental challenges","authors":"Z. Kotulski, T. Nowak, Mariusz Sepczuk, M. Tunia, Rafal Artych, Krzysztof Bocianiak, Tomasz Osko, Jean-Philippe Wary","doi":"10.15439/2017F228","DOIUrl":"https://doi.org/10.15439/2017F228","url":null,"abstract":"There are several reports and white papers which attempt to precise 5G architectural requirements presenting them from different points of view, including techno-socio-economic impacts and technological constraints. Most of them deal with network slicing aspects as a central point, often strengthening slices with slice isolation. The goal of this paper is to present and examine the isolation capabilities and selected approaches for its realization in network slicing context. As the 5G architecture is still evolving, the specification of isolated slices operation and management brings new requirements that need to be addressed, especially in a context of End-to-End (E2E) security. Thus, an outline of recent trends in slice isolation and a set of challenges are proposed, which (if properly addressed) could be a step to E2E user's security based on slices isolation.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126983513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Korczak, Helena Dudycz, Bartłomiej Nita, Piotr Oleksyk
The article presents an approach to integrating business process knowledge into Decision Support Systems. It concerns the major aspects of the system: the formalization of processes predefined in Business Process Modeling Notation, the reuse of a domain ontology, and the analysis of economic and financial information. The described approach continues the construction of the intelligent cockpit for managers (InKoM project), whose main objective was to facilitate financial analysis and the evaluation of the economic status of a company in a competitive market. The current project concerns the design of smart decision support systems based on static (structural) and procedural knowledge. The content of the knowledge focuses on essential financial concepts and relationships related to the management of small and medium enterprises (SME). An experiment was carried out on real financial data extracted from a financial information system.
{"title":"Towards process-oriented ontology for financial analysis","authors":"J. Korczak, Helena Dudycz, Bartłomiej Nita, Piotr Oleksyk","doi":"10.15439/2017F181","DOIUrl":"https://doi.org/10.15439/2017F181","url":null,"abstract":"The article presents an approach to integrate a business process knowledge of Decision Support Systems. It concerns two major aspects of the system, i.e. the formalization of processes predefined in Business Process Modeling Notation, the reuse of a domain ontology, and the analysis of economic and financial information. The described approach is a continuation of the construction of the intelligent cockpit for managers (InKoM project), whose main objective was to facilitate financial analysis and the evaluation of the economic status of the company in a competitive market. The current project is related to the design of smart decision support systems based on static (structural) and procedural knowledge. The content of the knowledge is focused on essential financial concepts and relationships related to the management of small and medium enterprises (SME). An experiment was carried out on real financial data extracted from the financial information system.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121496708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since software plays an ever more important role in measuring instruments, the risk assessments for such instruments required by European regulations will usually also include a risk assessment of the software. Attack trees have been used for several years in similar application scenarios, although previously introduced methods still lack efficient means of representing attacker motivation and prescribe no way of constructing attack scenarios. Here, these trees are developed into attack probability trees, specifically tailored to meet the requirements of software risk assessment. A real-world example based on taximeters is given to illustrate the application of the attack probability tree approach and its advantages.
{"title":"Representation of attacker motivation in software risk assessment using attack probability trees","authors":"M. Esche, F. G. Toro, F. Thiel","doi":"10.15439/2017F112","DOIUrl":"https://doi.org/10.15439/2017F112","url":null,"abstract":"Since software plays an ever more important role in measuring instruments, risk assessments for such instruments required by European regulations will usually include also a risk assessment of the software. Although previously introduced methods still lack efficient means for the representation of attacker motivation and have no prescribed way of constructing attack scenarios, attack trees have been used for several years in similar application scenarios. These trees are here developed into attack probability trees, specifically tailored to meet the requirements for software risk assessment. A real-world example based on taximeters is given to illustrate the application of attack probability trees approach and their advantages.","PeriodicalId":402724,"journal":{"name":"2017 Federated Conference on Computer Science and Information Systems (FedCSIS)","volume":"393 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124293680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
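The basic mechanics of propagating success probabilities through an attack tree can be sketched as follows, assuming independent sub-attacks: an OR-node succeeds if any child succeeds, an AND-node only if all children do. The taximeter scenario and its numbers below are made up for illustration and are not taken from the paper:

```python
# Sketch: bottom-up success probability of an attack tree with
# independent leaves. A node is either a leaf probability (float)
# or a tuple (op, [children]) with op in {"AND", "OR"}.

def attack_probability(node):
    if isinstance(node, float):
        return node
    op, children = node
    probs = [attack_probability(c) for c in children]
    if op == "AND":          # all sub-attacks must succeed
        p = 1.0
        for q in probs:
            p *= q
        return p
    if op == "OR":           # at least one sub-attack succeeds
        p = 1.0
        for q in probs:
            p *= (1 - q)
        return 1 - p
    raise ValueError(f"unknown operator {op!r}")

# Illustrative taximeter tree: tamper either by flashing firmware
# (physical access AND a firmware exploit) OR by spoofing the sensor.
tree = ("OR", [("AND", [0.5, 0.2]), 0.3])
print(attack_probability(tree))  # 1 - (1 - 0.1) * (1 - 0.3) ~= 0.37
```

Attack probability trees additionally weight these values with attacker motivation and effort; the pure probability propagation above is only the structural core.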