Yusuf Cinar and H. Melvin, "WebRTC quality assessment: Dangers of black-box testing," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868687.

We expect that WebRTC will see a high adoption rate as a peer-to-peer real-time communication standard for browsers. WebRTC allows direct media and data transport between browsers without going through a web server. In this study, we use a black-box testing technique to evaluate, via PESQ, the voice quality of WebRTC sessions under varying network delay and jitter. Network emulators are employed to implement the delay and jitter variations. Our results highlight the dangers of black-box testing, whereby test-bed issues can produce very misleading results; this is especially the case when tests are executed on a single machine. The paper also provides an extensible baseline methodology for WebRTC-centric research.
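The way delay and jitter degrade the voice quality that the study measures with PESQ can be illustrated with a toy simulation (all names and parameter values below are illustrative, not taken from the paper): packets sent at a fixed interval acquire a random network delay, and any packet arriving after a fixed playout deadline is effectively lost.

```python
import random

def simulate_arrivals(n_packets, interval_ms, base_delay_ms, jitter_ms, seed=0):
    """Send times are evenly spaced; each packet's network delay is the base
    delay plus uniform jitter. Returns per-packet arrival times (ms)."""
    rng = random.Random(seed)
    arrivals = []
    for i in range(n_packets):
        send_time = i * interval_ms
        delay = base_delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        arrivals.append(send_time + delay)
    return arrivals

def late_packet_ratio(arrivals, interval_ms, buffer_ms):
    """Fraction of packets that miss a fixed playout deadline of
    (send time + buffer_ms); late packets are effectively lost to the codec."""
    late = sum(1 for i, t in enumerate(arrivals) if t > i * interval_ms + buffer_ms)
    return late / len(arrivals)
```

With zero jitter and a playout buffer covering the base delay no packets are late; adding jitter of the same order as the buffer headroom makes a substantial fraction late, which is what drives PESQ scores down in such tests.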
V. Kharchenko, A. Orekhova and A. Orekhov, "Human-machine interface quality assessment techniques: Green and safety issues," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868723.

This paper considers the application of green technologies in the human-machine interfaces (HMIs) of information and control systems (I&Cs). An improved quality model of the user-program interface, which introduces and elaborates green characteristics, is presented. The reliability of I&Cs is analysed using a Markov model that takes into account HMI properties and operator errors. The model is solved with the mathematical package Mathematica, and an analysis of the results is presented. In addition, the paper describes the structure of an information technology for HMI quality assessment that increases the trustworthiness of the assessment.
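The kind of Markov reliability model the paper solves in Mathematica can also be sketched in a few lines of Python: the solver below computes the steady-state distribution of a small continuous-time Markov chain from its generator matrix (the matrix and rates used here are illustrative, not the paper's actual HMI model).

```python
def steady_state(Q):
    """Steady-state distribution pi of a CTMC with generator matrix Q:
    solve pi @ Q = 0 with sum(pi) = 1, replacing one balance equation
    with the normalisation condition (plain Gaussian elimination)."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose: balance eqs
    A[-1] = [1.0] * n                                    # normalisation row
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):                                 # elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                       # back-substitution
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x
```

For a two-state up/down model with failure rate 0.01 and repair rate 0.5 per hour, the steady-state availability it returns matches the closed form mu / (lambda + mu).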
M. Čepin, "Reliability of power systems considering conventional and alternative sources of energy," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868690.

Several methods, tools and measures have been developed that address power system reliability, each from its particular viewpoint. The objective here is to review selected methods in the field of electric power system reliability, keeping in mind the complexity of the system and its features. The review considers their applicability to both conventional and alternative sources of energy. Particular attention is paid to alternative sources whose output depends on weather conditions, such as solar and wind power, which may require a different approach to frequency and voltage control. If conventional sources of a certain power are replaced with alternative sources whose output varies with the weather, the reliability of the system may decrease if the replacement capacity is not sufficiently large. The replacement must therefore satisfy requirements on sufficient power, sufficient energy and reliability, none of which should be reduced.
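The claim that replacing firm capacity with weather-dependent capacity of the same mean output can reduce reliability is easy to demonstrate with a toy Monte Carlo loss-of-load estimate (the model and all figures below are illustrative, not from the paper).

```python
import random

def lolp(demand_mw, firm_mw, variable_mw_mean, variability, trials=20000, seed=1):
    """Monte Carlo loss-of-load probability: firm capacity is always available,
    while variable (weather-dependent) capacity fluctuates uniformly
    within +/- variability around its mean output."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        var_out = variable_mw_mean * rng.uniform(1 - variability, 1 + variability)
        if firm_mw + var_out < demand_mw:
            shortfalls += 1
    return shortfalls / trials
```

Serving a 100 MW demand with 100 MW of firm capacity gives zero loss-of-load probability; swapping 30 MW of it for wind with the same 30 MW mean but high variability makes shortfalls appear, even though the average installed output is unchanged.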
P. Soda, "BioImage Informatics: The challenge of knowledge extraction from biological images," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868733.

Bioimage informatics is a rapidly growing research field that makes fundamental contributions to biology and biomedicine by facilitating the extraction of quantitative information from images. Great advances in biological tissue labelling and microscopic imaging are radically changing how biologists visualize and study molecular and cellular structures. These devices nowadays produce terabyte-sized multi-dimensional images, and automatically and efficiently extracting objective knowledge from them has become a major challenge. In this manuscript we analyse the state of the art of bioimage informatics, with a special focus on neuroscience. We show that there are increasing efforts to deliver methods and software tools for the visualization, representation, management and analysis of 3D multichannel images. Nevertheless, most of them have been applied to datasets of a few MVoxels or GVoxels, where the variations in contrast, illumination, and object shape and size are limited. The huge dimensions of new 3D image stacks therefore call for fully automated processing methods whose parameters are dynamically adapted to different regions of the volume. In this respect, the manuscript examines in depth a recent contribution that digitally charts the Purkinje cells of the whole mouse cerebellum, corresponding to an image dataset of 120 GVoxels.
Matej Mesko and Emil Krsák, "Fast segment iterative algorithm for 3D reconstruction," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868721.

3D model reconstruction has many applications, for example person detection and authentication, model scanning for computer simulation, monitoring, object recognition and navigation. The biggest problem of this approach is its computational complexity; more precisely, the problem lies in searching for differences between multiple input images (e.g. in stereovision). Most existing algorithms search for the shift at each image point to obtain the most detailed disparity map, but the process can be sped up by reducing the number of points that must be processed. This paper describes a new method for fast key-point extraction using a sparse disparity map. The effectiveness of the proposed algorithm comes from its ability to divide the input images into segments in two steps: the first, initial division identifies key-points and is based on local extremes of the Difference of Gaussians; the second division refines the initial division to obtain more detailed results. It is therefore possible to control the level of detail of the output 3D model, and with it the computational demands.
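The initial division is based on local extremes of the Difference of Gaussians. A minimal pure-Python sketch of that step (not the authors' implementation; thresholds and sigmas are illustrative) blurs the image at two scales, subtracts, and keeps strict 3x3 local extrema above a threshold.

```python
import math

def blur(img, sigma):
    """Separable Gaussian blur on a 2D list of floats (edge-clamped)."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-r, r + 1)]
    s = sum(k)
    k = [v / s for v in k]
    h, w = len(img), len(img[0])
    def conv(src, horizontal):
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for i, kv in enumerate(k):
                    d = i - r
                    if horizontal:
                        acc += kv * src[y][min(max(x + d, 0), w - 1)]
                    else:
                        acc += kv * src[min(max(y + d, 0), h - 1)][x]
                out[y][x] = acc
        return out
    return conv(conv(img, True), False)

def dog_keypoints(img, sigma1=1.0, sigma2=2.0, thresh=0.01):
    """Key-points: pixels where |DoG| exceeds thresh and is a strict
    extremum within its 3x3 neighbourhood."""
    a, b = blur(img, sigma1), blur(img, sigma2)
    h, w = len(img), len(img[0])
    dog = [[a[y][x] - b[y][x] for x in range(w)] for y in range(h)]
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = dog[y][x]
            if abs(v) < thresh:
                continue
            nb = [dog[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
            if all(v > n for n in nb) or all(v < n for n in nb):
                pts.append((x, y))
    return pts
```

A single bright spot on a dark background yields a key-point at the spot, while a uniform image yields none; only such candidate points then need disparity processing, which is where the speed-up comes from.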
Michal Kvet and K. Matiaško, "Epsilon temporal data in MRI results processing," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868714.

Brain tumours are serious diseases that can cause death. Medical research focuses on early diagnosis as well as treatment of the patient. This paper deals with the management of magnetic resonance imaging results and their monitoring over time, which is very important during treatment. This requires sophisticated methods for storing the measured values efficiently. The conventional approach does not cover the complexity of the problem: an important part is the method of storing data over time so that the required data can be obtained quickly, together with optimization of the structure with regard to disc space requirements. The paper describes characteristics of and approaches to temporal management and valid-data processing, which is also important for data transmission in networks. The Experiments section compares and evaluates the approaches in terms of time consumption and disc storage requirements.
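The abstract does not spell out the storage scheme, but the title's "epsilon temporal data" suggests storing a new row only when a monitored value drifts by more than some epsilon from the last stored one. A hypothetical sketch of such a store (class and method names are our own illustration, not the paper's design):

```python
class EpsilonSeries:
    """Stores (time, value) samples, appending a new row only when the value
    differs from the last stored one by more than eps: lossy compression
    for slowly varying measurements."""
    def __init__(self, eps):
        self.eps = eps
        self.rows = []

    def record(self, t, value):
        if not self.rows or abs(value - self.rows[-1][1]) > self.eps:
            self.rows.append((t, value))

    def value_at(self, t):
        """Last stored value valid at time t (step interpolation)."""
        v = None
        for rt, rv in self.rows:
            if rt <= t:
                v = rv
            else:
                break
        return v
```

Small fluctuations within eps cost no disc space, while any change beyond eps is preserved with its timestamp, trading a bounded reconstruction error for storage and transmission savings.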
Marek Kvet, "Computational study of radial approach to public service system design with generalized utility," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868713.

This contribution deals with the problem of designing the optimal structure of public service systems in which the total discomfort of all users is to be minimized. Such combinatorial problems are often formulated as a weighted p-median problem described by a location-allocation model. Real instances are characterized by a considerably large number of possible service center locations, which may reach several thousand. In such cases, the exact algorithm embedded in universal optimization tools for the location-allocation model usually fails due to enormous computational time or huge memory demands. This weakness can be overcome by an approximate covering approach based on a radial formulation of the problem. This method constitutes a solving technique that can easily be implemented within a commercial IP solver and makes it possible to solve huge instances in admissible time. The generalized system utility studied in this paper follows the idea that an individual user's utility comes from more than one located service center; this extends previously developed methods, in which only the nearest center was taken as the source of an individual user's utility. We study and compare the exact and radial approaches in terms of their impact on solution accuracy and the computational time saved.
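For very small instances, the weighted p-median problem with generalized utility (each user served by its q nearest located centers) can be solved exactly by enumeration, which is useful as a reference when judging the radial approximation's accuracy. The sketch below is a plain reading of the abstract's objective, not the paper's IP model.

```python
from itertools import combinations

def generalized_cost(dist, centers, weights, q=2):
    """Total user discomfort when each user draws utility from its q nearest
    located centers: weighted sum of the q smallest distances.
    dist[u][c] is the distance from user u to candidate location c."""
    total = 0.0
    for u, w in enumerate(weights):
        nearest = sorted(dist[u][c] for c in centers)[:q]
        total += w * sum(nearest)
    return total

def best_p_median(dist, weights, p, q=2):
    """Exact solution by enumerating all p-subsets of candidate locations;
    only viable for tiny instances, since the search space grows
    combinatorially (the radial IP formulation is what scales)."""
    m = len(dist[0])
    return min(combinations(range(m), p),
               key=lambda cs: generalized_cost(dist, cs, weights, q))
```

With q = 1 this reduces to the classical weighted p-median objective; raising q spreads each user's utility over several centers, exactly the generalization the paper studies.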
M. Kocifaj, Michal Varga and N. Adamko, "Modelling interactions in agent based models of transportation terminals," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868701.

The paper deals with the modelling of interactions among mobile entities (e.g. vehicles, handling equipment, pedestrians) in agent-oriented simulation models of transportation terminals. The presented model of a transportation terminal is based on the ABAsim simulation architecture, which provides the means for efficient creation of flexible, highly maintainable simulation models of complex service systems. Two types of agents are utilised: managing agents, organised in a static hierarchical structure, and dynamic agents representing the intelligent entities of the modelled system. The simplified internal structure of a generic transportation terminal simulation model is presented, focusing on the modelling of the various transportation modes found in terminals (e.g. rail, road) and their mutual interactions. Interaction zones are utilised to define distinct areas where two or more transportation modes collide (such as a railway crossing or a pedestrian crossing). We propose a dedicated managing agent responsible for handling these zones and managing the interactions among mobile entities. According to defined rules, the agent grants or denies zone entry permissions to the agents responsible for modelling specific transportation modes. The presented concept has been developed for the generic transportation terminal simulation tool Villon.
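A managing agent that grants or denies zone entry according to defined rules might be sketched as follows; the class, modes, and rules here are hypothetical illustrations of the interaction-zone idea, not Villon's actual API.

```python
class ZoneManager:
    """Managing agent for one interaction zone: at most `capacity` entities
    may hold the zone at once, and pairs of conflicting transport modes
    (e.g. rail vs. pedestrian at a level crossing) are mutually exclusive."""
    def __init__(self, capacity, exclusive_pairs):
        self.capacity = capacity
        self.exclusive = {frozenset(p) for p in exclusive_pairs}
        self.holders = {}  # entity id -> transport mode

    def request_entry(self, entity, mode):
        """Grant entry only if capacity allows and no conflicting mode
        currently holds the zone."""
        if len(self.holders) >= self.capacity:
            return False
        for held_mode in self.holders.values():
            if frozenset((mode, held_mode)) in self.exclusive:
                return False
        self.holders[entity] = mode
        return True

    def leave(self, entity):
        self.holders.pop(entity, None)
```

A pedestrian agent asking to enter while a train holds the crossing is denied, and granted once the train leaves, which is the grant/deny protocol the dedicated managing agent implements between mode-specific agents.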
K. Lai, S. Samoil, S. Yanushkevich and G. Collaud, "Application of biometric technologies in biomedical systems," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868715.

This paper presents a survey of existing approaches to deploying biometric technologies in biomedical and health-care applications. It also describes recent examples and pilot projects implemented by the authors, experimenting with new sensors of biometric data and new uses of human behavioural biometrics beyond security in physical access systems. Finally, it outlines new and promising horizons for biometrics in natural, contactless control interfaces for surgical control, rehabilitation and accessibility applications.
I. Androulidakis, V. Levashenko and Elena N. Zaitseva, "Smart phone users: Are they green users?," The 10th International Conference on Digital Technologies 2014. DOI: 10.1109/DT.2014.6868682.

Smart phones have by now overwhelmed the mobile phone market, to the point that it is increasingly difficult to find an "old-type", classical feature phone. Although they offer a wealth of features and services, they are far more power hungry, requiring charging almost daily. At the same time, interesting questions arise about their users: are they aware of green practices, and do they follow them? In this work, using quantitative statistics and fuzzy decision tree analysis, we present preliminary findings from an empirical study of 313 users conducted to answer these questions, and we propose fuzzy decision rules based on the collected data.