Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529360
S. Samsonov, K. Tiampo
An advanced processing methodology is presented for extracting information from Earth observation data. Using imagery from various Synthetic Aperture Radar (SAR) satellites and the Multidimensional Small Baseline Subset Differential Interferometric SAR (MSBAS-DInSAR) processing methodology, we observed ground subsidence in Vancouver (British Columbia, Canada) and Seattle (Washington, USA). In Vancouver, subsidence at a rate of up to 2 cm/year was detected during 1995-2012 over a broad area, including the Vancouver International Airport. In Seattle, subsidence at a rate of up to 3 cm/year was detected during 2012-2015; between August 2014 and August 2015, unusually fast subsidence occurred beneath the city center. This subsidence is caused mainly by human activities, such as construction, urban infrastructure development and groundwater extraction, but also by natural processes, such as consolidation of sediments. Located in coastal areas, these cities may be affected by flooding if the ground subsides below sea level or during storm surges, and the risk of flooding increases as sea level continues to rise due to climate change. The image processing methodology described here allows near real-time monitoring of ground subsidence with high spatial resolution and high precision, thereby increasing the level of preparedness and mitigating risk.
Title: Monitoring of urban subsidence in coastal cities: Case studies Vancouver and Seattle
Published in: 2016 Third International Conference on Digital Information Processing, Data Mining, and Wireless Communications (DIPDMWC)
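The subsidence rates quoted above come from fitting displacement time series. The sketch below is only an illustration of that last step (not the authors' MSBAS-DInSAR processing chain), with synthetic data chosen to mimic roughly 2 cm/year of subsidence:

```python
# Illustrative sketch: estimate a linear subsidence rate from a displacement
# time series by ordinary least squares. The data below are synthetic.

def linear_rate(times, displacements):
    """Fit d = a + r*t by least squares and return the rate r."""
    n = len(times)
    mt = sum(times) / n
    md = sum(displacements) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times, displacements))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

years = [0, 1, 2, 3, 4]                   # years since first acquisition
disp_cm = [0.0, -2.1, -3.9, -6.0, -8.1]   # cumulative displacement in cm
                                          # (negative = subsidence)
rate = linear_rate(years, disp_cm)
print(round(rate, 2))  # -2.01, i.e. about 2 cm/year of subsidence
```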
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529380
A. Alyushin, S. Alyushin, V. Arkhangelsky
Recent research on pattern matching architectures has paid much attention to high-throughput implementations with on-the-fly reconfiguration on FPGAs as well as ASICs. In this paper, we propose a self-organizing approach that synthesizes a two-dimensional map (cluster) of simple processing units with lateral links for fast pattern matching of a one-dimensional input event. We suggest a scalable processor core with a heterogeneous cluster architecture. Experimental results show that the proposed architecture has advantages over previously developed architectures in terms of operating frequency, time delay and data bandwidth. On a state-of-the-art FPGA we achieve operating frequencies of 600-500 MHz for the processor core with a single cluster (input pattern of 8-512 bits, rule set of 64 bits) and 490-440 MHz for the processor core with multiple clusters (rule set of 128-4096 bits, input pattern of 512 bits). Each cluster is characterized by a low pipeline time delay of 3-5 clock cycles.
Title: Scalable processor core for high-speed pattern matching architecture on FPGA
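A software analogue of hardware pattern matching (not the authors' FPGA design) is the classic bit-parallel Shift-And algorithm: one state bit per pattern position, all updated in parallel each step, which is exactly the kind of per-cycle parallel state update a hardware cluster implements:

```python
# Bit-parallel (Shift-And) pattern matching: each bit of `state` records
# whether a prefix of the pattern ends at the current text position.

def shift_and(text: bytes, pattern: bytes):
    """Return start offsets of all occurrences of pattern in text."""
    m = len(pattern)
    mask = {}
    for i, b in enumerate(pattern):           # per-symbol bit masks
        mask[b] = mask.get(b, 0) | (1 << i)
    found, state, accept = [], 0, 1 << (m - 1)
    for pos, b in enumerate(text):
        # shift in a new prefix bit, keep only positions consistent with b
        state = ((state << 1) | 1) & mask.get(b, 0)
        if state & accept:
            found.append(pos - m + 1)
    return found

print(shift_and(b"abracadabra", b"abra"))  # [0, 7]
```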
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529392
B. Ivanc, B. Blažič
Critical infrastructure faces a changed threat landscape that requires progress in the understanding of highly sophisticated attacks. A reflection of this awareness is the upcoming technical documentation of umbrella organizations in critical infrastructure. Attack modeling is an important approach in the design stage of a system, and the attack tree is a structural technique for it. In terms of graphic presentation, attack trees are not complex and can be designed manually; they are also an important tool in recognizing threats and evaluating risks. The absence of comprehensive and systematic approaches to attack modeling is often reflected in rather generalized and inconsistent presentations of the subject, as well as in difficult transfer of attack modeling techniques into practice. The current lack of agreement and consistency in the development of structural attack models limits the transfer of concepts in the field of cyber-attacks to educational environments. The purpose of this paper is to present, through a clear practical example, a proposed development approach to attack modeling. We thus want to contribute to the implementation of attack modeling in practice, especially in cyber security education.
Title: Development approach to the attack modeling for the needs of cyber security education
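The attack-tree structure discussed above can be sketched in a few lines (node labels and feasibility values below are invented for illustration, not taken from the paper): internal nodes combine children with AND/OR semantics, and evaluation reports whether the root goal is reachable given which leaf attacks an adversary can perform.

```python
# Minimal attack tree: leaves carry feasibility, internal nodes are AND/OR gates.

class Node:
    def __init__(self, label, gate=None, children=(), feasible=False):
        self.label, self.gate = label, gate        # gate: "AND", "OR", or None (leaf)
        self.children, self.feasible = list(children), feasible

    def evaluate(self):
        if not self.children:                      # leaf: feasibility is given
            return self.feasible
        results = [c.evaluate() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

tree = Node("disrupt control system", "OR", [
    Node("phish operator credentials", feasible=True),
    Node("exploit and pivot", "AND", [
        Node("exploit VPN gateway", feasible=True),
        Node("bypass network segmentation", feasible=False),
    ]),
])
print(tree.evaluate())  # True: the phishing leaf alone satisfies the OR root
```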
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529371
N. Agrawal, Shailendra Singh
Email is a basic unit of Internet applications. The number of emails sent and received every day grows exponentially, but spam has become a very serious problem in the email communication environment. A number of content-based filtering techniques are available, namely text-based, image-based and many others, to filter spam mails. These techniques are costly in terms of computation and network resources because they require examining the whole message and computing over its entire content at the server. These filters are also not dynamic, whereas the nature of spam and spammers changes frequently. We propose an origin-based spam-filtering approach that works on the header information of the mail, regardless of its body content, and thereby optimizes network and server performance.
Title: Origin (dynamic blacklisting) based spammer detection and spam mail filtering approach
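A minimal sketch of the origin-based idea (not the authors' system; the header format and blacklist contents are invented for illustration): classify a message from header information alone, here the originating IP in a Received header, against a dynamically maintained blacklist, so the body never has to be examined.

```python
import re

# Dynamically updated set of known spam-origin IPs (illustrative values).
blacklist = {"203.0.113.7"}

def origin_is_blacklisted(raw_headers: str) -> bool:
    """Check only the header block: extract the origin IP and look it up."""
    m = re.search(r"Received: from \S+ \((?:\S+ )?\[([0-9.]+)\]\)", raw_headers)
    return bool(m) and m.group(1) in blacklist

headers = ("Received: from mail.example.org ([203.0.113.7])\n"
           "From: offers@example.org\nSubject: You won!\n")
print(origin_is_blacklisted(headers))  # True
```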
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529373
E. Sahragard, H. Farsi
The aim of image restoration is to obtain a higher-quality desired image from a degraded image. In this strategy, image inpainting methods fill the degraded or lost area of the image with appropriate information, in such a way that the result is not distinguishable by a casual observer unfamiliar with the original image. In this paper, images are degraded in different ways: 1) blurring and adding noise to the original image, and 2) losing a percentage of its pixels. The proposed method and competing methods are then applied to restore the desired image. Image restoration requires optimization methods; here, a linear restoration method based on the total variation regularizer is used. The variable of the optimization problem is decomposed, and the new optimization problem is solved using the augmented Lagrangian method. Experimental results show that the proposed method is faster and that the restored images have higher quality than those of other methods.
Title: Variable decomposition in total variant regularizer for denoising/deblurring image
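To make the total-variation (TV) idea concrete, here is a deliberately simple 1-D sketch (smoothed TV minimized by plain gradient descent, not the paper's variable-splitting / augmented Lagrangian solver): minimize ||u - f||^2 + lam * TV(u), which flattens noise while preserving sharp jumps.

```python
import math

def tv_denoise_1d(f, lam=0.3, eps=1e-3, steps=3000, lr=0.01):
    """Minimize sum((u-f)^2) + lam*sum(sqrt(d^2+eps)) by gradient descent."""
    u = list(f)
    n = len(u)
    for _ in range(steps):
        grad = [2.0 * (u[i] - f[i]) for i in range(n)]   # data-fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)               # d/dd of smoothed |d|
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - lr * grad[i] for i in range(n)]
    return u

noisy = [0.1, -0.05, 0.08, 1.1, 0.95, 1.02]   # a noisy step edge
u = tv_denoise_1d(noisy)
# Each side of the edge is pulled toward a flatter profile, while the jump
# between index 2 and 3 largely survives -- the hallmark of TV regularization.
```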
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529366
J. Sil, Jaydeep Sen
Removing inconsistency from a data set contributes significantly to improving classification accuracy. Inconsistency occurs when objects have the same attribute values but belong to different classes. Inconsistency is either inherent in the data set or appears during data preprocessing steps such as discretization, dimensionality reduction and missing value prediction. The aim of this paper is to develop a generalized inconsistency handling scheme based on probability distributions, unlike previous methods which are context dependent. We propose two algorithms that remove inconsistency by reassigning class labels to objects based on the statistical properties of the training data set. The ultimate goal of this work is to generate consistent data that provide superior classification accuracy compared to the original data set. The proposed methods are verified on the real-life intrusion-detection NSL-KDD data set.
Title: A generalized probabilistic approach for managing inconsistency to improve classifier accuracy
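A minimal sketch of the inconsistency-removal idea (using a simple majority rule as a stand-in, not the paper's two probabilistic algorithms): objects with identical attribute values but different class labels are relabeled to the most frequent class observed for that attribute pattern.

```python
from collections import Counter, defaultdict

def remove_inconsistency(rows):
    """rows: list of (attributes_tuple, class_label); returns a relabeled copy."""
    by_pattern = defaultdict(Counter)
    for attrs, label in rows:                 # count labels per attribute pattern
        by_pattern[attrs][label] += 1
    majority = {attrs: c.most_common(1)[0][0] for attrs, c in by_pattern.items()}
    return [(attrs, majority[attrs]) for attrs, _ in rows]

data = [((1, 0), "normal"), ((1, 0), "attack"), ((1, 0), "normal"),
        ((0, 1), "attack")]
print(remove_inconsistency(data))
# all three (1, 0) objects now carry the majority label "normal"
```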
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529369
Xiefeng Cheng, Kexue Sun, Xuejun Zhang, Chenjun She
A one-dimensional heart sound signal is converted into a two-dimensional phonocardiogram (2D-PCG), from which image features of heart sounds are extracted using image processing techniques. First, we apply wavelet noise reduction and amplitude normalization to the one-dimensional heart sounds, then convert them into preprocessed 2D-PCGs that are uniform and comparable. We then analyze the image features of the 2D-PCG that characterize the physiological information of heart sounds, combining their physiological significance with the image features and focusing on the vertical-to-horizontal coordinate ratio and the inflection-point sequence code. To classify heart sound signals quickly, the paper introduces a new concept: the degree of heart sound signal certainty (HSSCD). Efficiency and feasibility are verified through heart sound acquisition, classification and identification experiments. Finally, we explore classification and identification of 2D-PCGs using the Euclidean distance and the degree of certainty, based on the coordinate ratio, the inflection-point sequence code and wavelet coefficients. Experimental results show that all three features can achieve classification and recognition of the 2D-PCG, with the inflection-point sequence code yielding the highest recognition rate. The method of 2D-PCG classification and identification based on two-dimensional image processing is feasible, practically applicable and has broad application prospects.
Title: Feature extraction and recognition methods based on phonocardiogram
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529394
Aigerim Ismukhamedova, Yelena Satimova, A. Nikiforov, N. Miloslavskaya
Currently, great attention is paid to protecting operating systems (OS) against malware, viruses, etc. Wi-Fi is the most popular and in-demand way of connecting to the Internet, used in many companies and by individuals. It is a widespread technology that in different situations requires additional software for a protected installation. Wi-Fi networks can be cracked, and personal data can be stolen or compromised. This paper considers a process of Wi-Fi network hacking for educational purposes; some hacking techniques are shown as implemented in laboratory work.
Title: Practical studying of Wi-Fi network vulnerabilities
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529377
E. Klyshinsky, N. Kochetkova
The main task in preliminary natural-language text processing is tagging and disambiguating texts; hence, most modern language tools are designed for these purposes. In our projects, we carry out shallow syntactic analysis of untagged texts. For this purpose we developed a new query language based on regular expressions, which allows writing queries that take words' ambiguity into account.
Title: A tool for morphologically ambiguous text processing
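An illustrative sketch of querying morphologically ambiguous text with regular expressions (the token serialization and query syntax here are invented, not the authors' language): each token keeps all its candidate tags, tokens are serialized as word/tag bundles, and a regex over that serialization matches even though the ambiguity is unresolved.

```python
import re

# Each token: (word, set of candidate part-of-speech tags) -- ambiguity kept.
sentence = [("the", {"DET"}), ("saw", {"NOUN", "VERB"}), ("cut", {"NOUN", "VERB"})]

def serialize(tokens):
    """Render tokens as 'word/TAG1|TAG2 ...' so a regex can scan them."""
    return " ".join("{}/{}".format(w, "|".join(sorted(tags))) for w, tags in tokens)

# Query: a determiner followed by a token that could be a noun.
query = re.compile(r"\S+/DET \S+/(?:\S*\|)?NOUN(?:\|\S*)?")

print(bool(query.search(serialize(sentence))))  # True: "saw" may be a NOUN
```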
Pub Date: 2016-07-06 | DOI: 10.1109/DIPDMWC.2016.7529391
Andrej Jerman Blažič, Primoz Cigoj, B. Blažič
The research presented in this paper contributes to the advancement of educational game design through the provision of empirical experience close to a real-life environment in the selected field: digital forensics. The game was developed and designed to incorporate learnability properties originating from a recently upgraded serious game taxonomy. These properties were evaluated through a student survey and educators' observations.
Title: Serious game design for digital forensics training