Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025949
Shahnawaz Qureshi, S. Vanichayobon
In this paper, we evaluate three machine learning techniques, Random Forest, Bagging, and Support Vector Machine, combined with time-domain features for classifying sleep stages from single-channel EEG. Whole-night polysomnograms from 25 subjects were recorded and scored according to the R&K standard. The proposed process analyzed the EEG signal from the C4-A1 channel for sleep staging. Automatic and manual scoring results were compared on an epoch-by-epoch basis. A total of 96,000 30-second sleep EEG epochs were used for performance evaluation. The epoch-by-epoch assessment was made by classifying the EEG epochs into six stages (W/S1/S2/S3/S4/REM) according to both the proposed method and manual scoring. Results show that the Random Forest classifier achieves overall accuracy, specificity, and sensitivity of 97.73%, 96.3%, and 99.51%, respectively.
{"title":"Evaluate different machine learning techniques for classifying sleep stages on single-channel EEG","authors":"Shahnawaz Qureshi, S. Vanichayobon","doi":"10.1109/JCSSE.2017.8025949","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025949","url":null,"abstract":"In this paper, we propose 3 different machine learning techniques such as Random Forest, Bagging and Support Vector Machine along with time domain feature for classifying sleep stages based on single-channel EEG. Whole-night polysomnograms from 25 subjects were recorded employing R&K standard. The evolved process investigated the EEG signals of (C4-A1) for sleep staging. Automatic and manual scoring results were associated on an epoch-by-epoch basis. An entire 96,000 data samples 30s sleep EEG epoch were calculated and applied for performance evaluation. The epoch-by-epoch assessment was created by classifying the EEG epochs into six stages (W/S1/S2/S3/S4/REM) according to proposed method and manual scoring. Result shows that Random Forest classifiers achieve the overall accuracy; specificity and sensitivity level of 97.73%, 96.3% and 99.51% respectively.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"28 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82759120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025931
Anurak Thungtong
Automated ECG signal processing can assist in diagnosing several heart diseases. Many R-peak detection methods have been studied because the accuracy of R-peak detection significantly affects the quality of subsequent ECG feature extraction. Two important steps in an R-peak detection algorithm that draw researchers' attention are the preprocessing and thresholding stages. Among several methods, the wavelet transform is widely used for removing noise in the preprocessing stage. Various proposed algorithms require prior knowledge of the frequency spectrum of the signal under consideration in order to select the wavelet detail coefficients for the reconstruction process. Moreover, parameter fine-tuning is generally involved in threshold selection to achieve high detection accuracy. As a result, it may be difficult to apply these methods to general ECG data sets. Accordingly, we propose an automatic, parameter-free method that optimally selects the appropriate detail components for wavelet reconstruction as well as an adaptive threshold. The proposed algorithm analyzes the probability density function of the processed ECG signal. The algorithm was validated on the MIT-BIH database and produced an average sensitivity of 99.63% and specificity of 99.78%, which is in the same range as previously proposed approaches.
{"title":"A robust algorithm for R peak detection based on optimal Discrete Wavelet Transform","authors":"Anurak Thungtong","doi":"10.1109/JCSSE.2017.8025931","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025931","url":null,"abstract":"Automated ECG signal processing can assist in diagnosing several heart diseases. Many R peak detection methods have been studied because the accuracy of R peak detection significantly affects the quality of subsequent ECG feature extraction. Two important steps in R peak detection algorithm that draw attention over researchers are the preprocessing and thresholding stages. Among several methods, wavelet transform is a widely used method for removing noise in the preprocessing stage. Various proposed algorithms require prior knowledge of frequency spectrum of the signal under consideration in order to select the wavelet detail coefficients in the reconstruction process. Moreover, parameter fine tuning is generally involved in threshold selection to accomplish high detection accuracy. As a result, it may be difficult to utilize these methods for general ECG data sets. Accordingly, we propose an automatic and parameter free method that optimally selects the appropriate detail components for wavelet reconstruction as well as the adaptive threshold. The proposed algorithm employs the analysis of probability density function of the processed ECG signal. The validation of the algorithm was performed over the MIT-BIH database and has produced an average sensitivity of 99.63% and specificity of 99.78% which is in the same range as the previously proposed approaches.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"18 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73620870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025929
Nicha Piemkaroonwong, U. Watchareeruetai
This paper proposes a method that separates the region of each leaf from an image of occluded leaves and produces a set of single-leaf images as output. To identify the region of a single leaf, intersection points and a direction field are required. An intersection point, defined as a concave point between leaves, is used as the starting position of the leaf estimation process. The direction field, which describes the average direction of edges in a local area, is used to guide the estimation process. The leaf separation process then applies the result of the leaf estimation process to create the output. Experimental results show that 71.23% of the test leaf images were correctly separated from each other, with a segmentation accuracy of 88.80%.
{"title":"Separation of occluded leaves using direction field","authors":"Nicha Piemkaroonwong, U. Watchareeruetai","doi":"10.1109/JCSSE.2017.8025929","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025929","url":null,"abstract":"This paper proposes a method that separates the region of each leaf from an image of occluded leaves and produces a set of single-leaf images as an output. To identify the region of a single leaf, intersection points and direction field are required. An intersection point, which is defined as a concave point between leaves, is used as the starting position of leaf estimation process. Direction field, which describes the average direction of edges in a local area, is used to guide the estimation process. Leaf separation process applies the result of leaf estimation process to create an output. Experimental results show that 71.23% of testing leaf images were correctly separated from each other with a segmentation accuracy of 88.80%.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"143 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78589159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025954
Moragot Kandee, P. Boonbrahm, Valla Tantayotai
This paper investigates the use of various waveforms generated by a mathematical model on a haptic device. Realistic virtual pulse measurement and diagnosis can be performed using a haptic device with generated pulse waveforms, an Augmented Reality (AR) environment, and a mannequin. The aim of this work is to propose a mathematical model for generating pulse patterns for different types of abnormal pulse waves and to test them on the Phantom Omni device in an AR environment. The radial arterial waveforms were generated by setting pulse parameters and superimposing sine waves to form new waveforms representing various diseases. The system can simulate the radial arterial pulse waves of several diseases. This modeling technique can be used to train nursing or health-sciences students in classifying the various types of diseases related to the pulse waveform.
{"title":"Modeling realistic virtual pulse of radial artery pressure waveform using haptic interface","authors":"Moragot Kandee, P. Boonbrahm, Valla Tantayotai","doi":"10.1109/JCSSE.2017.8025954","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025954","url":null,"abstract":"This paper shows an investigation of the ability of using various waveform generated by mathematical model on haptic device. Realistic virtual pulse measurement and diagnostic can be done using haptic device with pulse generated waveform, Augmented Reality (AR) environment and mannequin. The aim of this work is to propose a mathematical model for generating pulse pattern in different type of abnormal pulse waves and test them on the Phantom Omni device under AR environment. The radial arterial waveforms were generated by the setting of pulse parameters and superimposed sine waves to make the new waveforms representing various diseases. The system can simulate the radial arterial pulse waves of some diseases. This modeling technique can be used in training the nursing or health sciences students on the ability to classify various type of diseases that related to the pulse waveform.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"115 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79344052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025908
Chea Sowattana, Wantanee Viriyasitavat, A. Khurat
Vehicular Ad-hoc Networks (VANETs) are a research area focused on improving road safety and traffic management. However, VANETs remain vulnerable to different kinds of security attacks because of their infrastructure-less networking. The Sybil attack is a well-known attack in VANETs: an attacker forges multiple nodes with different identities to broadcast fake messages and manipulate road traffic and information. In this paper, we propose a distributed detection mechanism that uses neighborhood information. In our approach, a node is considered a Sybil node if its position lies inside the intersection of the communication ranges of two nodes but it is not acknowledged by one of them. Each vehicle periodically exchanges information about its neighbors via beacon messages. The neighbor information received from each neighbor is then used to vote on whether each of the receiver's neighbors is a Sybil node. Simulations on different test cases were performed to observe the performance of our algorithm in terms of detection rate and false positive rate. The results show that the detection rate increases in scenarios where the number of surrounding neighbors is high.
{"title":"Distributed consensus-based Sybil nodes detection in VANETs","authors":"Chea Sowattana, Wantanee Viriyasitavat, A. Khurat","doi":"10.1109/JCSSE.2017.8025908","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025908","url":null,"abstract":"Vehicular Ad-hoc Networks (VANETs) is a research area focusing on improving road safety and traffic management. However, VANETs are still vulnerable to different kind of security attacks due to its infrastructure-less networking. Sybil Attack is a well-known attack in VANET. It forges multiple nodes with different identities to broadcast fake messages to manipulate the road traffic and information. In this paper, we propose a distributed detection mechanism using the neighborhood information. In our approach, a node is considered as a Sybil node if its position is inside the intersected area of two communication nodes, but it does not acknowledge by one of them. Each vehicle exchanges the information of their neighbors periodically via beacon message. The received neighbor information, from each neighbor, will be used to vote on each of the receiver node's neighbor whether they are Sybil. Simulation on different test cases are performed to observe the performance of our algorithm in term of its detection rate and false positive rate. The result depicts the increase of detection rate in the scenario where the number of surrounding neighbors is high.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"30 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84199587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025939
Kanokwan Rungreangsuparat, S. Kitisin, K. Sripanidkulchai
Advances in storage technology have made it easy to store large volumes of data, and standards for data storage have been established so that stored data can be used widely. The World Health Organization defines a number of standard medical procedures that cover all treatments, without classifying the procedures by disease; the selection of medical procedures is based on a patient's symptoms. Therefore, if the sets of medical procedures can be identified, we may be able to infer the patient's diseases, or the information can be used for disease surveillance. In addition, diabetes and hypertension are silent killers that threaten many Thai people and lead to many serious diseases. This research identified sets of medical procedures related to diabetes and/or hypertension using the C4.5 and Naive Bayes algorithms. The results showed that C4.5 identified sets of medical procedures related to diabetes and/or hypertension more effectively than the Naive Bayes algorithm.
{"title":"The classification of sets of medical procedures used in the treatment of Diabetes and/or Hypertension","authors":"Kanokwan Rungreangsuparat, S. Kitisin, K. Sripanidkulchai","doi":"10.1109/JCSSE.2017.8025939","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025939","url":null,"abstract":"The advancement of technology to support data storage is easy to store with large volumes of data. In order to make data storage to be used extensively, the standard of data storage is formed. The World Health Organization defines numbers of the standard medical procedures to cover the all treatments without classifying any medical procedures by diseases. The selection of medical procedures is based on a patient's symptoms. Therefore, if the sets of medical procedures can identified, we may know the diseases of the patient or it can be used in disease surveillance. In addition, diabetes and hypertension are silent killers that have been threatening numbers of Thai people and also lead to many serious diseases. This research identified sets of medical procedures related to diabetes and/or hypertension using C4.5 and Naive Bayes algorithms. The results showed that C4.5 could identify sets of medical procedures related to Diabetes and/or Hypertension more effectively than the Naive Bayes algorithm.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"68 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76290406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025959
Jakapong Boonyai, Suwanna Rasmequan
Vertebral pose segmentation is an important factor in diagnosing diseases such as osteoporosis, osteopenia, and scoliosis. Low-radiation X-ray images are often used to diagnose such diseases in order to reduce the radiation overdose risk that patients may face from a series of treatments, but this leads to low accuracy in vertebral pose detection. In this paper, we propose a more generalized technique to improve automated segmentation of vertebral poses in low-quality images. The proposed method has three main steps. First, in the pre-processing step, auto-cropping, multi-thresholding, and Canny edge detection are applied to find the vertebral bone structure in the original image. Second, feature analysis and gravity force are used to find the region of interest, i.e., the area of each pose. Finally, colormaps, intensity diagnosis, and angle analysis are adopted to segment each vertebral pose from the candidate areas obtained in the second step. Experimental results compared against ground truth show that the proposed approach can estimate vertebral poses with a precision of 79.61% and a recall of 77.11%.
{"title":"Vertebral pose segmentation on low radiation image using Convergence Gravity Force","authors":"Jakapong Boonyai, Suwanna Rasmequan","doi":"10.1109/JCSSE.2017.8025959","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025959","url":null,"abstract":"Vertebral pose segmentation is an important factor in diagnosing diseases such as osteoporosis, osteopenia and scoliosis. Low radiation X-ray images are often used to diagnose such diseases. This has been done to reduce patients risk exposure of over dose radiation which may cause from a series of treatments. In this respect, it led to a low accuracy in vertebral pose detection. In this paper, we proposed to improve the automate segmentation of low quality image of vertebral pose with a more generalized technique. In the proposed method, there are three main steps. Firstly, in the pre-processing step, Auto Cropped, Multi-Threshold and Canny Edge Detection are applied to find the vertebral bone structure from the original image. Secondly, Feature Analysis and Gravity Force were used to find the region of interest or the area of each pose. Finally, Colormaps, Intensity Diagnosis and Angle Analysis are adopted to segment each vertebral pose from candidate areas retrieved from second step. The experimental results which were compared with ground truth shown that the proposed approach can estimate vertebral pose with Precision at 79.61% and Recall at 77.11%.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"16 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87688592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025938
Mathawan Jaiwai, Usa Sammapun
In software development, requirements, normally written in natural language, are documents that specify what users want in software products. Software developers then analyze these requirements to create domain models represented as UML diagrams in an attempt to understand what users need in the software products. These domain models are usually converted into design models and finally carried over into classes in source code, so domain models have an impact on the final software products. However, creating correct domain models can be difficult when software developers are not skilled, and even for skilled developers, wading through a large set of requirements to create domain models takes time and may introduce errors. Therefore, researchers have studied various approaches that apply natural language processing techniques to transform requirements written in natural language into UML diagrams, but that work focuses on requirements written in English. This paper proposes an approach to process requirements written in Thai and extract UML class diagrams using natural language processing techniques. The UML class diagram extraction is based on transformation rules that identify classes and attributes from requirements. The results are evaluated with recall and precision against ground truth created by humans. Future work includes identifying operations and relationships from requirements to complete the class diagram extraction. Our research should benefit Thai software developers by reducing the time spent on requirement analysis and by helping novice developers create correct domain models represented as UML class diagrams.
{"title":"Extracting UML class diagrams from software requirements in Thai using NLP","authors":"Mathawan Jaiwai, Usa Sammapun","doi":"10.1109/JCSSE.2017.8025938","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025938","url":null,"abstract":"In software development, requirements, normally written in natural language, are documents that specify what users want in software products. Software developers then analyze these requirements to create domain models represented in UML diagrams in an attempt to comprehend what users need in the software products. These domain models are usually converted into design models and finally carried over into classes in source code. Thus, domain models have an impact on the final software products. However, creating correct domain models can be difficult when software developers are not skilled. Moreover, even for skilled developers, when requirements are large, wading through all requirements to create domain models can take times and might result in errors. Therefore, researchers have studied various approaches to apply natural language processing techniques to transform requirements written in natural language into UML diagrams. Those researches focus on requirements written in English. This paper proposes an approach to process requirements written in Thai to extract UML class diagrams using natural language processing techniques. The UML class diagram extraction is based on transformation rules that identify classes and attributes from requirements. The results are evaluated with recall and precision using truth values created by humans. Future works include identifying operations and relationships from requirements to complete class diagram extraction. Our research should benefit Thai software developers by reducing time in requirement analysis and also helping novice software developers to create correct domain models represented in UML class diagram.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"15 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89596192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025950
Kritwara Rattanaopas
Big data is a popular topic in cloud computing research. The main characteristics of big data are volume, velocity, and variety, which are difficult to handle with traditional software and methods. Hadoop is an open-source framework developed to provide solutions for several domains of big data problems. For big data analytics, the MapReduce framework is the main engine of a Hadoop cluster and is widely used today; it performs batch-oriented processing. Apache also developed an alternative engine called Tez, which supports interactive queries and does not write temporary data to HDFS. In this paper, we focus on the performance comparison between MapReduce and Tez. We also investigate the performance of these two engines with compression of input files and map output files: bzip2 is used to compress input files and snappy is used for map output files. Word-count and terasort benchmarks are used in our experiments. For the word-count benchmark, the results show that the Tez engine always has a shorter execution time than the MapReduce engine for both compressed and uncompressed data, reducing execution time by up to 39% compared with the MapReduce engine. In contrast, for the terasort benchmark, the Tez engine usually has up to 13% longer execution time than the MapReduce engine. The results also show that compressing map output files with snappy improves execution time for both benchmarks.
{"title":"A performance comparison of Apache Tez and MapReduce with data compression on Hadoop cluster","authors":"Kritwara Rattanaopas","doi":"10.1109/JCSSE.2017.8025950","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025950","url":null,"abstract":"Big data is a popular topic on cloud computing research. The main characteristics of big data are volume, velocity and variety. These characteristics are difficult to handle by using traditional softwares and methods. Hadoop is open-source framework software which was developed to provide solutions for handling several domains of big data problems. For big data analytic, MapReduce framework is a main engine of Hadoop cluster and widely used nowadays. It uses a batch oriented processing. Apache also developed an alternative engine called “Tez”. It supports an interactive query and does not write temporary data into HDFS. In this paper, we focus on the performance comparison between MapReduce and Tez. We also investigate the performance of these two engines with the compression of input files and map output files. Bzip is a compression algorithm used for input files and snappy is used for map output files. Word-count and terasort benchmarks are used in our experiments. For the word-count benchmark, the results show that Tez engine always has better execution-time than MapReduce engine for both of compressed data or non-compressed data. It can reduce an execution-time up to 39% comparing with the execution time of MapReduce engine. In contrast, the results show that Tez engine usually has higher execution-time than MapReduce engine up to 13% for terasort benchmark. The results also show that the performance of compressing map output files with snappy provides better performance on execution time for both benchmarks.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"42 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90120331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01. DOI: 10.1109/JCSSE.2017.8025956
Udomchai Saisara, P. Boonbrahm, Achara Chaiwiriya
More than 5% of Thai people have strabismus. Strabismus is commonly known as cross-eye or wall-eye because the visual axes of the two eyes are not parallel; amblyopia is a cause of strabismus in children. Strabismus can be completely cured if screening is performed at an early stage. Current strabismus screening includes methods such as the Hirschberg test, the cover test, and the Krimsky test. Screening children for strabismus is difficult and takes a lot of time in a special examination room. This research intends to develop a computer system that assists strabismus screening by combining computer games and eye-tracking devices, so that the screening results will be more accurate. The proposed screening technique requires less time and is easy to use, improving efficiency and reducing the time needed for strabismus screening.
{"title":"Strabismus screening by Eye Tracker and games","authors":"Udomchai Saisara, P. Boonbrahm, Achara Chaiwiriya","doi":"10.1109/JCSSE.2017.8025956","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025956","url":null,"abstract":"More than 5% of Thai people have strabismus. Strabismus is known as cross-eyed or wall-eyed because the visual field angle of two eyes is not parallel. The amblyopia disease is the cause of strabismus in kids. Strabismus can be completely cured if the strabismus screening can be made in early stage. Currently, strabismus screening includes methods such as Hirschberg test, cover test and Krimsky test, and etc. The strabismus screening in kids is difficult and takes a lot time in special room. This research intend to develop a computer system to assist strabismus screening using the combination of computer games and eye tracking devices so that the screening results will be more accurate and exact. This screening technique requires shorter time and it is easy to use, so it is better in terms of efficiency and reducing time for strabismus screening.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"14 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82921637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}