Integration of Gesture Control with Large Display Environments Using SAGE2
Thanatorn Boonnak, V. Visoottiviseth, J. Haga, Dylan Kobayashi, J. Leigh
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457323
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
With the ever-increasing amount of information, data scientists continue to explore new technologies that will help them access and interact with data in a variety of domains and situations. One technology of particular interest is Scalable Resolution Shared Displays (SRSD) that use a web-based collaboration middleware called SAGE2. These display systems are ideal for exploring large data sets in data-intensive applications; however, interacting with content on these large walls intuitively and rapidly remains a challenge to be addressed. This work introduces a prototype user interface based on a simple, hand gesture-based approach to control content in a SAGE2 workspace using the Leap Motion controller. Our implementation and preliminary testing of the interface demonstrate its potential as a more natural interaction modality when exploring big data sets.
A Novel Automatic Sentiment Summarization from Aspect-based Customer Reviews
T. A. Tran, Jarunee Duangsuwan, W. Wettayaprasit
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457346
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
Online reviews play an important role in helping companies or governments improve product quality and services. However, these reviews grow day by day, and it is difficult to go through such a volume of reviews and summarize the important information manually. We propose a novel Automatic Sentiment Summarization (ASS) system with two phases. The first phase is an aspect-based representation that captures ranked knowledge about aspect opinions, computed from frequencies, polarity, and opinion strength. The second phase is review summary generation, which automatically produces a review summary by ranking aspects based on this aspect information. The generated summary is made more coherent by applying a natural language generation technique. Furthermore, the proposed ASS system allows users to add new reviews in the same domain in order to update the generated summary. The experiments used benchmark sentiment aspect datasets such as customer product/service reviews for Canon, Nikon, and Laptop. The summaries generated by the proposed ASS system perform well compared with other extractive and abstractive summarization systems.
Classification of Dhamma Esan Characters By Transfer Learning of a Deep Neural Network
Narit Hnoohom, Sumeth Yuenyong
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457361
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
We present an image classification of Dhamma Esan characters by fine-tuning the Inception V3 deep neural network trained on the ImageNet dataset. Dhamma Esan is a traditional alphabet used in the north-eastern region of Thailand, primarily written on Corypha leaves for the purpose of recording Buddhist scriptures. Preservation of these historical documents calls for the ability to classify the characters of the alphabet in order to facilitate digital indexing and searching, as well as assist anyone trying to read them. Our dataset consists of over 70,000 Dhamma Esan character images, much larger than any previous work. The result of ten-fold cross-validation showed that our model had 100% accuracy for four folds, and 99.99% for the other six folds. The previous best accuracy reported was 97.77%. We also developed a Dhamma Esan character classification web service where users can upload images of characters and get immediate classification results as well as mapping to the modern Thai alphabet.
Development of Low-Cost in-the-Ear EEG Prototype
Chanavit Athavipach, S. Pan-Ngum, P. Israsena
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457324
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
This study focused on building a low-cost wearable EEG device for multiple hours of usage. A device well suited to long-period monitoring is in-the-ear EEG, which has desirable wearable characteristics: with the electrode in an earbud, it is relatively simple to install and wear. The in-the-ear prototype in this study was built from earphone rubber as an earpiece and silver-adhesive fabric as electrodes, with raw materials costing 3 dollars per piece. The impedance measurements of the in-the-ear EEG are comparable to those of commercial electrodes. Signal verification was conducted by teeth clenching, ASSR, MMN, and correlation, and the results show a strong correlation between the in-the-ear EEG and the T7/T8 signals (γ-coefficient = 0.912).
Exploiting Building Blocks in Hard Problems with Modified Compact Genetic Algorithm
Kamonluk Suksen, P. Chongstitvatana
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457386
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
In Evolutionary Computation, good substructures that are combined into good solutions are called building blocks; in this context, building blocks are common structures of high-quality solutions. The compact genetic algorithm is an extension of the genetic algorithm that replaces the latter’s population of chromosomes with a probability distribution from which candidate solutions can be generated. This paper describes an algorithm that exploits building blocks with the compact genetic algorithm in order to solve difficult optimization problems, under the assumption that the building blocks are already known. The main idea is to update the probability vector over groups of bits that represent building blocks, thus avoiding disruption of the building blocks. Comparisons of the new algorithm with a conventional compact genetic algorithm on trap-function and traveling salesman problems indicate the utility of the proposed algorithm. It is most effective when the problem instances have common structures that can be identified as building blocks.
Bandwidth Reservation Approach to Improve Quality of Service in Software-Defined Networking: A Performance Analysis
Amirhossein Moravejosharieh, Michael J. Watts, Yu Song
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457339
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
Software-Defined Networking (SDN) is a new networking paradigm designed to resolve traditional IP network shortcomings by breaking the vertical integration of the control and data planes. SDN separates the network control logic from the underlying routers and switches and introduces the ability to program the network. Bandwidth reservation is an approach offered in SDN-enabled networks to guarantee relatively high Quality of Service (QoS) for different types of media, e.g., video, audio, or data. Although this approach has proven worth considering in SDN, there are still concerns about its applicability in relatively large networks. In this paper, we evaluate the performance of the bandwidth reservation approach in a relatively large-scale SDN-enabled network, in particular its suitability as the number of users demanding reserved bandwidth grows. The results of our simulation study show that bandwidth reservation is beneficial only when the number of users asking for guaranteed bandwidth is relatively small compared with the other users. Moreover, higher end-to-end QoS can be achieved as an immediate outcome of deploying the bandwidth reservation approach for a particular type of traffic flow, but at the cost of a negative impact on the achievable throughput of other traffic types.
Transfer Learning for Leaf Classification with Convolutional Neural Networks
H. Esmaeili, T. Phoka
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457364
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
Convolutional Neural Networks (CNNs) are taking a big role in image classification, but fully training a CNN on images takes plenty of time and requires a very large data set. This paper focuses on transfer learning, a technique that takes a pre-trained model, e.g., an Inception, ResNet, or MobileNets model, and retrains it from the existing weights for a new classification problem. The retraining technique drastically decreases the time spent in the training process, and far fewer images are required to yield highly accurate trained networks. This paper considers the problem of leaf image classification, for which existing approaches take much effort to choose various types of image features for classification; this also introduces bias by choosing some features and ignoring the other information in images. This paper conducts experiments comparing the accuracy of traditional leaf image classification using image processing techniques and CNNs with transfer learning. The results show that, without much knowledge of image processing, leaf image classification can be achieved with high accuracy using the transfer learning technique.
A Comparative Study on Various Deep Learning Techniques for Thai NLP Lexical and Syntactic Tasks on Noisy Data
Amarin Jettakul, Chavisa Thamjarat, Kawin Liaowongphuthorn, Can Udomcharoenchaikit, P. Vateekul, P. Boonkwan
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457368
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
In Natural Language Processing (NLP), three fundamental tasks are tokenization, which belongs to the lexical level, and Part-of-Speech (POS) tagging and Named Entity Recognition (NER), which belong to the syntactic level. Recently, many deep learning studies have shown success in a wide range of domains. However, there has been no comparative study for Thai NLP suggesting the most suitable technique for each task. In this paper, we provide a performance comparison of various deep learning-based techniques on these three NLP tasks and study the effect of synthesized out-of-vocabulary (OOV) words; an OOV handling algorithm based on Levenshtein distance is provided, because most existing works rely on the set of vocabularies seen in the trained model and do not fit noisy text in real use cases. Our three experiments were conducted on BEST 2010 I2R, a standard Thai NLP corpus, using the F1 measure, with different percentages of synthesized noise. For tokenization, the results show that Synthai, a joint bidirectional LSTM, has the best performance. For POS tagging, a bidirectional LSTM with CRF obtained the best performance, and for NER, a variational bidirectional LSTM with CRF outperformed the other methods. Finally, noise reduces the performance of all algorithms on these foundation tasks, and the results show that our OOV handling technique can improve performance on noisy data.
An Approach to Bézier Curve Approximation by Circular Arcs
Taweechai Nuntawisuttiwong, N. Dejdumrong
Pub Date : 2018-07-01  DOI: 10.1109/JCSSE.2018.8457180
Published in: 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)
This paper presents a method to approximate Bézier curves by a sequence of arc splines with an inscribed regular polygon. The proposed algorithm uses an arc length approximation method to subdivide a Bézier curve into subcurves of equal arc length. Each subcurve is interpolated with a line segment, which forms a side of the inscribed polygon of the curve. Curve segments are then clustered into circular arcs by evaluating the interior angles of the inscribed polygon. This method represents a Bézier curve with the minimum number of circular arcs and acceptable error. The experimental results demonstrate the similarity of the original curve and the approximated arc spline. The approximated arc spline produced by the proposed algorithm is compatible with both vector and raster graphic formats.