Pub Date : 2017-07-01 DOI: 10.1109/JCSSE.2017.8025927
Suhardi, N. Kurniawan, M. I. W. Pramana, Jaka Semibiring
The use of services computing technology supports the development of modern service industries. Services computing applies computing technology to build IT services systems capable of supporting and developing service innovation. This motivates organizations to deliver services computing systems in the form of IT services systems, which requires a solid understanding of how to identify, design, build, implement, and run such systems. This paper outlines the development of a services computing systems engineering framework that can serve as a foundation for building services computing systems. The contribution of this paper is to enhance knowledge of services computing from an engineering perspective.
{"title":"Developing a framework for services computing systems engineering","authors":"Suhardi, N. Kurniawan, M. I. W. Pramana, Jaka Semibiring","doi":"10.1109/JCSSE.2017.8025927","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025927","url":null,"abstract":"The use of services computing technology provides support to the development of modern services industries. Services computing utilizes and implements computing technology to develop IT services systems that are capable of supporting and developing services innovation. This condition motivates the organization to be able to present the services computing systems in the form of IT services systems. It needs a strong understanding of identifying, designing, building, implementing, and running the systems. This paper outlines the development of services computing systems engineering framework that can be used as the foundation for building services computing systems. The contribution of this paper is to enhance the knowledge of services computing from the engineering perspective.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"22 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83111291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-07-01 DOI: 10.1109/JCSSE.2017.8025921
Arnon Jirakittayakorn, Teeranai Kormongkolkul, P. Vateekul, Kulsawasd Jitkajornwanich, S. Lawawirojwong
Ocean surface current prediction is at the core of various marine operational routines, including disaster monitoring, oil-spill backtracking, sea navigation, and search-and-rescue operations. More accurate prediction can yield significant improvement to the overall system. Most existing short-term prediction methods apply numerical models based on physical processes. In this paper, we propose an alternative approach that predicts the surface current with a temporal k-nearest-neighbor technique, forecasting up to 24 hours in advance. Our model incorporates several pre-processing steps, e.g., feature extraction and data transformation, to capture the seasonal and temporal characteristics of the HF (high-frequency) radar observation data. The model was implemented, validated, and compared with existing models on the same historical datasets collected from HF coastal radar stations along the Gulf of Thailand. Our experimental results indicate that the proposed model achieves the highest accuracy among all methods compared, including ARIMA, exponential smoothing, and LSTM, and satisfies the oil-spill backtracking application requirements. In addition, we found that our system requires little to no maintenance and can easily be adapted to other coastal radar locations where historical HF radar observations are limited.
{"title":"Temporal kNN for short-term ocean current prediction based on HF radar observations","authors":"Arnon Jirakittayakorn, Teeranai Kormongkolkul, P. Vateekul, Kulsawasd Jitkajornwanich, S. Lawawirojwong","doi":"10.1109/JCSSE.2017.8025921","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025921","url":null,"abstract":"Ocean surface current prediction is at the core of various marine operational routines, including disaster monitoring, oil-spill backtracking, sea navigation and search-and-rescue operations. More accurate prediction can yield significant improvement to the overall system. Most existing short-term prediction methods applied numerical models based on physical processes. In this paper, we propose an alternative approach in predicting the surface current by utilizing temporal k-nearest-neighbor technique, which can predict the future surface current up to 24 hours in advance. Our model incorporates several pre-processing methods, e.g. feature extraction and data transformation, in order to capture the seasonal and temporal characteristics of the HF (high frequency) radar observation data. The developed model was implemented, validated and compared with the existing models using the same historical datasets collected from the HF coastal radar stations located along the Gulf of Thailand. Our experimental results indicate that the proposed model can achieve the highest accuracy among all methods, including ARIMA, exponential smoothing, and LSTM; and satisfy the oil-spill backtracking application requirements. In addition, we found that our system requires little to none maintenance and can easily be adapted to other coastal radar locations where the amount of historical HF radar observations is limited.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"95 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82207207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
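The temporal k-nearest-neighbor idea described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature encoding, the Euclidean distance, and the value of `k` are all assumptions, since the abstract does not specify them.

```python
import math

def temporal_knn_predict(history, query_features, k=3):
    """Predict the next surface-current value for a query observation by
    averaging the targets of the k nearest historical feature vectors.

    history: list of (feature_vector, next_current) pairs, where the
    features might encode hour-of-day, season, and recent currents
    (the exact features used by the paper are not given in the abstract).
    """
    ranked = sorted(history, key=lambda pair: math.dist(pair[0], query_features))
    return sum(target for _, target in ranked[:k]) / k
```

A 24-hour forecast would either apply this step recursively or fit one such predictor per lead time; the abstract does not state which of the two the paper uses.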
Pub Date : 2017-07-01 DOI: 10.1109/JCSSE.2017.8025930
K. Atchariyachanvanich, Srinual Nalintippayawong, Tanasab Permpool
This research developed the MySQL Sandbox, a secure environment for processing SQL queries. It was implemented as a RESTful web service with three services: sandbox database creation, SQL statement processing, and sandbox database resetting. It supports simultaneous processing of multiple SQL statements from multiple users in multiple databases. It uses a question identifier (ID) and student ID to create a separate database for each student, relying on MySQL's own privilege management to confine each user to their own database. Every service returns its result in JSON format, which is easy to understand. This MySQL Sandbox is the first tool to support judging DDL statements and complex DML statements. Existing SQL grading systems limit the SQL statements they support because of the risks posed by sensitive statements, such as DDL and DML statements other than SELECT. This sandbox helps eliminate the security concerns that obstruct the development and improvement of SQL grading systems, while giving students greater freedom to practice queries and improve their skills in three dimensions: database query, database administration, and database programming.
{"title":"Development of a MySQL Sandbox for processing SQL statements: Case of DML and DDL statements","authors":"K. Atchariyachanvanich, Srinual Nalintippayawong, Tanasab Permpool","doi":"10.1109/JCSSE.2017.8025930","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025930","url":null,"abstract":"This research developed the MySQL Sandbox, a secured environment for processing SQL queries. It was implemented as a RESTful web service having three services - sandbox database creation, SQL statement processing and sandbox database resetting. It supports the simultaneous processing of multiple SQL statements from multiple users in multiple databases. It uses question identification (ID) and student ID to create separate databases for each student using the MySQL feature to manage the user's privileges of their own database. Every service returns a result in the JSON format, which is easy to understand. This MySQL Sandbox is the first tool to support judging DDL statements and complex DML statements. Existing SQL grading systems have limitations on the number of supported SQL statements because they are concerned about risks from some sensitive SQL statement, such as DDL and DML statements, other than the SELECT statement. This sandbox will help eliminate the security concerns that obstruct the development and improvement of SQL grading systems, while providing a greater freedom of learning query to students, which will help them improve their own skills in three dimensions i.e., database query, database administration and database programming.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"220 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85737148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
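The per-student isolation described in the abstract above can be sketched as SQL generation. The database-naming scheme, user names, and grant scope below are illustrative assumptions, not the paper's actual implementation:

```python
def sandbox_statements(question_id, student_id, password):
    """Generate the MySQL statements that create an isolated sandbox
    database for one (question, student) pair and confine the student's
    privileges to it. The identifier scheme here is hypothetical."""
    db = f"sandbox_q{question_id}_s{student_id}"
    user = f"student_{student_id}"
    return [
        f"CREATE DATABASE IF NOT EXISTS {db};",
        f"CREATE USER IF NOT EXISTS '{user}'@'%' IDENTIFIED BY '{password}';",
        # A database-scoped grant means even DDL/DML statements submitted
        # by this student cannot touch other students' databases.
        f"GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'%';",
    ]
```

This is why the sandbox can safely judge DDL and DML beyond SELECT: the damage radius of any statement is one throwaway database, which the reset service can simply drop and recreate.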
Printed documents remain widely used today, but attacks such as forgery can make them unreliable. Prior research has studied techniques for making a document verifiable even when some of its contents are intentionally modified; however, the large amount of verification data these techniques require limits their use. This study proposes a method that makes printed documents more reliable with smaller verification data: text images are converted to strings with OCR and then hashed with a cryptographic hash function, while a compact image hash function is applied to images, lowering space consumption while maintaining high performance. Experimental results show that the proposed method outperforms recent work in terms of FAR, FRR, and their harmonic mean.
{"title":"Enhancing trustworthy of document using a combination of image hash and cryptographic hash","authors":"Pornchai Assamongkol, Suphakant Phimoltares, Sasipa Panthuwadeethorn","doi":"10.1109/JCSSE.2017.8025924","DOIUrl":"https://doi.org/10.1109/JCSSE.2017.8025924","url":null,"abstract":"Nowadays documents in form of printout are widely used. There are many attacks such as forgery that make the document unreliable. Some research studies a technique to make the document verifiable even though some contents are intentionally modified. Moreover, the problem of large amount of verification data causes limited use. This study proposes the method to make the printed document more reliable with the smaller verification data. Converting text image to string using OCR then apply cryptographic hash function, while applying compact image hash function to image can provide the lower consumption of space but maintains high performance. Experimental results show that the proposed method outperforms the recent work in terms of FAR, FRR, and harmonic mean.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"147 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86026206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
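The two hash roles in the abstract above — a cryptographic hash over OCR-extracted text and a compact perceptual hash over images — can be sketched as below. SHA-256 and the average-hash scheme are stand-ins chosen for illustration; the abstract does not name the actual functions used:

```python
import hashlib

def text_hash(ocr_text):
    # Cryptographic hash of the OCR-recovered string: any change to the
    # recognized text changes the digest completely.
    return hashlib.sha256(ocr_text.encode("utf-8")).hexdigest()

def compact_image_hash(pixels):
    # Perceptual "average hash" of a grayscale pixel matrix: one bit per
    # pixel, set when the pixel is brighter than the image mean. It is
    # small and tolerant of minor print/scan noise, unlike a
    # cryptographic digest.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)
```

The design choice the abstract describes is to use each hash where it is strong: exact cryptographic hashes for text (where OCR recovers a canonical string) and compact perceptual hashes for images (where pixel-exact reproduction after printing is impossible).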
Based on an analysis of color histograms for image retrieval, a new descriptor, bit-plane distribution entropy (BPDE), is proposed in this paper. The image is first divided into eight bit-planes, and the Gray code of the bit-planes is introduced to reduce the effect of intensity changes on the planes. Then, an entropy vector is constructed by computing the entropy of the first four significant planes, which contain most of the structural information of the image. Finally, with a correlation-weighted matrix, the Mahalanobis distance is adopted to measure similarity, accounting for the correlation between the vector components. Comparisons are conducted between BPDE and other descriptors. Experimental results show that the proposed method provides significantly better retrieval results than the traditional ones.
{"title":"Image Retrieval Based on Bit-Plane Distribution Entropy","authors":"Z. Shan, Wang Hai-tao","doi":"10.1109/CSSE.2008.270","DOIUrl":"https://doi.org/10.1109/CSSE.2008.270","url":null,"abstract":"Based on the analysis of color histogram for image retrieval, a new descriptor, bit-plane distribution entropy (BPDE), is proposed in this paper. The image is firstly divided into eight bit-planes and the Gray-code of bit-planes is introduced to avoid the effect of changes in the intensity values on bit-planes. Then, an entropy vector is constructed by computing the entropy of the first four significant planes which contain most of the structural information of the image. Finally, with designing of the correlation-weighted matrix, the Mahalanobis distance is adopted to measure the similarity because of the correlation between the concerned vectors. Comparisons are conducted between BPDE and other descriptors. Experimental results show that the proposed method provides more significantly retrieval results than the traditional ones.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"187 1","pages":"532-535"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79758512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
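The descriptor construction in the abstract above — Gray-coded bit-planes, then the entropies of the four most significant planes — can be sketched as follows; a flat pixel list stands in for a 2-D image, and the Mahalanobis comparison step is omitted:

```python
import math

def gray_bit_planes(pixels, n_planes=4):
    # Gray-code each 8-bit pixel so that small intensity changes flip
    # few bits, then slice out the n_planes most significant planes
    # (the abstract uses the first four).
    gray = [p ^ (p >> 1) for p in pixels]
    return [[(g >> bit) & 1 for g in gray]
            for bit in range(7, 7 - n_planes, -1)]

def binary_entropy(bits):
    # Shannon entropy of one binary bit-plane.
    p1 = sum(bits) / len(bits)
    return -sum(p * math.log2(p) for p in (p1, 1 - p1) if p > 0)

def bpde(pixels):
    # The BPDE descriptor: an entropy vector over the significant planes.
    return [binary_entropy(plane) for plane in gray_bit_planes(pixels)]
```

Two descriptors would then be compared with a Mahalanobis distance under a correlation-weighted matrix, as the abstract states, since the plane entropies are not independent.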
A large number of Web pages returned by filling in search forms are not indexed by most search engines today; the set of such pages is referred to as the deep Web. Since results returned by Web databases seldom carry proper annotations, meaningful labels must be assigned to them. This paper presents a framework for automatic annotation that uses multiple annotators to label results from different aspects. In particular, the search engine-based annotator extends question-answering techniques common in the AI community, constructing validation queries and posing them to a search engine. It finds the most appropriate terms to annotate the data units by calculating the similarities between terms and instances. The information needed for annotation can be acquired automatically without the support of a domain ontology. Experiments over four real-world domains indicate that the proposed approach is highly effective.
{"title":"Multi-source Automatic Annotation for Deep Web","authors":"Cui Xiao-jun, Peng Zhiyong, Wang Hui","doi":"10.1109/CSSE.2008.439","DOIUrl":"https://doi.org/10.1109/CSSE.2008.439","url":null,"abstract":"A large number of Web pages returned by filling in search forms are not indexed by most search engines today. The set of such Web pages is referred to as the deep Web. Since results returned by Web databases seldom have proper annotations, it is necessary to assign meaningful labels to the results. This paper presents a framework of automatic annotation which uses multi-annotator to annotate results from different aspects. Especially, search engine-based annotator extends question-answering techniques commonly used in the AI community, constructing validate queries and posing to the search engine. It finds the most appropriate terms to annotate the data units by calculate the similarities between terms and instances. Information for annotating can be acquired automatically without the support of domain ontology. Experiments over four real world domains indicate that the proposed approach is highly effective.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"44 1","pages":"659-662"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79332902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
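The term-selection step of the search engine-based annotator ("the most appropriate terms ... by calculating the similarities between terms and instances") might look like this sketch. Jaccard overlap is a hypothetical similarity measure; the paper's actual measure is not given in the abstract:

```python
def choose_label(candidate_terms, instance_tokens, term_contexts):
    """Pick the candidate term whose retrieved context (e.g. snippets a
    search engine returns for a validation query) best overlaps the
    data unit's tokens. All names here are illustrative."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(candidate_terms,
               key=lambda term: jaccard(term_contexts.get(term, ()),
                                        instance_tokens))
```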
The blog, a recent arrival on the Internet, is a fourth-generation form of online intercommunication distinct from email, BBS, and ICQ. This paper introduces the basic characteristics of blogs and analyzes three of them in depth: shared thinking; non-linearity and concentricity; and criticalness and the collision of diverse viewpoints. These cultural characteristics make the blog a potential educational tool for students as well as teachers. For students' study, blogs can filter information, provide rich learning situations, raise students' media literacy, encourage dissimilar standpoints among participants, support evaluation of information, and encourage students' participation and cooperation. For teachers' teaching, blogs can help teachers develop professionally and establish teachers' learning organizations.
{"title":"The Application of Blog in Modern Education","authors":"Shaohui Wang, Ma Lihua","doi":"10.1109/CSSE.2008.1443","DOIUrl":"https://doi.org/10.1109/CSSE.2008.1443","url":null,"abstract":"Blog, as a new emerging thing in Internet in recent years, is the fourth generation intercommunication form in Internet that differs from the Email, BBS and ICQ. This paper introduces the basic characteristics of blog, and gives emphatic analysis to three of them: thought share, non-linearity and concentricity, criticalness and multivariate collision. These cultural characteristics determine that blog will be a potential education tool that can be applied to students as well as teachers. For students' study, blog can be used for percolating information, providing abundant situations for study, raising students' media cultural levels, encouraging the dissimilar standpoints of the participants, providing the evaluation to the information and encouraging the students' participation and cooperation. For the teachers' teaching, blog can help the teachers develop professionally and establish teachers' learning organization.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"400 1","pages":"1083-1085"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76999244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel feature extraction method based on parallel coordinate plots is presented. Observation of parallel coordinate plots shows that using the distance from one point to the others along a single dimension to measure a variable's classification performance expresses that performance more objectively. The Euclidean distance (module) matrix and the relative distance matrix are given, and the ratio of each sample point's distance to other classes versus the distance within its own class carries more classification information. Better performance was achieved in experiments on data with poor statistical properties.
{"title":"Study on Feature Extraction Method Based on Parallel Coordinate Plots","authors":"Cui Jianxin, H. Wen-xue, Gao Haibo","doi":"10.1109/CSSE.2008.1100","DOIUrl":"https://doi.org/10.1109/CSSE.2008.1100","url":null,"abstract":"A novel feature extraction method based on parallel coordinate plots was presented. Observing the parallel coordinate plots, discovered that using the distance of one point to others on one dimensionality to measurement the classify performance of the variable, can express the fact classify performance more impersonally. The Euclidean distance or module matrix and the relative distance matrix were given. And the distance ratio of every sample point to other sorts and it to its own sort has more classify information. We achieved better performance when experiment on data which has poor statistical performance.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"73 1","pages":"949-952"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88231436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
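The key quantity in the abstract above — a sample's distance to other classes compared with the distance within its own class, along one parallel-coordinate dimension — can be sketched as a per-dimension separability score. The exact formula is not given in the abstract, so this averaged pairwise version is an assumption:

```python
def separability_ratio(values, labels, target):
    """Mean between-class distance divided by mean within-class distance
    for class `target` along a single parallel-coordinate dimension.
    Larger values suggest the dimension separates `target` better, so
    dimensions can be ranked by this score for feature extraction."""
    own = [v for v, lab in zip(values, labels) if lab == target]
    other = [v for v, lab in zip(values, labels) if lab != target]
    within = sum(abs(a - b) for i, a in enumerate(own) for b in own[i + 1:])
    within /= max(len(own) * (len(own) - 1) / 2, 1)
    between = sum(abs(a - b) for a in own for b in other)
    between /= len(own) * len(other)
    return between / within if within else float("inf")
```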
Xia Yongquan, Zhi Jun, Huang Min, Liu Weili, Ma Rui
In an ALV (autonomous land vehicle) system based on stereo vision, obstacle detection is one of the most important problems. In this paper, an algorithm based on moment invariants is proposed to detect obstacles in an ALV system. First, a simple method binarizes the images using a defined binarization function; second, the binarized images are segmented with an outer contour tracing algorithm, and regions in the left and right images are detected respectively; last, regions are matched between the left and right images, and successfully matched regions are likely obstacles. Stereo pairs captured from the ALV system are used to test the proposed algorithm; the results indicate that the approach is valid and feasible.
{"title":"A Stereo Matching Approach to Detect Obstacle in ALV System","authors":"Xia Yongquan, Zhi Jun, Huang Min, Liu Weili, Ma Rui","doi":"10.1109/CSSE.2008.1071","DOIUrl":"https://doi.org/10.1109/CSSE.2008.1071","url":null,"abstract":"In ALV system based on stereo vision, the obstacle detection is one of the most important problems. In this paper, an algorithm is proposed based on moment invariant to detect obstacle in ALV system. Firstly, a simple method is applied to binarize the images by the defined binarization function; Secondly, the binarized images are segmented using outer contour tracing algorithm and regions of left and right images are detected respectively; Lastly, the regions are matched between left and right image, the successful matched regions are the likely obstacle. Stereo pairs captured from ALV system are used to test the proposed algorithm, the result indicate that the approach is valid and feasible.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"38 1","pages":"1103-1106"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88648810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
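The region-matching step of the abstract above can be illustrated with a single moment invariant. The abstract does not say which invariants the authors use, so the scale-normalized second central moment below is a stand-in:

```python
def moment_signature(region):
    """Scale-normalized second central moment (mu20 + mu02) / n^2 of a
    binary region given as (x, y) pixel coordinates -- a simple
    translation- and scale-tolerant shape signature."""
    n = len(region)
    cx = sum(x for x, _ in region) / n
    cy = sum(y for _, y in region) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in region) / n ** 2

def match_regions(left_regions, right_regions):
    # Pair each left-image region with the right-image region whose
    # moment signature is closest; matched pairs are candidate obstacles.
    return [(i, min(range(len(right_regions)),
                    key=lambda j: abs(moment_signature(left_regions[i]) -
                                      moment_signature(right_regions[j]))))
            for i in range(len(left_regions))]
```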
The instantaneous speed of a diesel engine was measured, and the mechanism of fault diagnosis from the speed signal was analyzed. The speed signal was processed with spectrum analysis and complexity analysis, and the variation of speed under a cylinder-misfire condition was examined. The [K, C] complexity of the speed signal was calculated to obtain misfire fault features, and a BP neural network was set up to diagnose cylinder-misfire faults of the diesel engine.
{"title":"A Method for Diagnosing the Cylinder Fault of Engine Based on Artificial Neural Network","authors":"L. Jianmin, Qiao Xinyong","doi":"10.1109/CSSE.2008.1409","DOIUrl":"https://doi.org/10.1109/CSSE.2008.1409","url":null,"abstract":"Measured the instantaneous speed of diesel engine and analyzed the mechanism of fault diagnosis with speed signal. Processed the speed signal with method of spectrum analysis and complexity analysis and analyzed the change law of speed in condition of mis-cylinder. Calculated [K,C] complexity of speed signal and got the features of mis-cylinder fault. Set up a BP neural network to diagnose mis-cylinder fault of diesel engine.","PeriodicalId":6460,"journal":{"name":"2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"44 1","pages":"102-105"},"PeriodicalIF":0.0,"publicationDate":"2008-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91193563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
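The "[K, C] complexity" measure in the abstract above is not defined there. As one concrete example of a signal-complexity feature of the kind used for misfire detection, a Lempel-Ziv complexity of the binarized speed signal can be sketched; whether this matches the paper's measure is an assumption:

```python
def binarize(signal):
    # Threshold the instantaneous-speed samples at their mean,
    # producing a binary string for the complexity parse.
    mean = sum(signal) / len(signal)
    return "".join("1" if s > mean else "0" for s in signal)

def lz_complexity(bits):
    # Lempel-Ziv complexity: the number of distinct phrases found in a
    # left-to-right parse of the string. A regular speed signal gives a
    # low count; an irregular (misfiring) one gives a higher count.
    i, count, n = 0, 0, len(bits)
    while i < n:
        length = 1
        # grow the phrase while it already occurs in the preceding text
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count
```

Such complexity values, together with spectral features, would form the input vector of the BP neural network the abstract describes.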