Information technology (IT) plays a key role in the functioning of the modern era. Computers evolve rapidly, with continual improvements in processing efficiency, memory capacity, and network transmission rates. Procurement departments must acquire new resources, but they often do not know which equipment suits a given budget because they cannot constantly track changes and developments in computing. We propose a system that helps define computer specifications appropriate to the intended usage and commensurate with the budget. This paper explains the structure of the system, which consists of three parts: frontend, selector, and reporter. The experimental results present the implementation of the system, consisting of a main page, registration, authentication, a standard mode, an advanced mode, and selection of computer-usage types; the output is a list of components.
{"title":"Feature preparing for the procurement of computer equipment: A case study of RMUTT","authors":"Khongthep Boonmee, Waraphan Sarasureeporn, Witchawoan Mankhong","doi":"10.1109/ICTKE.2014.7001537","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001537","url":null,"abstract":"The Information technology (IT) plays a key role in the functioning of the modern era. The evolution of computer is very fast. It improves the efficiency of processing, the capacity of memory and the transmission rate of network. The procurement department will procure the new resources. They did not know that equipment suit any budget because they may not track changes or the development of a computer constantly. We propose a system that helps to define specifications of computer for appropriated usage and commensurate with the budget. This paper explains the structure of system consists of three parts: frontend, selector and reporter. The experimental results present the implement of system consist of main page, registration, authentication, standard mode, advanced mode, selecting the types of computer usage, and the output is a list of component.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129170211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001525
Lavanya Sharma
In this paper, denoising of multilead electrocardiograms (ECG) using multiscale singular value decomposition is proposed. When the signal from each ECG lead is wavelet transformed with the same mother wavelet and the same number of decomposition levels, multivariate multiscale matrices can be formed at each wavelet scale. Singular value decomposition is then applied at these scales. A new method for selecting singular values at these scales, based on a weighted ratio of matrix norms, is proposed. This optimizes the approximate ranks of the multiscale multivariate matrices so as to capture the diagnostic components present at different scales. Testing with records from the PTB diagnostic ECG database for various pathological cases gives better SNR improvement while retaining the pathological signatures. After adding white Gaussian noise at different SNR levels, quantitative analysis is carried out by evaluating error measures such as the percentage root mean square difference (PRD), normalized root mean square error (NRMSE), and the wavelet-energy-based diagnostic distortion measure (WEDD).
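The multiscale SVD idea above can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's implementation: it uses a hand-rolled Haar wavelet in place of an arbitrary mother wavelet, and a fixed truncation rank in place of the paper's weighted matrix-norm rank-selection rule; all function names are our own.

```python
import numpy as np

def haar_dec(x, level):
    """Multi-level 1-D Haar wavelet decomposition (length must be divisible by 2**level)."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(level):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs[::-1]  # [approx, detail_level, ..., detail_1]

def haar_rec(coeffs):
    """Inverse of haar_dec."""
    a = coeffs[0]
    for d in coeffs[1:]:
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a = x
    return a

def multiscale_svd_denoise(leads, level=3, keep=1):
    """Denoise a (leads x samples) array by truncated SVD at each wavelet scale."""
    per_lead = [haar_dec(lead, level) for lead in leads]
    cleaned = []
    for scale in range(level + 1):
        # Stack the coefficients of all leads at this scale into one matrix.
        mat = np.vstack([c[scale] for c in per_lead])
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        s[keep:] = 0.0  # truncate to an approximate rank
        cleaned.append((u * s) @ vt)
    # Reassemble each lead from its truncated coefficients.
    return np.array([haar_rec([cleaned[sc][i] for sc in range(level + 1)])
                     for i in range(len(leads))])
```

With leads that share a common waveform plus independent noise, the rank-1 truncation at each scale retains the shared diagnostic component while suppressing much of the noise.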
{"title":"Denoising pathological multilead electrocardiogram signals using multiscale singular value decomposition","authors":"Lavanya Sharma","doi":"10.1109/ICTKE.2014.7001525","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001525","url":null,"abstract":"In this paper, denoising of multilead electrocardiograms (ECG) using multiscale singular value decomposition is proposed. If signal of each ECG leads are wavelet transformed with same mother wavelet and decomposition levels, it helps formation of multivariate multiscale matrices at wavelet scales. Singular value decomposition is applies in these scales. A new method to select singular values at these scales is proposed which is based on weighted ratio of matrix norms. This optimizes the approximate ranks for multiscale multivariate matrices to capture the diagnostic components present at different scales. Testing with records from PTB diagnostic ECG database for various pathological cases gives better SNR improvement retaining the pathological signatures. After adding white Gaussian noise at different SNR levels, quantitative analysis is carried out by evaluating error measures like percentage root mean square difference (PRD), root mean square error (NRMSE) and wavelet energy based diagnostic distortion measure (WEDD).","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124148100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001528
T. Funayama, Yoshiro Yamamoto, Makoto Tomita, O. Uchida, Y. Kajita
We can easily obtain Tweets from Twitter as well as open data about population and similar topics. We are therefore constructing a system that extracts disaster information and analyzes Tweets from Twitter by using APIs (Application Programming Interfaces), web services that convert geographical information, and R. When an earthquake, flood, or other disaster occurs, users want to know where it is happening and to obtain information such as seismic intensity, weather, and traffic conditions. Accordingly, this study retrieves such information through APIs, and a system that collects and displays this information has been constructed.
{"title":"Disaster mitigation support system using Twitter and GIS","authors":"T. Funayama, Yoshiro Yamamoto, Makoto Tomita, O. Uchida, Y. Kajita","doi":"10.1109/ICTKE.2014.7001528","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001528","url":null,"abstract":"We can easily get the Tweets from the Twitter and open data about population etc. Therefore we are constructing a system to extract disaster information and analyze Tweet in Twitter by using API (Application Programming Interface), Web service changing geographical information and R etc. If we want to know where is occurring an earthquake, a flood and etc, we could want to get those information which are seismic intensity, weather and traffic conditions. According this study was decided to get those information by using API. And a system which get and show those information are constructed.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128160147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001526
Sanetoshi Yamada, Yoshiro Yamamoto
Customer data usually provides only surface-level personal information such as sex, age, and hometown. However, we can obtain internal personal information by analyzing questionnaire data. In this paper, we propose a visualization that combines association analysis with correspondence analysis; it can reveal differences in the internal characteristics of six demographic layers.
{"title":"The visualization of relationship of age and gender to multiple choice questions","authors":"Sanetoshi Yamada, Yoshiro Yamamoto","doi":"10.1109/ICTKE.2014.7001526","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001526","url":null,"abstract":"The information that customer data usually provides is the personal surface information including the sex, age and hometown. But, we can obtain personal internal information by analyzing questionnaire data. This paper, we propose the visualization that combined association analysis with correspondence analysis, it can find about difference of internal characteristic of six layers.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131959791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001533
Keita Yagi, T. Funayama, Yoshiro Yamamoto
There are many statistics for measuring a batter's performance in professional baseball, including batting average and RBI. In this paper, we introduce some modifications of OPS to represent the characteristics and changes in a player's condition, and we develop visualization methods and a Web-based system for these visualizations.
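For reference, the unmodified OPS statistic that the paper takes as its starting point is the sum of on-base percentage (OBP) and slugging percentage (SLG). A minimal sketch (argument names are illustrative):

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times on base over plate appearances counted."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

def ops(h, bb, hbp, ab, sf, singles, doubles, triples, hr):
    """OPS = OBP + SLG, the statistic the paper modifies."""
    return obp(h, bb, hbp, ab, sf) + slg(singles, doubles, triples, hr, ab)
```

A season line of 150 hits (100 singles, 30 doubles, 5 triples, 15 home runs), 50 walks, 5 hit-by-pitches, and 5 sacrifice flies over 500 at-bats gives an OPS of roughly 0.836.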
{"title":"The digitizing of the characteristic and visualization of the wave of the condition of batter of the professional baseball","authors":"Keita Yagi, T. Funayama, Yoshiro Yamamoto","doi":"10.1109/ICTKE.2014.7001533","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001533","url":null,"abstract":"There are many statistics to measure a batter performance including a batting average and the RBI of the professional baseball. In this paper, we introduce some modification of OPS to represent the characteristic and change of the condition of the player and develop visualization methods and the Web based system of these visualization.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128449745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001527
Yohji Kameoka, Yoshiro Yamamoto
Detailed soccer data are now available, and we aim to analyze the patterns that lead to a score in soccer games. We focus on the series of actions leading to a shot and visualize these actions in order to analyze the pattern for each team. From the visualizations, we analyzed the actions leading to a shot for each team.
{"title":"Visualization of process of shoot and goal in soccer games","authors":"Yohji Kameoka, Yoshiro Yamamoto","doi":"10.1109/ICTKE.2014.7001527","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001527","url":null,"abstract":"We can use the detail data of soccer and we are intended to analyze the pattern to lead to a score in soccer games. We note a series of actions to lead to a shoot and visualized the actions for analyze a pattern for each team. We analyzed the actions to lead to a shoot for each team from visualized the actions.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117013105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001540
P. Porouhan, N. Jongsawat, W. Premchaiswadi
In this paper, we applied two process mining techniques (from the discovery class) in order to extract knowledge from event logs recorded by an online information system. The event log was created from information received from an online proceedings review system in Thailand. The Alpha and Heuristic miner algorithms were used to automatically visualize the models as Petri nets and animated simulations. The paper ultimately aims to improve the handling of online reviews by providing techniques and tools for discovering process, control, data, organizational, and social structures from the created event log.
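As an illustration of the discovery step, the ordering relations ("footprint") that the Alpha algorithm derives from an event log can be computed directly from the traces. The sketch below covers only this relation-extraction stage (the function name and log format are our own); building the actual Petri net from the footprint is omitted.

```python
def footprint(log):
    """Derive the Alpha-algorithm footprint relations from an event log.

    `log` is a list of traces (tuples of activity names). Returns a dict
    mapping ordered activity pairs to one of:
      '->' causality, '<-' reverse causality, '||' parallel, '#' no relation.
    """
    # Directly-follows pairs observed anywhere in the log.
    follows = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            follows.add((a, b))
    acts = sorted({a for trace in log for a in trace})
    rel = {}
    for a in acts:
        for b in acts:
            ab, ba = (a, b) in follows, (b, a) in follows
            if ab and not ba:
                rel[(a, b)] = "->"
            elif ba and not ab:
                rel[(a, b)] = "<-"
            elif ab and ba:
                rel[(a, b)] = "||"
            else:
                rel[(a, b)] = "#"
    return rel
```

For the log `[("a","b","c","d"), ("a","c","b","d")]`, for example, `a` causally precedes `b`, while `b` and `c` are parallel because each directly follows the other in some trace.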
{"title":"Process and deviation exploration through Alpha-algorithm and Heuristic miner techniques","authors":"P. Porouhan, N. Jongsawat, W. Premchaiswadi","doi":"10.1109/ICTKE.2014.7001540","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001540","url":null,"abstract":"In this paper, we applied two methods of process mining techniques (from Discovery class/approach) in order to extract knowledge from event logs recorded by an online information system. The event log was created via information received from an online proceedings review system in Thailand. Accordingly, Alpha and Heuristic algorithms were used with the objective of automatically visualizing the models in terms of Petri nets and animated simulations. The paper eventually aimed at improving the handling of online reviews by providing techniques and tools for discovering process, control, data, organizational, and social structures from the created event log.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122008474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001529
M. Kumngern, Ittipol Kansiri
This paper presents a new third-order quadrature oscillator using operational transresistance amplifiers (OTRAs) as active elements. The proposed circuit provides high precision in the frequency of oscillation. The frequency of oscillation can be controlled with a single passive component, and the condition of oscillation can be controlled orthogonally by setting the circuit components. In addition, the two quadrature voltage output terminals have low impedance levels, so they can be connected directly to a load. Simulation results verifying the theoretical analysis are also included.
{"title":"Single-element control third-order quadrature oscillator using OTRAs","authors":"M. Kumngern, Ittipol Kansiri","doi":"10.1109/ICTKE.2014.7001529","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001529","url":null,"abstract":"This paper presents a new third-order quadrature oscillator using operational transresistance amplifiers as active elements. The proposed circuit provides high precision of the frequency of oscillation. The frequency of oscillation can be controlled using a single passive component and the condition of oscillation can be controlled orthogonally by setting the circuit components. Also two quadrature voltage output terminals possess low impedance level which can be directly connected to the load. Simulation results verifying the theoretical analysis are also included.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131514368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001541
Anucha Tungkastan, N. Jongsawat, W. Premchaiswadi
An automated license plate recognition system alone is not enough to detect suspicious passing vehicles. In this paper, we present a framework for automated real-time detection of suspicious vehicles. It consists of two sub-systems: an automated license plate recognition system and a vehicle model recognition system. The automated license plate recognition system consists of license plate detection, character segmentation, and character recognition. The vehicle model recognition system consists of vehicle segmentation, license plate localization, taillight detection, vehicle class identification, and model recognition. Finally, the results obtained from the two systems are compared in real time with data obtained from the Department of Land Transport. The proposed system is able to monitor suspicious vehicles and generate reports of suspicious transactions for the Department of Land Transport and third parties.
{"title":"A proposed framework for automated real-time detection of suspicious vehicles","authors":"Anucha Tungkastan, N. Jongsawat, W. Premchaiswadi","doi":"10.1109/ICTKE.2014.7001541","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001541","url":null,"abstract":"Automated license plate recognition system only is not enough to detect suspicious passing vehicles. In this paper, we present a proposed framework for automated real-time detection of suspicious vehicles. It consists of two sub-systems, an automated license plate recognition system and a vehicle model recognition system. An automated license plate recognition system consists of license plate detection, character segmentation, and character recognition. A vehicle model recognition system consists of vehicle segmentation, license plate localization, taillight detection, vehicle class identification, model recognition. Finally, the results obtained from two systems are compared with the data obtained from the department of land transport in real-time. The proposed system is able to monitor suspicious vehicles and generates the reports or suspicious transactions to the department of land transport and third party.","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123545995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/ICTKE.2014.7001538
J. Watthananon
Nowadays, the massive amount of data and information (recently termed “Big Data”) causes accessibility and retrieval problems if poorly managed. This is because its relational structure is too complicated to explain or analyze with simple or traditional methods. Displaying these data and information uniformly is also difficult because of their diverse formats. Bag of Words (BOW), the most commonly used data sorting method, is simple but overlooks the significance of synonymy. The objective of this research is to propose a method for organizing massively scattered data (in the form of electronic documents). The linking of related data is supported by the Dewey Decimal Classification (DDC) technique, which was employed to process, analyze, and display the data in the form of a Mind Map. An accuracy test was performed on data from the “Wikipedia Selection for Schools”, a subset of Wikipedia, to compare the efficiency of four models: DDC (Dewey Decimal Classification), SVM (Support Vector Machine), K-Means clustering, and hierarchical clustering. The results indicated that DDC yielded the highest accuracy (75.02%), followed by the hierarchical model (74.66%), while K-Means and SVM yielded similar accuracy (72.66%). In terms of processing time, K-Means clustering was the fastest of the models (16.09 seconds).
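The Bag-of-Words baseline criticized above, and its blindness to synonymy, can be shown in a short sketch (function names and the tiny labeled corpus are our own): two texts that share no exact tokens score zero similarity no matter how close their meanings are.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term-frequency vector (as a Counter)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(doc, labeled):
    """Assign `doc` to the label of its most similar reference text."""
    vec = bow(doc)
    return max(labeled, key=lambda label: cosine(vec, bow(labeled[label])))
```

Under this scheme "car" and "automobile" have similarity 0.0, which is exactly the synonymy problem that motivates richer schemes such as the DDC-based linking proposed in the paper.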
{"title":"The relationship of text categorization using Dewey Decimal Classification techniques","authors":"J. Watthananon","doi":"10.1109/ICTKE.2014.7001538","DOIUrl":"https://doi.org/10.1109/ICTKE.2014.7001538","url":null,"abstract":"Now a day, the massive amount of data and information (recently termed as “Big Data”) causes accessibility and retrieval problems if poorly managed. This is due to their relational structure which is more complicate, unexplainable, and unanalyzable with simple or traditional methods. The uniform display of these data and information is also difficult due to their diversified formats. Bag of Words (BOW), the mostly used data sorting method, is although simple but the significance of synonymity is overlooked. The objective of this research study is to propose method in determining massively scattered data (as electronic documents). The linking of related data is also supported by the application of Dewey Decimal Classification (DDC) technique. DDC was employed in data processing, analyzing, and displaying with appropriate method in form of Mind Map. The accuracy test was performed on the data from the “Wikipedia Selection for schools”, a sub version of Wikipedia, to determine the efficiency among four models: DDC: Dewey decimal classification, SVM: Support Vector Machine, K-Mean Clustering and Hierarchical Clustering. The results indicated that DDC yielded the most accuracy (75.02%), followed by the Hierarchical models (74.66%), while both K-Mean and SVM yielded the similar accuracy (72.66%). And the time in process is K-Mean Clustering was best time more than other models (16.09 second).","PeriodicalId":120743,"journal":{"name":"2014 Twelfth International Conference on ICT and Knowledge Engineering","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124109524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}