Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8710936
Siriguleng Wang, Wuyuntana
In view of the rich morphology of the Mongolian language and the limited vocabulary of neural machine translation, this paper first segments Mongolian words at different granularities: separating morphological suffixes and segmenting ligature morphological suffixes. For Chinese, we use both word segmentation and word division. We then study morpheme-based Mongolian-Chinese end-to-end neural machine translation under the framework of a bidirectional encoder and an attention-based decoder. The experimental results show that segmenting Mongolian words effectively alleviates the data sparsity of Mongolian, and that the morpheme-based Mongolian-Chinese neural machine translation model can improve translation quality. The best NIST and BLEU scores of the morpheme-based Mongolian-Chinese neural machine translation system reached 9.4216 and 0.6320, respectively.
{"title":"The Research on Morpheme-Based Mongolian-Chinese Neural Machine Translation","authors":"Siriguleng Wang, Wuyuntana","doi":"10.1109/INFOCT.2019.8710936","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8710936","url":null,"abstract":"In view of the rich morphology of Mongolian language and the limited vocabulary of neural machine translation, this paper firstly segmenting Mongolian words from different granularity, which are the segmentation of separates morphological suffixes and the segmentation of Ligatures morphological suffixes. For Chinese, we use word segmentation and word division. Then, we studied the morpheme-based Mongolian-Chinese end-to-end neural machine translation under the framework of bidirectional encoder and attention-based decoder. The experimental results show that the segmentation of Mongolian word effectively solves the data sparsity of Mongolian, and the morpheme-based Mongolian-Chinese neural machine translation model can improve the quality of machine translation. The best NIST and BLEU values of the morpheme-based Mongolian-Chinese Neural Machine Translation results were respectively reached 9.4216 and 0.6320.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123894521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8711412
Fei Du, Yongzheng Zhang, Xiuguo Bao, Boyuan Liu
Classifying IP address roles based on network traffic behavior is valuable for network security analysis. Many previous studies have focused on coarse-grained classification (e.g., servers, clients, and P2P), which does not meet the increasingly diverse needs of applications. In this paper, we propose a novel approach, called FENet, for learning continuous feature representations of connection patterns, focusing on a low-dimensional embedding of IP address connection features. On top of this embedding, we train two-tier neural networks that classify IP address roles in a given network dataset. Our approach achieves finer-grained representation and classification of IP address roles. Experimental results demonstrate the effectiveness of FENet over existing state-of-the-art techniques. On several real-world networks of active IP addresses, we achieve very high classification accuracy and stability.
{"title":"FENet: Roles Classification of IP Addresses Using Connection Patterns","authors":"Fei Du, Yongzheng Zhang, Xiuguo Bao, Boyuan Liu","doi":"10.1109/INFOCT.2019.8711412","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8711412","url":null,"abstract":"It is valuable to classify IP address roles based on network traffic behavior for network security analysis. Many previous studies have focused on coarse-grained classification (e.g., servers, clients and P2P, and so on.), these do not meet the increasingly diverse needs of applications. In this paper, we propose a novel approach for learning the continuous feature representation of connection patterns that we call FENet, which focuses on the low-dimensional embedding of IP address connection features. Thus, we trained two-tier neural networks that classified IP address roles in the given network dataset. Our approach can achieve more fine granularity representation and classification of IP address roles. Experimental results demonstrate the effectiveness of FENet over existing state-of-the-art techniques. In several real-world networks from active IP addresses, we have achieved very high classification accuracy and stability.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114978983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8710919
Danissa V. Rodriguez, D. Carver
Requirements traceability supports many software engineering activities, such as change impact analysis and requirements validation, and benefits the overall quality of software systems. Factors such as lack of communication, time pressure, and poorly implemented traceability practices result in developers losing track of requirements. Requirements traceability is a primary means of addressing the completeness and accuracy of requirements, and it is an active research topic for software engineers. Textual analysis and information retrieval (IR) techniques have been applied to the requirements traceability recovery problem for many years because both requirements and source code contain textual components. IR techniques are semiautomatic techniques for recovering traceability links, and on occasion they have become the baseline for automatic methods applied to requirements traceability recovery. We evaluate the performance of IR techniques applied to the requirements traceability recovery process. The most popular IR techniques applied to this problem are the probabilistic IR, Vector Space Model (VSM), and Latent Semantic Indexing (LSI) approaches. All three approaches rank documents by using one set of documents to extract queries and the other set as the documents searched with those queries; however, they apply different internal logic for establishing similarity. We compared the probabilistic IR, VSM, and LSI approaches to evaluate their performance for requirements traceability recovery using the metrics of precision and recall. Experimental results indicate low precision and recall for the LSI technique, and high precision but low recall for both the probabilistic IR and VSM techniques.
{"title":"Comparison of Information Retrieval Techniques for Traceability Link Recovery","authors":"Danissa V. Rodriguez, D. Carver","doi":"10.1109/INFOCT.2019.8710919","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8710919","url":null,"abstract":"Requirements traceability supports many software engineering activities such as change impact analysis and requirements validation, providing benefits to the overall quality of software systems. Factors such as lack of communication, time pressure problems, and unsuccessfully implemented traceability practices result in developers losing track of requirements. Requirements traceability is a primary means to address completeness and accuracy of requirements. It is an active research topic for software engineers. Textual analysis and information retrieval techniques have been applied to the requirements traceability recovery problem for many years, due to the textual components of requirements and source code. Information retrieval techniques are semiautomatic techniques for recovering traceability links and on occasion, they have become the baseline for automatic methods applied to requirements traceability recovery. We evaluate the performance of IR techniques applied to the requirement traceability recovery process. The most popular information retrieval techniques applied to the requirements traceability recovery problem are the IR Probabilistic, Vector Space Model, and Latent Semantic Index approach. All three approaches rank documents by using one of the documents for extracting queries and the other as the documents being search using those extracted queries; however, they apply different internal logics for establishing similarities. We compared IR Probabilistic, Vector Space Model, and Latent Semantic Index approaches to evaluate their performance for requirement traceability recovery using the metrics of precision and recall. Experimental results indicate a low precision and recall for the LSI technique and high precision and low recall for both the IR probabilistic and the VSM techniques.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129112911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/infoct.2019.8711401
{"title":"ICICT 2019 Title Page","authors":"","doi":"10.1109/infoct.2019.8711401","DOIUrl":"https://doi.org/10.1109/infoct.2019.8711401","url":null,"abstract":"","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125343148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8710930
M. Masinde, M. Mkhonto
Local government institutions are the primary site for the majority of interactions between government and citizens. Despite this, most e-government implementation strategies tend to be national programmes that fail to consider the unique contexts of local government. South Africa’s local government takes the form and shape of the national socio-political system, which is characterized by alarming levels of inequality. This has resulted in a sharp apartheid digital divide that local e-government implementers cannot afford to ignore. Using data from three municipalities, one drawn from each of the three categories of South Africa’s local government institutions, this paper presents critical success factors for guiding e-government implementation initiatives at the local government level. Results from principal component analysis and the arithmetic mean of data from 243 respondents were used to determine the significance of factors relating to priority e-services, e-skills, and e-infrastructure.
{"title":"The Critical Success Factors for e-Government Implementation in South Africa’s Local government: Factoring in Apartheid Digital Divide","authors":"M. Masinde, M. Mkhonto","doi":"10.1109/INFOCT.2019.8710930","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8710930","url":null,"abstract":"Local government institutions present the ideal place for majority of the interactions between the government and the citizens. Despite this, most e-government implementation strategies tend to be national outfits that fail to consider the unique contexts of the local government. South Africa’s local government takes the form and shape of the national socio-political system that is characterized by alarming levels of inequalities. This has resulted in sharp apartheid digital divide for which local e-government implementers cannot afford to ignore. Using data from three municipalities, drawn from each of the three categories of South Africa’s local government institutions, this paper presents the critical success factors for guiding e-government implementation initiatives at local government level. Results from principal component analysis and arithmetic mean of data from 243 respondents was used to determine the significance of the factors relating to the priority e-services, e-skills and e-infrastructures.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8711390
Boyang Ti, Yongsheng Gao, Qiang Li, Jie Zhao
Motion generalization is an effective way for a robot learner to learn from demonstration, especially when it is placed in a novel situation. However, generating humanoid, natural behaviour from learned skills remains the key challenge in robot skill learning. In this paper, we propose a method that uses the statistical techniques of Gaussian mixture models and Gaussian mixture regression (GMM-GMR) to analyse data from human demonstrations. For accurate learning, the raw data is first preprocessed by dynamic time warping (DTW). Dynamic movement primitives (DMP) then generate a human-like motion to a new goal, using the data processed by GMM-GMR. Covering induction, the summarization of demonstration data, and skill generalization, the results show that, compared with preprocessing the data by simple averaging, our method achieves task-specific generalization with smoother and more human-like trajectories.
{"title":"Dynamic Movement Primitives for Movement Generation Using GMM-GMR Analytical Method","authors":"Boyang Ti, Yongsheng Gao, Qiang Li, Jie Zhao","doi":"10.1109/INFOCT.2019.8711390","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8711390","url":null,"abstract":"Motion generalization is an effective way for robot leaner to learn from demonstration, especially they are set within a novel situation. However, as for learned skills, to generate humanoid and natural behaviour for robot is the key challenge in robot skill learning. In this paper, we proposed a method using the statistical method Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to analyze the data from human demonstration. For accurate learning, the raw data is pretreated by dynamic time warping (DTW). Dynamic movement primitives (DMP) aim to generate a human-like motion to a new goal, employing the data processed by GMM-GMR. Including induction, summarizing demonstration data and generalizing skill, the results, in comparison with Average method pretreating data, show that our method can achieve task-specific generalization with more smooth and human-like trajectory.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123975914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/infoct.2019.8711174
Yilan Xing, Jian-yi Li, Xing-hui Wang
In view of the difficulty of parking, there is an urgent need to build intelligent parking systems. In this paper, a wireless parking detector based on geomagnetic detection and NB-IoT technology is proposed. The parking detector consists of three parts: an STM32 microcontroller, a geomagnetic sensor, and an NB-IoT wireless module. The geomagnetic sensor measures the intensity of the surrounding magnetic field and sends it to the microcontroller. The microcontroller uses an algorithm to determine whether the parking space is occupied and then transmits the occupancy status to the backend management system in real time over NB-IoT.
{"title":"Research and Design of Parking Detector Based on NB-IoT and Geomagnetism","authors":"Yilan Xing, Jian-yi Li, Xing-hui Wang","doi":"10.1109/infoct.2019.8711174","DOIUrl":"https://doi.org/10.1109/infoct.2019.8711174","url":null,"abstract":"In view of the difficulty of parking, it is urgent to build an intelligent parking system. In this paper, a wireless parking detector based on geomagnetic detection and NB-IoT technology is proposed. The parking detector consists of three parts: the STM32 microcontroller, the geomagnetic sensor and the NB wireless module. Geomagnetic sensor collects the intensity of the magnetic field around, and then the magnetic field intensity is sent to the microcontroller. The microcontroller determines whether the parking space is occupied by the algorithm, and then transfers the occupancy of the parking space to the background management system in real time through NB.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"185 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120977916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8711039
Riza Dhiman, Vinay Chopra
Regression testing retests the components of a system to verify that, after modifications, defects have been removed from the affected software. Automation tools are required for this type of testing. This work compares manual slicing and automated slicing for test case prioritization, with the goal of detecting the maximum number of faults in a project to which changes have been made for a new version release. Slicing is a technique that divides the whole project function by function and detects the associated functions. MATLAB is used to evaluate the performance of the proposed and existing algorithms on a dataset of ten projects. Each project has seven functions, and four changes are defined for the regression testing. The simulation shows that, compared with manual test case prioritization, automated test case prioritization increases the fault detection rate and reduces execution time in regression testing.
{"title":"Novel Approach for Test Case Prioritization Using ACO Algorithm","authors":"Riza Dhiman, Vinay Chopra","doi":"10.1109/INFOCT.2019.8711039","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8711039","url":null,"abstract":"Regression testing is used to retest the component of a system that verifies that after modifications defects are removed from the in effected software. Automation tools are required for these types of testing. This work is based on manual slicing and automated slicing for test case prioritization to detect maximum number of faults from the project in which some changes are done for the new version release. The slicing is the technique which will divide the whole project function wise and detect associated functions. To test the performance of proposed and existing algorithm MATLAB is being used by considering the dataset of ten projects. Each project has seven functions and four numbers of changes are defined for the regression testing. In the simulation it is being analyzed that fault detection rate is increased and execution time is reduced with the implementation of automated test case prioritization as compared to manual test case prioritization in regression testing.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126483983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8710860
I. Koike, Kano Suzuki, T. Kinoshita
Queuing network techniques are effective for evaluating the performance of computer systems. We discuss a queuing network technique for computer systems with a finite input source. A finite number of terminals exist in the network, and after think-time at a terminal, a job moves to the server, which includes a CPU, I/O equipment, and memory. When the job arrives at the server, it acquires part of the memory and executes CPU and I/O processing in the server. After the job completes CPU and I/O processing, it releases the memory and returns to its original terminal. However, when the computer system includes the memory resource, the queuing network model has no product-form solution and exact solutions cannot be calculated. We propose an approximate queuing network technique for calculating the performance measures of computer systems with a finite input source on which multiple types of jobs exist. This technique divides the queuing network into two levels: an "inner level," in which a job executes CPU and I/O processing, and an "outer level," which includes terminals and communication lines. By dividing the network into two levels, we prevent the number of states of the network from increasing and can approximately calculate the performance measures of the network. We evaluated the proposed approximation technique through numerical experiments and clarified the characteristics of the system response time and the mean number of jobs in the inner level.
{"title":"Queuing Network Approximation Method for Evaluating Performance of Computer Systems with Finite Input Source","authors":"I. Koike, Kano Suzuki, T. Kinoshita","doi":"10.1109/INFOCT.2019.8710860","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8710860","url":null,"abstract":"Queuing network techniques are effective for evaluating the performance of computer systems. We discuss a queuing network technique for computer systems in finite input source. The finite number of terminals exist in the network and a job in the network moves to the server that includes CPU, I/O equipment and memory after think-time at the terminal. When the job arrives at the server, it acquires a part of memory and executes CPU and I/O processing in the server. After the job completes CPU and I/O processing, it releases the memory and goes back to its original terminal. However, when the computer system has the memory resource, the queuing network model has no product form solution and cannot be calculated the exact solutions. We proposed here an approximation queuing network technique for calculating the performance measures of computer systems with finite input source on which multiple types of jobs exist. This technique involves dividing the queuing network into two levels; one is „inner level„ in which a job executes CPU and I/O processing, and the other is „outer level„ that includes terminals and communication lines. By dividing the network into two levels, we can prevent the number of states of the network from increasing and approximately calculate the performance measures of the network. We evaluated the proposed approximation technique by using numerical experiments and clarified the characteristics of the system response time and the mean number of jobs in the inner level.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128995059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-03-01 | DOI: 10.1109/INFOCT.2019.8710865
Qiang Li, Yongsheng Gao, Boyang Ti, Jie Zhao
In this paper, a nonlinear controller for a robotic manipulator with unknown model parameters is proposed to achieve highly accurate trajectory tracking. Unknown parameters such as uncertain moment of inertia, uncertain manipulator geometry, unknown friction torque, unknown gravitational torque, and payload variation are addressed. Model-based control methods require accurate model parameters, which are difficult to obtain. To solve this problem, a model-error observer is proposed that observes the parameter error effectively. In the proposed control law, the model-error observer is adopted to handle the unknown model parameters, and the controller effectively addresses this limitation of model-based control methods. The robust performance of the control law is confirmed in simulations, and the results show accurate path tracking despite the friction and unknown model parameters.
{"title":"Model-Error-Observer-Based Control of Robotic Manipulator with Uncertain Dynamics","authors":"Qiang Li, Yongsheng Gao, Boyang Ti, Jie Zhao","doi":"10.1109/INFOCT.2019.8710865","DOIUrl":"https://doi.org/10.1109/INFOCT.2019.8710865","url":null,"abstract":"In this paper, a nonlinear controller for robotic manipulator with unknown model parameters is proposed to reach high accurate trajectory tracking. The unknown parameters such as uncertain moment of inertia, uncertain geometry of manipulator, unknown friction torque, unknown gravitational torque and payload variation are addressed. Model-based control methods require accurate model parameters, while it is difficult to get these parameters. To solve this problem, a model-error observer is proposed, and it observes the parameter error effectively. In the proposed control law, the model-error observer is adopted to handle unknown model parameters, and this controller solves the problem of model-based control methods effectively. The robust performance of the control law is confirmed in simulations, and the results show accurate path tracking in spite of the existing friction and unknown model parameters.","PeriodicalId":369231,"journal":{"name":"2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124591330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}