Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846537
Zhongyun Jiang
Student activity trajectory analysis and attendance management are important components of university student administration. Given the low efficiency and high cost of existing approaches, a smart management system based on Internet of Things (IoT) technology is needed. Radio frequency identification (RFID) and wireless sensor network technology capture the trajectories of students' activities, providing analysis data for the attendance management system. The IoT-based student attendance management system can record and analyze students' activity trajectories and track students' status in real time, which helps improve the quality of teaching and student management.
Title: Analysis of student activities trajectory and design of attendance management based on internet of things
Published in: 2016 International Conference on Audio, Language and Image Processing (ICALIP)
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846668
Gao Xiang, Zhu Qiuyu, Wang Hui, Chen Yan
This paper presents a face recognition system based on a variant of the Local Binary Patterns Histogram (LBPH). A regression-of-local-binary-features method with very low computational complexity locates the facial landmarks, and the trained landmark points are used to align the face before feature extraction. By computing the LBPH of these landmark points and their neighborhood pixels, effective facial features are extracted for recognition. The method both speeds up the LBPH computation and improves the recognition rate. Finally, experimental results of face recognition with this method are presented.
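The core LBPH operation the abstract describes can be sketched as follows. This is a minimal illustration of the classic 8-neighbour LBP code and its 256-bin histogram, not the paper's landmark-based variant; the patch layout and function name are ours.

```python
def lbp_histogram(patch):
    """patch: 2-D list of grayscale values; returns a 256-bin LBP histogram."""
    h = [0] * 256
    rows, cols = len(patch), len(patch[0])
    # offsets of the 8 neighbours, clockwise starting from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            center = patch[r][c]
            code = 0
            for dr, dc in offs:
                # each neighbour >= center contributes a 1 bit to the code
                code = (code << 1) | (1 if patch[r + dr][c + dc] >= center else 0)
            h[code] += 1
    return h
```

Concatenating such histograms over small regions around each landmark yields the feature vector used for matching; restricting the regions to landmark neighborhoods (rather than a dense grid) is what gives the paper's speed-up.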
Title: Face recognition based on LBPH and regression of Local Binary features
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846644
Deke Kong, Xuzhi Wang, Linxia Zhong, Yangyang Sha, Zhongbin Li
With the development of Internet technology, dynamic password tokens have gained increasing favor in identity authentication thanks to their good security and ease of operation. At present, dynamic password products based on foreign SHA-series algorithms are widely used in China, which poses a potential threat to the localization of information security. Developing independent products based on China's state cryptographic algorithms would better safeguard encryption products. This paper adopts time-synchronized dynamic password authentication and designs a dynamic password token using the state SM3 hash algorithm. Experimental results show that the method offers high security and practical value.
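A time-synchronized dynamic password of the kind described above works like RFC 6238 TOTP: both sides derive the same code from a shared secret and the current time window. The sketch below uses HMAC-SHA-256 as a stand-in, since Python's `hashlib` has no SM3; in the paper's design the SM3 hash would take its place, and swapping `digestmod` is the only change needed.

```python
import hashlib
import hmac
import struct
import time

def dynamic_password(secret: bytes, t=None, step=60, digits=6,
                     digestmod=hashlib.sha256):
    """Time-synchronized one-time password (RFC 6238 style).
    SHA-256 stands in for SM3 here, which hashlib does not provide."""
    if t is None:
        t = time.time()
    counter = struct.pack(">Q", int(t) // step)   # shared time counter
    mac = hmac.new(secret, counter, digestmod).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Any two parties holding the secret and roughly synchronized clocks compute the same 6-digit code within a 60-second window, which is what makes the token verifiable without transmitting the secret.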
Title: Dynamic password token based on SM3 algorithm
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846627
Hongbin Yang, Xianyang Liu, Shenbo Chen, Zhou Lei, Hongguang Du, C. Zhu
Spark has become the first choice of distributed computing framework for big data processing. Its biggest highlight is in-memory computation on large clusters, which suits iterative and interactive workloads. However, straggler machines can seriously degrade its performance. Spark's current remedy is speculative execution, which selects slow tasks and resubmits them, but it has two deficiencies: first, it uses the median task time directly to judge whether a task is abnormal, which can be misleading in practice; second, backup tasks are added to the task queue without accounting for the presence of straggler machines. Both deficiencies can further extend a job's execution time. We therefore design an improved speculative strategy, Multiple Phases Time Estimation (MPTE), which greatly reduces the impact of straggler machines. MPTE selects slow tasks using the remaining time estimated over multiple phases and improves the task scheduler for backup-task placement. Experimental results show that MPTE improves the accuracy of deciding whether to run a speculative copy of a task by about 20% compared with Spark's native scheduler.
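The difference between the two detection rules can be made concrete. The sketch below contrasts a plain elapsed-time median test with a remaining-time estimate of the kind MPTE uses: a task is flagged only if its *projected finish* is late, not merely because its elapsed time exceeds the median. The progress model, slack factor, and function names are illustrative assumptions, not the paper's formulas.

```python
def median(xs):
    """Median of a non-empty list."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def flag_stragglers(tasks, slack=1.5):
    """tasks: list of (elapsed_seconds, fraction_done).
    Linearly extrapolates each task's total runtime and flags those whose
    estimate exceeds slack * the median estimate."""
    est_total = [e / max(p, 1e-9) for e, p in tasks]
    cutoff = slack * median(est_total)
    return [i for i, t in enumerate(est_total) if t > cutoff]
```

Under a pure elapsed-time median rule, a task that is 50% done after 10 s looks identical to one that is 10% done after 10 s; extrapolating remaining time separates them, which is the intuition behind MPTE's multi-phase estimate.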
Title: Improving Spark performance with MPTE in heterogeneous environments
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846562
Jing Yin, X. Yang
The face is an important biometric medium in computer vision, with wide applications in areas such as video surveillance, animation, games, and security and counter-terrorism; building vivid, highly visible 3D face models has become one of the challenging topics in the field. This paper first uses Zhongxing-micro ZC301P cameras to build a binocular stereo vision system for capturing images. After camera calibration and binocular calibration, three-dimensional facial data are extracted with the OpenCV computer vision library, and a preliminary 3D face model is reconstructed with DirectX. Following this reconstruction pipeline, 3D facial reconstruction software was designed and developed. This work lays the foundation for the next step: obtaining clearer and more visually convincing 3D faces.
Title: 3D facial reconstruction based on OpenCV and DirectX
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846531
W. Jiang, Chang Chen, Yan Cai
This paper concludes that traditional fireman uniforms fall short in practice. To improve firemen's personal safety, a multi-individual, multi-parameter, multi-sensor intelligent physical warning system for firemen (IPESF) is proposed. The structure of the system platform and the design of each module are introduced in detail. Corresponding software developed for the monitoring and display terminal can check the fire environment, the firemen's positions, and their crucial physiological information so that warnings can be issued. Moreover, a suitable data processing algorithm is selected to ensure accuracy while human-body data are transmitted, avoiding the loss of the optimal rescue window due to delayed decisions. Firemen's personal safety is thereby greatly enhanced. Finally, an experimental test environment was built; the results verify that the system offers low power consumption, a low transmission error rate, and strong anti-interference.
Title: The design on intelligent physical warning system for fireman
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846524
Liu Yue
Detecting the mutation information in the traveling wave is the key to locating ground faults on high voltage direct current (HVDC) transmission lines. Since the fault voltage traveling wave is essentially statistically independent, a fault location method based on the fast independent component analysis (FastICA) algorithm is proposed in this paper. Following the single-ended traveling-wave location principle, FastICA isolates the fault voltage traveling-wave feature signals from multi-channel HVDC line voltage signals; the arrival times of the first and second wave heads at the measurement point are identified, and the polarities of the two wave heads are distinguished. An HVDC transmission system model built in Matlab simulates a variety of line-to-ground fault types. The results show that the FastICA-based method isolates the fault voltage traveling-wave feature signals effectively and extracts the mutation information accurately, enabling fault location.
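Once the feature signal is separated, single-ended location reduces to finding the arrival times and polarities of the first two wave heads. The sketch below detects them with a simple sample-difference threshold; the threshold, dead time, and function name are our illustrative assumptions, and the paper applies this kind of detection to the FastICA-separated component, not the raw measurement.

```python
def wave_heads(signal, thresh, n=2, dead=5):
    """Return up to n (index, polarity) pairs where the sample-to-sample
    change first exceeds thresh. `dead` samples are skipped after each
    detection so one wavefront is not counted twice."""
    heads, i = [], 1
    while i < len(signal) and len(heads) < n:
        d = signal[i] - signal[i - 1]
        if abs(d) >= thresh:
            heads.append((i, 1 if d > 0 else -1))
            i += dead                 # skip past this wavefront
        else:
            i += 1
    return heads
```

With the two arrival indices t1 and t2 and the wave propagation speed v, the single-ended distance estimate is d = v * (t2 - t1) / 2, with the relative polarity telling whether the second head is the reflection from the fault point or from the remote terminal.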
Title: Research on HVDC single-ended fault traveling wave location method based on FastICA
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846525
Qiuxing Chen, Lixiu Yao, Jie Yang
With the rapid development of computer technology and network communication, short text data have increased enormously. Classifying short text snippets is a great challenge due to their limited semantic information and high sparseness. This paper proposes an improved short text classification method based on the Latent Dirichlet Allocation (LDA) topic model and the K-Nearest Neighbor (KNN) algorithm. The generated probabilistic topics make the texts more semantically focused and reduce sparseness. In addition, we present a novel topic similarity measure built on the topic-word matrix and the relationship between the discriminative terms of two short texts. A short text dataset for experimental validation was constructed by crawling posts from the Sina News website. Extensive comparative experimental results show the effectiveness of the proposed method.
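The LDA-plus-KNN pipeline can be sketched once each text is represented by its inferred topic distribution. The similarity below is plain cosine over topic vectors, a simpler stand-in for the paper's topic-word-matrix measure; the corpus layout and function names are ours, and the LDA inference producing the vectors is assumed to have run already (e.g. with gensim).

```python
from collections import Counter
from math import sqrt

def cosine(p, q):
    """Cosine similarity between two topic-probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def knn_label(query, corpus, k=3):
    """corpus: list of (topic_vector, label) training pairs.
    Majority vote over the k training texts most similar to `query`."""
    ranked = sorted(corpus, key=lambda tv: cosine(query, tv[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Because a 50-topic vector is dense where a bag-of-words vector for a 10-word snippet is almost entirely zero, neighbors found in topic space are far more stable, which is exactly the sparseness reduction the abstract claims.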
Title: Short text classification based on LDA topic model
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846656
Qiyun Sun, W. Wan, Xiaoqing Yu
With the maturity of various 3D modeling and animation software and the emergence of virtual reality devices, virtual reality technology has received growing attention and application. In this paper, 3ds Max and Esri CityEngine are used to create the 3D models and animation, which are then imported into Unity 3D to simulate an escape scene. The simulation can be displayed and interacted with through a virtual reality head-mounted display, the Oculus Rift DK2, giving users a direct feel for and experience of the proposed escape system.
Title: The simulation of building escape system based on Unity3D
Pub Date: 2016-07-11 · DOI: 10.1109/ICALIP.2016.7846634
Zhikui Luo, Y. Wan
There are many methods for compressing color images and most of them are lossy. However, in many important situations lossless color image compression is irreplaceable. In this paper, we propose an efficient lossless color image compression framework. An RGB image is first decorrelated via a reversible color transform. In the new color space, the color components are directly downsampled and then interpolated back to the original size. We subtract the interpolated components from their original counterparts to obtain prediction errors, which together with the color subcomponents are compressed via Huffman coding. At the decoder, the exact reverse process reconstructs the original image. Experimental results show that the proposed method achieves overall better compression than the well-known CALIC algorithm and other popular methods used in the TIFF and PNG formats.
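The pipeline hinges on the color transform being *exactly* invertible in integers, otherwise the scheme would not be lossless. The abstract does not name the transform, so the sketch below uses the lossless reversible color transform (RCT) from JPEG 2000 as a concrete stand-in: integer arithmetic forward and back, bit-exact.

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform: RGB -> (Y, Cb, Cr).
    >> 2 is floor division by 4, which the inverse undoes exactly."""
    y = (r + 2 * g + b) >> 2
    return y, b - g, r - g

def rct_inverse(y, cb, cr):
    """Exact integer inverse of rct_forward."""
    g = y - ((cb + cr) >> 2)
    return cr + g, g, cb + g        # (r, g, b)
```

The floor shift discards two bits in the forward direction, but because Cb and Cr retain the differences b-g and r-g, the inverse recovers g (and hence r and b) without error; a floating-point transform such as the standard YCbCr matrix would not round-trip like this.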
Title: An efficient framework for lossless color image compression