To address the untimeliness, inaccuracy and instability of spatial registration results in augmented reality, this paper proposes an improved algorithm based on FAST-ER (Features from Accelerated Segment Test) and SURF (Speeded-Up Robust Features). The method improves the recursive adjustment of the decision tree used during feature point extraction, overcoming the heavy computational load and ineffective feature point extraction of the traditional FAST-ER algorithm. After the camera's location parameters are obtained, the virtual model is rendered into the real scene with OpenGL to realize virtual-real fusion. Experimental results show that the proposed algorithm processes complicated natural images in a short time and adapts to complex outdoor environments under illumination changes, scale changes and rotation, offering relatively high timeliness and robustness.
{"title":"Application of scene recognition technology based on fast ER and surf algorithm in augmented reality","authors":"Xiangjie Li, Xuzhi Wang, Cheng Cheng","doi":"10.1049/CP.2017.0125","DOIUrl":"https://doi.org/10.1049/CP.2017.0125","url":null,"abstract":"In consideration of problems with augmented reality, including untimeliness, inaccuracy and instability of spatial registration results, we proposes an improved algorithm based on FAST-ER (Features from Accelerated Segment Test) and SURF (Speeded-Up Robust Features) in this paper, which does not only improve recursive adjustment methods for decision trees during feature point extraction, but also overcome problems of traditional FAST-ER algorithms such as heavy computation load and ineffective feature point extraction. After information about location parameters of a camera is obtained in this paper, the virtual model is rendered into real scenes with OpenGL to realize virtual-real fusion. The experimental results suggest that it costs short time to process complicated natural images with the algorithm proposed in this paper. In case of any illumination change, scale change, rotation in scenes, it is adaptable to complex outdoor environment, showing relatively high timeliness and robustness.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121424638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The promotion of autonomous vehicles is a decisive step toward smart urban planning. Machine vision techniques applied in self-driving cars help the car detect and track other vehicles, pedestrians, lanes and traffic signs on the road. This paper proposes an algorithm to track vehicles with an adaptively changing scale. First, a kernelized correlation filter tracker is used to obtain vehicle candidates at each frame. Next, an array of particles is created to represent different scales. Further, a new image feature representation based on an integrated color histogram is proposed and inserted into the update scheme of the particle filter. Last, a smoothing method gives the scale change its own memory to prevent violent variation. In the experiments, several widely used trackers are analyzed for comparison. The results show that, in terms of both accuracy and robustness, the proposed algorithm performs better than the other algorithms, achieving the smallest error relative to the benchmark data.
{"title":"Adaptively self-driving tracking algorithm based on particle filter","authors":"Shiyu Yang, K. Hao, Yongsheng Ding, Jian Liu","doi":"10.1049/CP.2017.0103","DOIUrl":"https://doi.org/10.1049/CP.2017.0103","url":null,"abstract":"The promotion of autonomous vehicles is a decisive step to implement smart urban planning. The machine vision technique applied in the self-driving car can facilitate the car detecting and tracking other vehicles, pedestrians, lanes and traffic signs on the road, etc. This paper proposed an algorithm to track the vehicle with the adaptively changed scale. First, we use the tracker to obtain the vehicle candidates at each frame based on kernelized correlation filter. Next, an array of particles was created to represent different scales. Further, a new image feature representation based on integrated-color-histogram was proposed to insert the updated scheme concerning the particle filter algorithm. Last, we used one smooth method to make the scales change have its own memory to prevent it from violent variation. In the experiment section, we have chosen some pervasive tracker to analyze. The results showed that in the aspects of both accuracy and robustness, our proposed algorithm worked more properly compared with the other algorithm, by virtue of its minimal error relative to the data benchmark.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132524280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Head-Related Transfer Function (HRTF) is the key to many applications in spatial audio. Its large amount of data makes real-time implementation difficult, so reducing HRTF data is necessary and important. In this paper, we apply a newly developed signal decomposition theory, named Adaptive Fourier Decomposition (AFD), to decompose and compress HRTF data, and compare it with the convergence property of the traditional Fourier decomposition and the compression property of PCA. Simulation results show that the proposed AFD-based decomposition and compression method achieves an evident performance improvement for HRTF.
{"title":"The decomposition and compression of HRTF based on adaptive fourier decomposition","authors":"Yong Fang, Mengjie Shi, Qinghua Huang, Liming Zhang","doi":"10.1049/CP.2017.0120","DOIUrl":"https://doi.org/10.1049/CP.2017.0120","url":null,"abstract":"Head-Related Transfer Function (HRTFS) is the key to many applications in spatial audio. Its large amount of data makes it difficult to make real-time implementation. Reducing HRTF data is necessary and important. In this paper, we apply a new developed signal decomposition theory, named Adaptive Fourier Decomposition (AFD), to decompose and compress HRTF data, comparing with traditional Fourier's convergence property and PCA's compression property. Simulation results show that the proposed AFD-based decomposition and compression method enables evident performance improvement for HRTF.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133774916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video plays an important role in our daily life, but for most video websites such as YouTube, classifying the millions of videos uploaded every day remains a problem. There is therefore an urgent need for a classification algorithm that accurately assigns labels to those videos. In this paper, we use the Google Cloud Platform as our computing environment and choose the new and improved YT-8M V2 as the dataset. On this basis, we compare an estimation of distribution algorithm with a recurrent neural network algorithm, track their accuracy, and determine which is more suitable for this problem.
{"title":"A comparative study on large-size video indexing","authors":"Ziyue Luo, Xiaoging Yu, Linxia Zhong","doi":"10.1049/CP.2017.0114","DOIUrl":"https://doi.org/10.1049/CP.2017.0114","url":null,"abstract":"Video plays an important role in our daily life. But in most video websites such as YouTube, it is always a problem to classify millions of videos that are updated every day. So there is an urgent need to develop a classification algorithm to accurately assign labels to those videos. In this paper, we use Google Cloud Platform as our calculating environment and choose the new and improved YT-8M V2 as dataset. Based on these, we compare the estimation of distribution algorithm and the recurrent neural network algorithm, trace their accuracy, and finally find the more suitable one for this problem.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129154951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor networks (WSNs) used in distributed surveillance commonly require network-wide time synchronization. Most existing time synchronization protocols assume that the clock of each node can be modeled by a linear equation at + b, with t the universal time, a the clock drift (skew) coefficient, and b the clock offset. Some protocols assume a = 1, so the synchronization target is only the offset b; others allow a to deviate from one, making both a and b synchronization targets. In the latter case the synchronization algorithm becomes complicated, requiring either involved computation or considerable memory. For example, the recently proposed Average TimeSync (ATS) protocol demands expensive memory use within each WSN node. In this work a memory-lite time synchronization (MLTS) protocol is proposed, which synchronizes both drift and offset simply by sending synchronization packets that include a small number of past time stamps received by the sender node. Both simulation and hardware experiments show that the proposed memory-lite protocol still achieves effective and robust distributed time synchronization, at the slight price of a somewhat slower synchronization speed.
{"title":"A memory-lite time synchronization protocol for wireless sensor networks","authors":"Jin He, G. Shi, Hongtao Chen","doi":"10.1049/CP.2017.0118","DOIUrl":"https://doi.org/10.1049/CP.2017.0118","url":null,"abstract":"Wireless sensor networks (WSNs) used in distributed surveillance commonly requires network-wide time synchronization. Most existing time synchronization protocols assume that the clock with each node can be modeled by a linear equation at + b with t being the universal time, a the clock drift (skew) coefficient, and b the clock offset. Some protocols assume that a = 1, hence the synchronization target is the parameter b while others assume that a could deviate from one and both parameters a and b are the synchronization targets. In the latter case algorithmic synchronization details become complicated, requiring either involved computation or memory use. For example, the recently proposed Average TimeSync (ATS) protocol demands expensive use of memory within each WSN node. In this work a memory-lite time synchronization (MLTS) protocol is proposed, which can achieve synchronization of both drift and offset just by sending synchronization packet including the past time stamps received by the sender node, but the number of such past stamps is minor. Both simulation and hardware experimental results justify that the proposed memory-lite protocol is still capable of effective distributive time synchronization with robustness but at the slight price of a little slowed down synchronization speed.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114546581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
After ten years of development, ILP has been widely used in the field of data mining and remains a hot research topic. However, ILP also has notable drawbacks: it is an NP problem and is typically implemented as a stand-alone algorithm, so its efficiency is relatively low when the data volume is large. To solve this problem, this article proposes a new expression of frequent patterns together with a heterogeneous knowledge base that depends on ontology and knowledge. Based on these two improvements, a parallel implementation of ILP can be realized.
{"title":"Research on parallel frequent pattern mining based on ontology and rules","authors":"Chenxi Yi, Ming Sun","doi":"10.1049/CP.2017.0109","DOIUrl":"https://doi.org/10.1049/CP.2017.0109","url":null,"abstract":"After ten years of development, ILP has been widely used in the field of data mining, it is also a hot topic in today's research. But ILP also has many disadvantages, such as it is a NP problem, but also a stand-alone algorithm, so that when the data is large, the efficiency is relatively low. To solve this problem, in this article, the new expression of frequent patterns as well as the heterogeneous knowledge base depending on ontology and knowledge are proposed. Based on the above two improvements, the parallel implementation of ILP can be realized.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127254769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ultimate goal of this paper is to train a model on the given administrative data to predict, as accurately as possible, the monthly amount of each administrative item in different years and different regions. We propose a novel approach for quantity forecasting of administrative data, named parallel random forest (parallel RF). First, we collect administrative data from different online systems using a Java program and store it in MongoDB. Then we extract key information from these data and assign different numbers to different administrative areas and item names. Next, as the core of the whole method, we train the prediction model by implementing the random forest method on Hadoop MapReduce. Finally, we compare the execution efficiency and prediction accuracy with standard algorithms such as SVM and gradient boosting. The experiments show that the accuracy and efficiency of our method are much better than those of the other algorithms, and that our method is more reliable and useful.
{"title":"Quantity forecast of administrative items based on parallel random forest","authors":"Linxia Zhong, W. Wan, Ziyue Luo, Xiaodong Zhang","doi":"10.1049/CP.2017.0112","DOIUrl":"https://doi.org/10.1049/CP.2017.0112","url":null,"abstract":"The ultimate goal of this paper is to train a model based on the given administrative data to predict the amount of each administrative item of month in different years and different regions as accurate as possible. In this paper, we propose a novel approach for quantity forecast of administrative data which is named after parallel random forest (parallel RF). Firstly, we collect administrative data from different online systems using java program and store it in MongoDB. Then we extract key information from these data and assign different numbers to different administrative areas and item names. Next, as the core of whole method, we train the prediction model by implementing the random forest method on Hadoop Map-Reduce. Finally, we compare the execution efficiency and prediction accuracy with other standard algorithms such as SVM and gradient boosting. The experiment shows that the accuracy and efficiency of our method is much better than other algorithms and our method is more reliable and useful.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125774430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the problem that pedestrian tracking algorithms are prone to tracking errors against complex backgrounds, this paper proposes a pedestrian tracking algorithm based on human head detection that adapts to pedestrian tracking in many complex scenes. First, a foreground segmentation technique is used to extract the motion foreground quickly. Human-body negative samples are added to the Adaboost classifier, and Haar-like features are used to detect heads within the motion foreground. A target tracking chain is then established from the detected heads to track the walking pedestrians. The experimental results show that the proposed algorithm reduces the false detection rate and the missed detection rate of head detection, and improves the robustness of pedestrian tracking in many complex scenes.
{"title":"A pedestrian tracking algorithm based on background unrelated head detection","authors":"Yibing Zhang, T. Fan","doi":"10.1049/CP.2017.0128","DOIUrl":"https://doi.org/10.1049/CP.2017.0128","url":null,"abstract":"Aiming at the problem that pedestrian tracking algorithm is prone to target tracking error in complex background, this paper proposes a pedestrian tracking algorithm based on human head detection to adapt to pedestrian tracking in many complex scenes. Firstly, the foreground segmentation technique is used to extract the motion foreground quickly. In the Adaboost classifier, the human body negative sample is added, and the Haar-like feature is used to detect the head on the basis of the movement foreground. The target tracking chain is established by detecting the head Walking tracker. The experimental results show that the algorithm proposed in this paper reduces the false detection rate and missed detection rate of the head, and improves the robustness to pedestrian tracking in many complex scenes.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125878319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smartphones and mobile phones are rapidly becoming the primary computing and communication devices in everyday life. This research introduces several indices of dynamic interaction originally used for detecting interactions among wildlife animals, and applies them to explore human interaction using daily smartphone GPS data. We implement these indices in the statistical software R and obtain simulation results. In addition, we put forward a method to handle gaps in the GPS data and visualize the GPS data on 3D maps.
{"title":"Modeling the dynamic social relations of citizens based on daily GPS data","authors":"Chi Yuan, R. Ahas, A. Aasa, Xiaoging Yu, Qiyun Sun","doi":"10.1049/CP.2017.0110","DOIUrl":"https://doi.org/10.1049/CP.2017.0110","url":null,"abstract":"Smartphones or mobile phones are rapidly becoming the primary computer and communication device in everybody's daily lives. This research introduced some indices of dynamic interaction for detecting the wildlife animals. We applied the indices to explore the mankind interaction using smartphone daily GPS data. We program these indices in the statistical software R and got some simulation results. Besides, we put forward some method to solve the GPS data gaps problem and visualized the GPS data on 3D maps.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123075321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For unstructured roads, straight road boundaries can be detected easily, but complicated roads cannot. We put forward an unstructured road detection method based on contour selection. First, the Canny edge detector is applied to detect all edges in the image, and dilation is used to repair broken edge lines. Second, the Hough transform is used to detect straight lines and a contour detection function is used to extract edge contours. The straight lines are then matched against the contours. Finally, the contour that receives the most votes for its degree of coincidence with the lines is selected as the road edge contour. Experimental results show that road detection under complicated background environments is more effective with this method than with the traditional straight-line detection method.
{"title":"Unstructured road detection based on contour selection","authors":"Wang Xiang, Zhang Juan, Fang Zhijun","doi":"10.1049/CP.2017.0106","DOIUrl":"https://doi.org/10.1049/CP.2017.0106","url":null,"abstract":"In view of the unstructured road, a linear path can be detected easily, but complicated road cannot be detected easily. We put forward the unstructured road detection method based on contour selection. Firstly, the canny edge detector is adopted to detect all edges in the picture. The expansion of processing is used to repair broken line. Secondly, we use the Hough transform to detect linear and contour detection function detecting edge profile. Then we match straight lines and contour. Finally, the vote for the best results of coincidence degree is the road on the edge of the contour. The experiment result proves that the road detection under complicated background environment has the better effectiveness than the traditional linear path detection method.","PeriodicalId":424212,"journal":{"name":"4th International Conference on Smart and Sustainable City (ICSSC 2017)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132435286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}