Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8988929
Zhenhong Liu, Hongfang Yuan, Huaqing Wang, Min Liu
Aiming at the problems that the traditional geodesic active contour (GAC) model is prone to boundary leakage and cannot segment adaptively, this paper proposes an improved GAC model and realizes automatic segmentation of the pulmonary artery in computed tomographic pulmonary angiography (CTPA) image sequences. First, the variable velocity C(I) replaces the constant velocity c of the traditional GAC model, and the improved model with C(I) segments the first frame of the CTPA sequence to obtain a convergent pulmonary artery contour. Second, the grayscale information of the target area is used to improve C(I) to V(I), making its direction variable as well. Finally, the improved GAC model with V(I) automatically segments the pulmonary artery in the subsequent images, where the initial contour of each image is the final contour of the previous one. These two strategies counter the model's tendency to over-segment and drive the curve to evolve adaptively, inward or outward, toward the target contour. Experimental results show that the proposed algorithm achieves automatic segmentation of the pulmonary artery with a high coincidence rate against physicians' manual segmentations.
Title: "Improved GAC Model-based Pulmonary Artery Segmentation of CTPA Image Sequence"
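As a minimal illustration of the idea (not the paper's exact formulation), a sign-switching speed term V(I) can be written as a function of the local gray level: positive inside the target gray range so the curve expands, negative outside it so the curve shrinks. The range bounds `lo`/`hi` and the function name are hypothetical:

```python
def adaptive_speed(intensity, lo, hi, scale=1.0):
    """Sketch of a variable-direction velocity V(I): expand (positive) when
    the local gray level lies in the target range [lo, hi], shrink
    (negative) otherwise, so the contour can converge from either side."""
    return scale if lo <= intensity <= hi else -scale
```

With a constant c the contour can only move in one direction, which is why the traditional model leaks through weak boundaries; making both the magnitude and the sign depend on I lets the curve settle onto the target from inside or outside.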
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989282
Deyu Zhang, Yaolin Wang, Yanhui Lv, Zhao-jing Liu
Topology control is one of the key technologies in wireless sensor networks (WSNs). To solve the problem of excessive energy consumption in single-hop communication between cluster heads and the coordinator after clustering, this paper presents a cluster head multi-hop topology control algorithm based on geographical location. The algorithm first divides the whole region into a number of radial fan-shaped sectors centered on the base station. Each cluster head determines its own sector from its geographical location and searches within that sector for a cluster head one hop closer to the base station, so that the cluster heads form a multi-hop backbone network for communicating with the base station. In this way, the energy burden on cluster heads far from the base station is alleviated and the network lifetime is prolonged. Simulation results show that the algorithm has clear advantages in balancing the energy consumption of network nodes and prolonging network lifetime.
Title: "A Cluster Head Multi-hop Topology Control Algorithm for WSN"
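The sector assignment and upstream-hop search described above can be sketched as follows (a simplified 2-D version; the coordinate representation, sector count, and nearest-neighbor tie-breaking are assumptions, not the paper's exact rules):

```python
import math

def sector_index(node, base, num_sectors):
    """Radial fan region of a cluster head, from its bearing to the base station."""
    angle = math.atan2(node[1] - base[1], node[0] - base[0]) % (2 * math.pi)
    return int(angle // (2 * math.pi / num_sectors))

def next_hop(head, heads, base, num_sectors):
    """Nearest cluster head in the same sector that is closer to the base
    station; None means this head communicates with the base directly."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    candidates = [h for h in heads
                  if h != head
                  and sector_index(h, base, num_sectors) == sector_index(head, base, num_sectors)
                  and dist(h, base) < dist(head, base)]
    return min(candidates, key=lambda h: dist(head, h), default=None)
```

Restricting the search to the head's own sector keeps the relay chain pointing roughly toward the base station, which is what allows the multi-hop backbone to relieve distant cluster heads.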
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989295
Sujuan Zhou, Luyue Lin, Hao Cai, Bo Liu, J. Meng
Moutan Cortex charcoal is a processed product of Moutan Cortex that is widely used to treat various hemorrhagic diseases. In this study, an SVM-based pharmacokinetic-pharmacodynamic model of Moutan Cortex charcoal was established by correlating drug concentration and pharmacodynamics with time. The relationship between the drug concentration and the pharmacodynamic index of the main components was then analyzed through a sensitivity analysis of the SVM using the MIV-SVM method. The results agree with those of chemical experiments; in this way, the pharmacodynamic substance basis of Paeonia suffruticosa charcoal was clarified.
Title: "Investigation into the Relationship between Pharmacokinetics and Pharmacodynamics of Moutan Cortex Charcoal Based on MIV-SVM"
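The MIV (mean impact value) sensitivity step can be sketched generically: each input dimension is perturbed up and down by a relative amount and the mean change in the model's prediction is recorded as that dimension's impact. The perturbation size and the toy model in the test are illustrative assumptions; in the paper the model would be the trained SVM:

```python
def mean_impact_values(model, samples, delta=0.1):
    """MIV sensitivity analysis sketch: for each input dimension, perturb it
    by +/-delta (relative), and average the resulting change in the model's
    prediction over all samples. Larger MIV = more influential input."""
    n_dim = len(samples[0])
    mivs = []
    for j in range(n_dim):
        diffs = []
        for x in samples:
            up = list(x); up[j] *= (1 + delta)
            dn = list(x); dn[j] *= (1 - delta)
            diffs.append(model(up) - model(dn))
        mivs.append(sum(diffs) / len(diffs))
    return mivs
```

Ranking inputs by MIV is what lets the authors connect individual chemical components (inputs) to the pharmacodynamic index (output) without opening the SVM black box.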
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989389
Renxia Wan, Kai Liu
To improve the global search ability of particle swarm optimization (PSO), a weighted aggregation degree based on a redefined similarity measure is constructed to describe the diversity of the population. The improved PSO also adjusts the particles' search space with an adaptive decision rule. Experimental analysis shows the effectiveness of the algorithm in terms of optimization ability, convergence speed and stability.
Title: "An Aggregation Degree Based PSO Algorithm"
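One simple way to quantify such an aggregation/diversity degree (a sketch, not necessarily the redefined similarity used in the paper) is the mean distance of the particles to the swarm centroid:

```python
import math

def aggregation_degree(swarm):
    """Mean Euclidean distance of particles to the swarm centroid; a value
    near zero signals that the swarm has aggregated and lost diversity,
    which is when an adaptive PSO would re-expand the search space."""
    dim = len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / len(swarm) for d in range(dim)]
    return sum(math.dist(p, centroid) for p in swarm) / len(swarm)
```

A PSO loop would evaluate this each iteration and widen the particles' velocity or position bounds whenever the value drops below a threshold, trading exploitation back for exploration.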
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989132
Bin Mu, Meng Dai
Mining valuable information from taxi trip data to recommend hotspot pick-up areas to taxi drivers has become a hot research problem, since taxis cruising the city waste large amounts of energy every day. Many existing methods simply cluster pick-up hotspot areas with various clustering algorithms, without further analysis of the differences between the hotspots. In this paper, we propose a novel taxi pick-up recommendation model that analyzes hotspot areas according to different factors, based on an improved DBSCAN algorithm. We conduct experiments on synthetic datasets and a real taxi trip dataset to illustrate the clustering results and to verify the efficiency of the algorithm and the precision of the recommendation model. The results show that the proposed algorithm automatically and effectively detects clusters with multiple density levels, and that the proposed taxi pick-up hotspot recommendation model achieves higher precision than other methods.
Title: "Recommend Taxi Pick-up Hotspots Based on Density-based Clustering"
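For reference, the plain DBSCAN that such models build on can be sketched as follows; the paper's improvement, detecting clusters at multiple density levels, is not reproduced here:

```python
import math

def dbscan(points, eps, min_pts):
    """Plain DBSCAN baseline: returns one cluster id per point, -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may later become a border point)
            continue
        labels[i] = cid               # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:       # noise reachable from a core point
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:    # j is itself a core point: expand
                queue.extend(nb)
        cid += 1
    return labels
```

A single global `eps` is exactly the limitation the abstract points at: pick-up hotspots in a dense downtown and a sparse suburb need different density thresholds, which is what the improved multi-density variant addresses.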
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989324
Junyu Lai, Kecheng Zhang, Jiaqi Tian, Han Xiao, Yingbing Sun
Network emulation is a vital testing and verification approach for networking protocols at all layers, as well as for application software, throughout a network's entire life-cycle from design to maintenance. It combines the merits of simulation and test-beds, balancing cost against accuracy. This paper introduces a promising cloud-based network emulation platform that leverages a set of novel virtualization and cloud technologies to bring flexibility, agility and scalability to the network emulation domain. To achieve high-fidelity (HiFi) emulation, external physical network nodes must be able to connect and communicate with the emulated nodes. The paper elaborates an innovative strategy for integrating outside physical nodes with the virtual nodes inside the platform, realizing fused virtual-physical network emulation. Functional tests indicate that the proposed strategy effectively bridges physical nodes into the virtual platform to achieve HiFi emulation. Performance evaluation further shows that the strategy uses the platform's limited computation and networking resources efficiently, achieving sufficient scalability and flexibility for typical emulation scenarios.
Title: "Towards Virtual and Physical Nodes Fused Network Emulation"
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989335
Cao Zhang, Wei-yu Dong, Yu Zhu Ren
In coverage-guided fuzzing of binaries, the main role of instrumentation is to feed back code coverage: for binary targets, instrumentation provides the coverage information that guides seed scheduling in the fuzzer. Current instrumentation optimization techniques mainly rely on the control flow graph (CFG) to select key basic blocks at the basic-block level, but their accuracy is limited, because the paths actually taken at run time may differ from the CFG generated in advance. This paper therefore focuses on indirect jumps, which cannot be accurately resolved in a static CFG, and on high-frequency basic blocks whose instrumentation can be optimized away. The proposed algorithm combines static and dynamic analysis to continuously adjust the selection of key basic-block nodes for instrumentation. Experiments verify that this instrumentation method effectively improves coverage and reduces overhead, providing effective guidance for fuzzing and reducing the fuzzer's false negatives.
Title: "INSTRCR: Lightweight instrumentation optimization based on coverage-guided fuzz testing"
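A toy version of the selection idea, under the assumption that the CFG maps every basic block to its successor list: a block whose only predecessor has a single successor needs no probe of its own (its execution is implied), while targets of indirect jumps are always instrumented because the static CFG cannot resolve them. This heuristic is an illustration, not the paper's algorithm:

```python
def select_instrumented_blocks(cfg, indirect_targets=()):
    """Sketch: choose which basic blocks receive coverage probes.
    cfg maps each block id to the list of its successor block ids."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    keep = set()
    for b in cfg:
        if b in indirect_targets:
            keep.add(b)               # statically unresolvable: always probe
        elif len(preds[b]) == 1 and len(cfg[preds[b][0]]) == 1:
            pass                      # implied by its sole predecessor's fall-through
        else:
            keep.add(b)
    return keep
```

Dynamic analysis would then refine this static choice at run time, e.g. by re-adding blocks whose coverage turns out not to be inferable along the paths actually taken.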
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989069
Xingyuan Ren, Lin Zhang, Kunpeng Xie, Qiankun Dong
Modern software systems generate large numbers of log messages every day. By analyzing these messages, which carry vital information such as exception reports, developers can manage and monitor software systems efficiently. Each log message in a log file consists of a fixed part (the template) and a variable part: the fixed parts of log messages of one event type are identical, while the variable parts differ. LKE (Log Key Extraction), a widely used log parser, finds the fixed parts efficiently thanks to a clustering strategy based on the weighted edit distance between log messages. However, calculating the weighted edit distance for large-scale log files is time-consuming. In this paper, we propose a parallel approach that uses a unique hierarchical index structure to calculate the weighted edit distance on a GPU (Graphics Processing Unit). A GPU offers high parallelism and suits such intensive computation, so the time required to process large-scale logs can be reduced by this approach. Experiments show that the LKE parser using the GPU to calculate the weighted edit distance achieves high efficiency and accuracy on the HDFS dataset and a marine information dataset.
Title: "A Parallel Approach of Weighted Edit Distance Calculation for Log Parsing"
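As a CPU reference for the distance being parallelized, the classic dynamic-programming formulation is shown below with uniform per-operation weights; LKE's actual position-dependent token weighting is not reproduced here:

```python
def weighted_edit_distance(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0):
    """Weighted edit distance between token sequences a and b: minimum total
    cost of insertions, deletions and substitutions turning a into b."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,       # delete a[i-1]
                          d[i][j - 1] + w_ins,       # insert b[j-1]
                          d[i - 1][j - 1] + sub)     # match / substitute
    return d[m][n]
```

The data dependency runs along the DP table's anti-diagonals, which is what a GPU kernel exploits: all cells on one anti-diagonal can be computed in parallel.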
The broadcast storm problem in mobile ad hoc networks can be effectively alleviated by a virtual backbone network built from a connected dominating set. However, existing methods for constructing connected dominating sets are not adapted to dynamic ad hoc networks. This paper proposes a connected dominating set-based, energy-efficient distributed routing algorithm. To select an appropriate relay forwarding node that reflects the energy efficiency of a dominating node and reduces its contribution to the overall communication overhead, the additional coverage, residual energy and mobility of nodes are considered together. To optimize the construction of the network topology, the information entropy method is used to quantify each factor's weight. Numerical results show that, compared with the existing algorithm, the proposed algorithm significantly reduces the construction overhead of the network structure, guarantees network connectivity, improves energy efficiency and extends the network lifetime.
Title: "Connected Dominating Set-based Energy-efficient Distributed Routing Algorithm" (Wanfeng Mao, Wei Feng, Guanqun Zhang, Kunfu Wang, Yiyun Zhang, Xing Li; Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989323)
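The entropy-weight step can be sketched as follows: each column of the decision matrix is one factor (e.g. additional coverage, residual energy, mobility), each row one candidate node; factors whose values vary more across candidates carry more information and receive larger weights. The normalization details are assumptions, not the paper's exact formulas:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method sketch: weight of factor j (column) grows with
    how much its values diverge across the candidate nodes (rows)."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # normalized entropy: 1 when the column is uniform, lower when it varies
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        divergence.append(1 - entropy)
    s = sum(divergence)
    return [d / s for d in divergence]
```

A factor that is identical for every candidate cannot help rank them, so it correctly receives (near-)zero weight; the relay node would then be chosen by the weighted sum of the factor scores.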
Pub Date: 2019-08-01 | DOI: 10.1109/CCET48361.2019.8989397
Baoling Qin, Xiaowei Lin, F. Zheng, Xinhong Chen, Chenglin Huang, Riqiang Pan
NB-IoT technology is currently developing rapidly, producing large volumes of data to be exchanged, analyzed and stored. This poses many challenges to the traditional computing model, cloud computing, and exposes its shortcomings in big-data computation and analysis. On this basis, an NB-IoT model based on fog computing is proposed, and its main technologies and applications are analyzed, with the goals of reducing the latency of NB-IoT data computation, improving the response speed of application systems, saving network bandwidth, accelerating data exchange, ensuring the quality of data analysis and improving the efficiency of data storage. The paper also studies how to apply the theory and technology in practice and analyzes the application of the main fog-computing-based NB-IoT technologies.
Title: "Research and Application of Key Technology of NB-IoT Based on Fog Computing"