A Design of CAN, LIN Bus Test Board
Feng Luo, P. Wei
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947008
This paper proposes a CAN/LIN bus test board design based on the STM32. It analyzes the need for control commands sent by a host computer to be forwarded to bus-controlled components during automotive testing, and the shortcomings of existing PC-CAN/LIN adapter boards. The authors then design a function board containing a serial port, an RS232 module, a CAN transceiver, and a LIN transceiver, and write the accompanying embedded software. Testing shows that the board can receive messages from the computer, convert them into bus messages, and transmit them onto the bus, enabling bus testing.
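The host-to-bus conversion path the board performs can be sketched in Python. The ASCII command format (`CAN:<hex id>:<hex payload>`) and the packed frame layout below are hypothetical — the paper does not specify its serial protocol; on the real board, the STM32 firmware would hand the resulting frame to a CAN mailbox.

```python
import struct

def parse_serial_command(line: str):
    """Parse a hypothetical ASCII command like 'CAN:1A3:DEADBEEF' into
    a (bus name, arbitration id, payload bytes) tuple."""
    bus, ident, payload = line.strip().split(":")
    return bus, int(ident, 16), bytes.fromhex(payload)

def build_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame descriptor: 32-bit id, 1-byte DLC, and the
    data padded to 8 bytes (an illustrative layout, not the STM32 register map)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(">IB8s", can_id, len(data), data.ljust(8, b"\x00"))

bus, can_id, payload = parse_serial_command("CAN:1A3:DEADBEEF")
frame = build_can_frame(can_id, payload)
```

A LIN command would follow the same parse-then-pack pattern with the LIN frame layout substituted.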
Interference Alignment Algorithm for High-Speed Railway Wireless Communication Based on Mobile User Classification
Ziwen Tang, Jie Sheng, Cheng Wu, Yiming Wang
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947246
As high-speed railways continue to accelerate and carrying capacity increases, the service quality experienced by rail-transit wireless communication users has declined under the combined impact of train speed and user density. This paper proposes a user-classification-based interference alignment algorithm for high-speed railway wireless communication. Users on the train are divided into central users and edge users by mobility prediction, and interference management is then applied to each user type. Simulation results show that mobility-prediction-based user classification accurately tracks the trend of user types, aids interference management, and significantly improves the performance of the high-speed railway communication network.
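The central/edge split driven by mobility prediction can be illustrated with a minimal sketch. The linear motion model, the one-second horizon, and the 0.8-radius edge threshold are all assumptions for illustration; the paper's actual predictor and thresholds are not given here.

```python
def predict_position(pos: float, velocity: float, horizon: float) -> float:
    """Linear mobility prediction: position after `horizon` seconds."""
    return pos + velocity * horizon

def classify_users(users, cell_center, cell_radius, horizon=1.0, edge_fraction=0.8):
    """Label each user 'central' or 'edge' by its predicted distance from the
    serving cell center (illustrative thresholds, not from the paper)."""
    labels = {}
    for name, (pos, vel) in users.items():
        future = predict_position(pos, vel, horizon)
        dist = abs(future - cell_center)
        labels[name] = "edge" if dist > edge_fraction * cell_radius else "central"
    return labels

labels = classify_users(
    {"u1": (100.0, 80.0), "u2": (480.0, 80.0)},  # position (m), velocity (m/s)
    cell_center=250.0, cell_radius=300.0)
```

Predicting ahead of time means users moving toward the cell edge can be reclassified before their channel degrades, which is what makes the subsequent interference management proactive.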
Implementation of Encryption and Decryption Algorithms for Security of Mobile Devices
B. Varun, V. AbhishekM., A. Gangadhar, U. Purushotham
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947111
Progress in mobile communication and VLSI technology has aided the development of smart devices. These devices process information of various formats and sizes in a limited amount of time. Because this information is stored either on the device or in the cloud, a methodology is needed to process and secure the data. Implementing new algorithms to secure information is always of immense interest: such algorithms improve the performance of smart devices and enable better human-machine interaction. Generally, symmetric and asymmetric approaches are used to secure data against unauthorized users and attacks, with the choice of algorithm depending on the processing delay and complexity involved. This paper proposes a novel algorithm to secure data against attacks, implementable on various platforms. Experimental results demonstrate an improvement of 10% for contacts and 15% for image encryption compared to conventional approaches.
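The symmetric pattern the abstract mentions — one shared key encrypts and decrypts — can be shown with a standard-library stream-cipher sketch. This is purely illustrative and is *not* the paper's algorithm, nor is it secure for production use (a vetted cipher such as AES-GCM should be used instead); it only demonstrates the keystream-XOR structure common to symmetric stream encryption.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter blocks.
    Illustrative only -- not a vetted cipher construction."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream encryption; applying the same call decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = xor_crypt(b"secret-key", b"nonce123", b"contact: Alice +1-555-0100")
pt = xor_crypt(b"secret-key", b"nonce123", ct)
```

The same-call-decrypts property is why symmetric schemes suit resource-limited smart devices: encryption and decryption share one code path and one key.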
Overview of Research on Broadband of Dipole Antenna
Heming Ding, Guangming Li, Menghui Xu
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947238
This paper summarizes and analyzes methods for broadening the bandwidth of dipole antennas. Because the dipole antenna has a narrow operating frequency band, bandwidth-broadening techniques are studied to improve its bandwidth and spectrum efficiency, in line with the research trend toward broadband antennas. The paper examines each broadbanding method and analyzes its advantages and disadvantages. It points out that the core problem in broadband antenna design is the trade-off between bandwidth and the other performance changes that broadbanding introduces, and proposes that optimization algorithms can be used in antenna design to address these problems.
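The quantity being broadened here is usually expressed as fractional bandwidth, which a short sketch makes concrete. The 2.3–2.7 GHz band below is a hypothetical example, not a result from the paper.

```python
def fractional_bandwidth(f_low: float, f_high: float) -> float:
    """Fractional bandwidth = (f_high - f_low) / f_center; antennas above
    roughly 20% are commonly called wideband."""
    f_center = (f_low + f_high) / 2.0
    return (f_high - f_low) / f_center

# A hypothetical dipole whose -10 dB return-loss band spans 2.3-2.7 GHz:
fbw = fractional_bandwidth(2.3e9, 2.7e9)  # 0.16, i.e. 16% fractional bandwidth
```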
ICCT 2019 Conference Committee
Pub Date: 2019-10-01 | DOI: 10.1109/icct46805.2019.8947147
Delay Based Subcarrier and VM Scheduling for Multi-cell Cellular Edge Computing Systems
Yuan Zhang, Mingyang Xie
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947266
This paper studies the resource scheduling problem for multi-cell cellular edge computing systems. First, analytical formulas for the communication delay and computing delay in such systems are derived and expressed as virtual delay queues. Then, a delay-based Lyapunov function is defined and a novel backpressure-based subcarrier and virtual machine (VM) scheduling algorithm is proposed that stabilizes the virtual delay queues. Simulation results show that the total delay of the proposed scheduling algorithm is consistently lower than that of the traditional queue-length-based scheduling algorithm.
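The backpressure idea — weight each scheduling option by backlog times achievable rate and pick the maximum — can be sketched in a few lines. The queue values and rates below are made up, and this max-weight rule is a generic stand-in for the paper's algorithm, which additionally works on the delay-based virtual queues.

```python
def backpressure_schedule(delay_queues, rates):
    """Max-weight selection: assign each subcarrier to the user whose
    delay-queue backlog times achievable rate is largest."""
    assignment = {}
    for subcarrier, user_rates in rates.items():
        assignment[subcarrier] = max(
            user_rates, key=lambda u: delay_queues[u] * user_rates[u])
    return assignment

queues = {"u1": 5.0, "u2": 1.0}            # virtual delay backlogs
rates = {"sc0": {"u1": 1.0, "u2": 2.0},    # achievable rate per user, per subcarrier
         "sc1": {"u1": 0.5, "u2": 4.0}}
plan = backpressure_schedule(queues, rates)
```

Note how the heavily backlogged u1 wins sc0 despite u2's higher rate there, while u2's much higher rate on sc1 outweighs u1's backlog — exactly the pressure-times-rate trade-off that drains queues and keeps them stable.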
Low Computational Complexity Digital Predistortion Based on Independent Parameters Estimation
Xin Lin, Yikang Zhang, Hongmin Li, Gang Li, W. Qiao, Falin Liu
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947219
In wide-band digital predistortion linearizers, the number of coefficients a simplified Volterra polynomial model requires to capture memory effects can increase dramatically, causing high computational complexity, ill-conditioning, or overfitting. This paper proposes a novel digital predistortion (DPD) implementation approach, the covariance matrix based independent parameters estimation (CM-IPE) method, for the direct learning architecture (DLA). Exploiting the stationary and ergodic nature of the input signals, the approach replaces the time-varying transformation matrix with a constant one, and then applies principal component analysis (PCA) to estimate the parameters independently. The method reduces computational complexity, and because the PCA-transformed coefficients are estimated independently, it also prevents ill-conditioning and overfitting. Experimental results demonstrate that the proposed approach achieves linearization performance equivalent to the traditional DLA method at lower computational complexity.
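The decoupling step — eigendecompose a (fixed) covariance matrix and project the correlated regressors onto orthogonal components — can be demonstrated numerically. The synthetic mixing matrix below merely stands in for correlated Volterra basis-function outputs; it is not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated columns, standing in for Volterra model basis-function outputs.
X = rng.standard_normal((500, 4)) @ np.array(
    [[1.0, 0.9, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.2],
     [0.0, 0.0, 0.0, 1.0]])

# PCA step: eigendecompose the covariance matrix and project the regressors
# onto the orthogonal eigenvector basis.
C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
Z = (X - X.mean(axis=0)) @ eigvecs

# In the transformed basis the regressor covariance is diagonal (up to float
# error), so each coefficient can be estimated by an independent least squares.
Cz = np.cov(Z, rowvar=False)
off_diag = np.abs(Cz - np.diag(np.diag(Cz))).max()
```

Independent per-component estimation is what removes the ill-conditioned joint matrix inversion that a correlated regressor basis would otherwise require.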
Deep Reinforcement Learning for Edge Computing and Resource Allocation in 5G Beyond
Yueyue Dai, Du Xu, Kecheng Zhang, Yunlong Lu, Sabita Maharjan, Yan Zhang
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947146
By extending computation capacity to the edge of wireless networks, edge computing can enable computation-intensive and delay-sensitive applications in 5G and beyond via computation offloading. However, in multi-user heterogeneous networks it is challenging to capture complete network information such as wireless channel state, available bandwidth, and computation resources, and the strong coupling among devices' application requirements and radio access modes makes designing an optimal computation offloading scheme even harder. Deep Reinforcement Learning (DRL) is an emerging technique for addressing such problems with limited and less accurate network information. This paper uses DRL to design an optimal computation offloading and resource allocation strategy that minimizes system energy consumption. It first presents a multi-user edge computing framework for heterogeneous networks, then formulates the joint computation offloading and resource allocation problem in DRL form and proposes a new DRL-inspired algorithm to minimize system energy consumption. Numerical results based on a real-world dataset demonstrate the effectiveness of the proposed algorithm compared to two benchmark solutions.
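The offloading-to-minimize-energy loop can be reduced to a toy example: an agent learns from noisy energy feedback whether local execution or offloading costs less. The single-state tabular Q-learning and the made-up energy costs below are a minimal stand-in for the paper's deep RL agent, which handles far richer state (channel, bandwidth, compute resources).

```python
import random

random.seed(1)
ACTIONS = ("local", "offload")
LOCAL_ENERGY, OFFLOAD_ENERGY = 5.0, 2.0  # illustrative per-task energy costs (J)

def energy_cost(action: str) -> float:
    """Toy environment: offloading costs less energy, plus small channel noise."""
    noise = random.uniform(-0.5, 0.5)
    return (OFFLOAD_ENERGY if action == "offload" else LOCAL_ENERGY) + noise

# Single-state tabular Q-learning with epsilon-greedy exploration.
q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.1
for step in range(2000):
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    reward = -energy_cost(a)  # minimizing energy == maximizing negative cost
    q[a] += alpha * (reward - q[a])

best = max(q, key=q.get)
```

The agent never sees the cost constants directly — it learns them from sampled feedback, which mirrors why RL suits settings where complete network information is unavailable.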
Real-Time Traffic Flow Management Based on Visible Light Communication: A Case Study at Roundabout
Maged Fakirah, S. Leng, Qing Wang
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947137
Roundabouts are circular intersections designed to ensure vehicle safety on the roads. Nevertheless, they do not prevent accidents entirely: conflicts at their entries can cause traffic congestion or even serious accidents. This study proposes an intelligent transportation system (ITS) solution that safely coordinates autonomous vehicles crossing roundabouts via Visible Light Communication (VLC), a technique that recent studies have shown to be effective in dynamic vehicular communications. The proposed system focuses on vehicle-to-infrastructure (V2I) communications, with on-site VLC terminals distributed at the entrances of the roundabout to control the traffic flow in real time. Simulation results show that prospective collisions at roundabouts can be effectively avoided with the proposed system.
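One way such an entrance-side V2I controller could avoid conflicts is by spacing entry grants with a safety gap. The scheduling rule and the 2-second gap below are purely hypothetical — the paper's actual coordination protocol is not reproduced here.

```python
def grant_entry(requests, safety_gap=2.0):
    """Toy V2I roundabout controller: sort entry requests by arrival time and
    grant each the earliest slot at least `safety_gap` seconds after the
    previous grant (hypothetical rule, not the paper's protocol)."""
    schedule = {}
    last_grant = float("-inf")
    for vehicle, arrival in sorted(requests.items(), key=lambda kv: kv[1]):
        slot = max(arrival, last_grant + safety_gap)
        schedule[vehicle] = slot
        last_grant = slot
    return schedule

# Arrival times (s) reported by vehicles over the VLC uplink:
slots = grant_entry({"car_a": 0.0, "car_b": 0.5, "car_c": 5.0})
```

Here car_b, arriving 0.5 s behind car_a, is held until the 2-second gap has elapsed, while car_c arrives late enough to enter immediately.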
From Coarse to Fine: A Monocular Depth Estimation Model Based on Left-Right Consistency
Zeyu Lei, Yan Wang, Yufan Xu, Rui Huang
Pub Date: 2019-10-01 | DOI: 10.1109/ICCT46805.2019.8947220
Predicting depth from a single image is an essential problem in computer vision, and deep learning shows great potential in this area. However, most deep Convolutional Neural Networks need to be trained on vast amounts of manually labelled data, which is difficult or scarcely possible to obtain in some environments. This paper proposes an unsupervised method based on left-right consistency with multi-loss fusion, which performs single-image depth estimation despite the absence of ground-truth data. The problem is treated as image reconstruction, training the network with a combination of SSIM and Huber loss. To estimate depth from coarse to fine, a coarse map is produced in the earlier layers and passed through bilinear sampling to the later layers, yielding a fine depth map. The method achieves more accurate results on the KITTI driving dataset.
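The SSIM-plus-Huber fusion can be sketched in NumPy. The single-window SSIM (real pipelines use local windows) and the 0.85 weighting are assumptions — the latter is a common choice in the monocular-depth literature, not a value taken from this paper.

```python
import numpy as np

def huber(err, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(err)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image (simplified for brevity)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def reconstruction_loss(pred, target, alpha=0.85):
    """Weighted fusion of an SSIM term and a Huber term, echoing the paper's
    multi-loss idea; alpha = 0.85 is an assumed weight."""
    ssim_term = (1.0 - global_ssim(pred, target)) / 2.0
    huber_term = huber(pred - target).mean()
    return alpha * ssim_term + (1 - alpha) * huber_term

img = np.linspace(0, 1, 64).reshape(8, 8)
loss_same = reconstruction_loss(img, img)        # perfect reconstruction -> 0
loss_diff = reconstruction_loss(img, img + 0.3)  # shifted image -> positive loss
```

Mixing a structural term (SSIM) with a robust per-pixel term (Huber) lets the loss penalize both pattern distortion and outlier pixel errors without either dominating.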