Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924731
A target tracking algorithm for vision based sea cucumber capture
Honglei Wei, Danni Peng, Xufei Zhu, Dongdong Wu
2016 2nd IEEE International Conference on Computer and Communications (ICCC)
To support the capture of sea cucumbers at sea, an object tracking method based on the Mean-Shift algorithm is proposed. First, a defogging algorithm is applied to obtain a restored image, and a color-histogram feature is extracted from it. A local region is then cropped at the corresponding position in the second frame, and the Mean-Shift algorithm searches for the color-histogram feature within that region. Tracking is realized by repeating this search to convergence in each subsequent frame. Compared with the color-histogram Mean-Shift (CHMS) algorithm, the proposed algorithm shows better tracking performance and robustness.
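The core of the method, a Mean-Shift search over a color-histogram back-projection, can be sketched as follows. This is a minimal illustration with a synthetic weight map and a hand-picked window size, not the paper's implementation:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20, eps=1e-3):
    """One Mean-Shift search: shift a rectangular window toward the
    centroid of the weight map (e.g. a color-histogram back-projection)
    until convergence. window = (row, col, height, width)."""
    r, c, h, w = window
    for _ in range(max_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break
        rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        # Offset of the weighted centroid from the window center.
        dr = (rows * patch).sum() / total - (patch.shape[0] - 1) / 2
        dc = (cols * patch).sum() / total - (patch.shape[1] - 1) / 2
        r = int(np.clip(np.rint(r + dr), 0, weights.shape[0] - h))
        c = int(np.clip(np.rint(c + dc), 0, weights.shape[1] - w))
        if abs(dr) < eps and abs(dc) < eps:
            break
    return r, c, h, w

# Toy back-projection: a bright blob centered near (30.5, 40.5).
bp = np.zeros((64, 64))
bp[28:34, 38:44] = 1.0
r, c, h, w = mean_shift(bp, (20, 30, 10, 10))
print(r, c)
```

Starting from a window that only partially overlaps the blob, the iterations walk the window until it is centered on the blob, which is exactly the convergence step the abstract repeats frame by frame.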
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924654
Big data survey in healthcare and a proposal for intelligent data diagnosis framework
M. Babar, Muhammad Jehanzeb, Masitah Ghazali, D. Jawawi, Falak Sher, S. Ghayyur
Healthcare is one of the core areas of the medical domain. Healthcare data exist in various forms, such as respiration data, blood pressure readings, and prescriptions. These data can support decision-making for initiatives aimed at providing better healthcare services. To make this possible, however, the data need to be diagnosed in a professional way. Currently, no system exists that supports phase-based decision-making in big data analysis. In this research, a framework is proposed to diagnose healthcare data for efficient data analysis.
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924673
Distributed geospatial data service based on OpenSearch
Guangyu Liu, Chuanrong Li, W. Tian, Ziyang Li
Earth observation (EO) satellite data include geospatial information (GI), a significant source for understanding the state of the Earth's ecosystem. Because of the large volume and heterogeneity of geospatial data, sharing and integrating them is a major challenge for GI catalog services. The catalog service specification implemented by a working group co-led by the Key Laboratory of Quantitive Remote Sensing Information Technology (QRSIT) of the Chinese Academy of Sciences uses the OpenSearch (OS) protocol to centralize and integrate heterogeneous satellite data from different satellite data centers and makes heterogeneous geospatial metadata retrievable. This paper concentrates on the specification of the OpenSearch Description Document (OSDD), the structure of the catalog service, and the approach to searching EO satellite data.
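OpenSearch discovery works by filling the URL template published in a service's OSDD. A minimal sketch of that substitution step, with a hypothetical template URL and parameter values (the real template and endpoint come from the service's own OSDD, which this sketch does not reproduce):

```python
import re
from urllib.parse import quote

# Hypothetical OSDD URL template; real ones are read from the
# Description Document and may use geo/time extension parameters.
TEMPLATE = ("https://example.org/catalog/search?"
            "q={searchTerms}&bbox={geo:box?}&start={time:start?}&format=atom")

def fill_template(template, params):
    """Substitute OpenSearch parameters into a URL template; optional
    parameters ({name?}) left unset become empty strings, as the
    OpenSearch specification allows."""
    def sub(m):
        name = m.group(1).rstrip("?")
        return quote(str(params.get(name, "")), safe="")
    return re.sub(r"\{([^{}]+)\}", sub, template)

url = fill_template(TEMPLATE, {"searchTerms": "GF-1 scene",
                               "geo:box": "110,20,120,30"})
print(url)
```

A client that can fill such templates can query every federated data center the same way, which is how heterogeneous catalogs are integrated behind one interface.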
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924760
Analysis of the effect of image noise and sampling interval on the resolution enhancement
Nureddin A. F. Aldali, Miao Jun-gang
Satellite images are used in many fields these days, so high resolution is essential. Satellite imaging is affected by various factors in space, such as absorption and scattering, so the resolution of these images is very low. Better perception requires images with clear, well-defined edges that provide a visible line of separation. Enhancing the resolution of such images has always been a major issue in extracting more information from them. GEO satellite imagery is an important tool that can be used to estimate rainfall during thunderstorms and hurricanes for real-time flash flood warnings. However, because a GEO satellite orbits much higher than a typical LEO satellite, its images have lower spatial resolution. To obtain images of sufficient resolution, methods such as resolution enhancement, a technique that produces a higher-resolution image from a lower-resolution one, must be applied to GEO satellite observations. Many approaches can be used to enhance the resolution of a satellite image. This paper compares two techniques for increasing image resolution, one addressing noise and one addressing the sampling interval; both algorithms are demonstrated via simulation, and the results are discussed.
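The trade-off the paper studies, reconstruction quality versus sampling interval and noise, can be illustrated with a toy 1-D experiment; the signal, intervals, and noise level below are arbitrary choices, and linear interpolation stands in for the paper's enhancement algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(interval, noise_std):
    """Sample a smooth signal at the given interval, add Gaussian
    noise, linearly interpolate back to a fine grid, and return the
    RMS reconstruction error."""
    fine = np.linspace(0, 1, 1001)
    truth = np.sin(2 * np.pi * 3 * fine)
    t = np.arange(0, 1 + 1e-9, interval)
    samples = np.sin(2 * np.pi * 3 * t) + rng.normal(0, noise_std, t.size)
    recon = np.interp(fine, t, samples)
    return np.sqrt(np.mean((recon - truth) ** 2))

err_coarse = reconstruction_error(0.1, 0.0)    # coarse sampling, no noise
err_fine = reconstruction_error(0.01, 0.0)     # fine sampling, no noise
err_noisy = reconstruction_error(0.01, 0.2)    # fine sampling, noisy
print(err_coarse, err_fine, err_noisy)
```

Shrinking the sampling interval cuts the error sharply, while added noise raises it again even at fine sampling, which is the qualitative comparison the two simulated algorithms address.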
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924755
A novel visual tracking with occlusion detection via sparse coefficient analysis
Nannan Sun, Sheng Fang, Zhe Li
In recent years, visual tracking has developed greatly in the field of computer vision, but occlusion remains a challenging problem. Although sparse representation has been introduced into visual tracking, most existing sparse-representation-based methods simply treat occlusion as one special scene and do not make full use of the sparse coefficients. In this paper, a novel occlusion detection method via sparse-coefficient analysis is proposed: it judges whether occlusion is occurring and determines the occluded area in the current frame. The detection result is then fed into the tracking process to exclude the influence of the occluded area of the target object. In addition, a novel template update strategy is put forward. Together, these strategies help the tracker reduce the probability of drift. Experimental results on a series of challenging image sequences demonstrate that the proposed method achieves more favorable performance than other state-of-the-art tracking methods.
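The occlusion-detection idea, flagging pixels that the template dictionary cannot reconstruct, can be sketched as follows. Note the simplifications: an L1-style IRLS fit stands in for the paper's full sparse coding, and the 1-D patch, templates, and threshold are invented for illustration:

```python
import numpy as np

def occlusion_mask(candidate, templates, thresh=0.3, iters=20, eps=1e-6):
    """Robustly fit the candidate patch with the target templates
    (IRLS approximating an L1 fit, so occluded pixels barely influence
    the coefficients), then flag pixels whose reconstruction residual
    exceeds `thresh` as occluded."""
    w = np.ones_like(candidate)
    coef = np.zeros(templates.shape[1])
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(templates * w[:, None], candidate * w,
                                   rcond=None)
        r = np.abs(candidate - templates @ coef)
        w = 1.0 / np.sqrt(r + eps)    # reweight: small residual, big weight
    return np.abs(candidate - templates @ coef) > thresh

# Toy 1-D "patch" of length 8, exactly spanned by two templates.
templates = np.stack([np.ones(8), np.linspace(0, 1, 8)], axis=1)
patch = 0.5 * templates[:, 0] + 0.5 * templates[:, 1]
patch[2:4] = 5.0                      # simulated occluder over pixels 2-3
mask = occlusion_mask(patch, templates)
print(mask)
```

The recovered mask marks only the occluded pixels, and a tracker can then score the candidate on the unmasked region, which is the role the detection result plays in the paper's pipeline.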
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7925041
Research and implementation of sign language recognition method based on Kinect
Yuqian Chen, Wenhui Zhang
Sign language is an important form of communicative gesture studied in the human-computer interaction field. Kinect is a 3D somatosensory camera launched by Microsoft that captures color, depth, and skeleton frames, which is helpful for gesture recognition research. This paper proposes a method that uses the HOG and SVM algorithms, together with the Kinect software libraries, to recognize sign language from hand position, hand shape, and hand action features. To realize this method, a dedicated 3D sign language dataset containing 72 words was collected with Kinect, and experiments were conducted to evaluate the method. The experimental results show that using the HOG and SVM algorithms significantly increases the recognition accuracy of the Kinect and is insensitive to background and other factors. The average recognition rate reaches 89.8%, which means the proposed Kinect-based method can recognize sign language effectively and efficiently and is of great significance to the research and promotion of sign language recognition technology.
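The HOG part of such a pipeline reduces a hand image to a histogram of gradient orientations. A minimal single-cell sketch is below (the real descriptor adds cells, blocks, and normalization, and the resulting vectors would then be fed to an SVM, e.g. scikit-learn's SVC; the two striped "hand shapes" are toy stand-ins):

```python
import numpy as np

def hog_feature(img, bins=9):
    """Minimal HOG-style descriptor: one orientation histogram of
    gradient directions weighted by gradient magnitude (full HOG
    computes this per cell and normalizes over blocks)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Two toy "hand shapes": gradients along x vs along y concentrate the
# descriptor mass in different orientation bins.
horiz = np.tile(np.arange(16.0), (16, 1))        # gradient along x
vert = horiz.T                                   # gradient along y
f_h, f_v = hog_feature(horiz), hog_feature(vert)
print(f_h.argmax(), f_v.argmax())
```

Because the descriptor depends only on local gradient statistics, it is largely insensitive to background intensity, which matches the robustness the experiments report.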
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7925037
Robust speaker recognition based on improved GFCC
Xiao-dong Shi, Haiyan Yang, Ping Zhou
To address the drastic degradation in robustness of traditional Mel Frequency Cepstral Coefficients (MFCC) features in speaker recognition systems, an algorithm based on improved Gammatone Frequency Cepstral Coefficients (GFCC) is proposed. The difference between traditional MFCC and GFCC is that GFCC replaces the Mel filter bank with a Gammatone filter bank to improve robustness. On this basis, this paper applies multitaper estimation, MVA (Mean Subtraction, Variance Normalization, and Autoregressive Moving Average filtering), and other techniques to further enhance robustness, and tests the method on the TIMIT speech database. The experimental results show that under different noises and different SNRs, the improved GFCC proposed in this paper has the lowest equal error rate and the best robustness; especially when the SNR is below 10 dB, it has a greater advantage over the other algorithms.
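Of the robustness add-ons, the MVA step is the easiest to sketch: per-utterance mean subtraction, variance normalization, and smoothing along time. The plain moving-average filter below is a simplification of the ARMA filter in standard MVA, and the feature matrix is random toy data rather than real cepstra:

```python
import numpy as np

def mva(features, order=2):
    """MVA-style post-processing of a (frames x coeffs) feature matrix:
    mean subtraction, variance normalization, and a moving-average
    smoothing filter along time (standing in for the ARMA filter)."""
    f = features - features.mean(axis=0)          # mean subtraction
    f = f / (f.std(axis=0) + 1e-9)                # variance normalization
    kernel = np.ones(2 * order + 1) / (2 * order + 1)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, f)

rng = np.random.default_rng(1)
feats = rng.normal(5.0, 3.0, size=(100, 13))      # toy "cepstral" features
out = mva(feats)
print(out.mean().round(3), out.std().round(3))
```

After MVA the features are centered and smoothed, which is what suppresses slowly varying channel and noise effects before scoring.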
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924701
An effective scheme for biometric cryptosystems
Yu Zhou, Bo Zhao, Jin Han, Jun Zheng
Compared with traditional cryptography, biometric cryptosystems provide more convenient and safer protection of keys. In this paper, an effective scheme for biometric cryptosystems based on the intersections of a hyper-plane and a hashed discrete space is proposed. Only genuine feature vectors, which lie on the hyper-plane, find the correct solution. Several measures are provided to ensure the security of the proposed scheme. Experiments show that our method achieves performance comparable to one of the latest fuzzy-vault-based methods while being less complex.
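The hyper-plane idea can be sketched in a few lines: the secret is derived from the offset of a hyper-plane passing through the enrolled feature vector, and only a probe lying on that plane reproduces the hashed secret. The vectors and the fixed helper data below are invented for illustration; a real scheme draws the plane at random per user and must tolerate feature noise:

```python
import hashlib
import numpy as np

def derive_secret(w, feature):
    # The secret is a hash of the hyper-plane offset c = w.x; storing
    # (w, hash) reveals neither the feature nor the key directly.
    return hashlib.sha256(str(int(w @ feature)).encode()).hexdigest()

# Enrollment: a hyper-plane through the genuine feature vector.
genuine = np.array([3, 1, 4, 1, 5])
w = np.array([2, -3, 5, 7, -1])   # fixed here; random per user in practice
stored = derive_secret(w, genuine)

# Verification: only a probe on the stored hyper-plane matches the hash.
probe_ok = derive_secret(w, genuine) == stored
probe_bad = derive_secret(w, np.array([3, 1, 4, 1, 6])) == stored
print(probe_ok, probe_bad)   # True False
```

Hashing the offset plays the role of the "hashed discrete space": the verifier can check membership on the plane without ever storing the plane's offset or the biometric in the clear.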
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7924828
An improved user-based movie recommendation algorithm
Dongping Zhao, Jiapeng Xiu, Zhengqiu Yang, Chen Liu
This paper introduces an improved user-based movie recommendation algorithm that redefines user similarity by merging users' ages and genders into their preference values on items. Root Mean Square Error (RMSE) results indicate that the proposed algorithm is more accurate than the original one.
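One plausible reading of "merging ages and genders into preference values" is to scale rating similarity by a demographic factor; the weighting below (exponential age decay, a flat penalty for differing gender) is our own illustrative choice, not the paper's formula, shown alongside the standard RMSE metric:

```python
import numpy as np

def similarity(u, v, age_u, age_v, gender_u, gender_v, alpha=0.1):
    """Cosine similarity of two rating vectors, scaled by a
    hypothetical demographic factor favoring users of similar age
    and the same gender."""
    cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    demo = np.exp(-alpha * abs(age_u - age_v))
    demo *= 1.0 if gender_u == gender_v else 0.5
    return cos * demo

def rmse(pred, truth):
    """Root Mean Square Error between predicted and true ratings."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2))

u = np.array([5, 3, 0, 4])
v = np.array([4, 3, 0, 5])
sim = similarity(u, v, 25, 27, "F", "F")
err = rmse([3.5, 4.0, 2.0], [4, 4, 1])
print(round(sim, 4), round(err, 4))
```

Predictions are then formed as demographic-weighted averages of neighbors' ratings, and RMSE over held-out ratings compares the weighted similarity against the plain one.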
Pub Date: 2016-10-01  DOI: 10.1109/COMPCOMM.2016.7925227
Optimal resource allocation in wireless relay networks with SWIET
Dong Tang, Jialiang Xuan, Gaofei Huang
Simultaneous wireless information and energy transfer (SWIET) is a promising energy harvesting (EH) technique for powering energy-constrained wireless nodes. In wireless relay networks that employ SWIET on the source-to-relay link, an energy-constrained relay node can harvest energy from the radio-frequency (RF) signals transmitted by the source while assisting information relaying, prolonging its lifetime. This paper investigates resource allocation in a SWIET-based two-hop amplify-and-forward (AF) or decode-and-forward (DF) relay network with a power-splitting EH receiver at the relay, with the goal of maximizing the end-to-end achievable rate. The resource allocation problem is first formulated as a non-convex optimization problem and then transformed into a convex problem by algebraic transformations. Solving the convex problem yields the optimal resource allocation policy for the AF or DF relay network. Simulation results verify the optimality of the proposed policy.
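For the DF case, the structure of the problem can be illustrated with a one-dimensional search over the power-splitting ratio: the end-to-end rate is the minimum of the two hop rates, and the optimum balances them. All channel and efficiency values below are hypothetical, and the paper derives the optimum by convex optimization rather than by grid search:

```python
import numpy as np

def df_rate(rho, P=1.0, h1=1.0, h2=0.8, eta=0.6, sigma2=0.01):
    """End-to-end rate of a two-hop DF relay with a power-splitting EH
    receiver: a fraction rho of the received power is harvested, the
    rest is used for decoding; the relay retransmits with the
    harvested power. (Illustrative link model.)"""
    snr1 = (1 - rho) * P * h1**2 / sigma2           # decoding branch SNR
    p_relay = eta * rho * P * h1**2                  # harvested power
    snr2 = p_relay * h2**2 / sigma2                  # relay-to-destination SNR
    return 0.5 * min(np.log2(1 + snr1), np.log2(1 + snr2))

# Grid search over the power-splitting ratio.
grid = np.linspace(0.001, 0.999, 999)
rates = [df_rate(r) for r in grid]
best = grid[int(np.argmax(rates))]
print(best, max(rates))
```

The maximizer sits where the two hop rates are equal, i.e. (1 - rho) = eta * rho * h2^2, which for these values gives rho approximately 0.72; this balance condition is what the convex reformulation exploits.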