Pub Date: 2017-02-01  DOI: 10.1016/S1005-8885(17)60190-0
Gao Na, Han Xiaoguang, Chen Zengqiang, Zhang Qing
The reachability problem of synchronizing-transition bounded Petri net systems (BPNSs) is investigated in this paper by constructing a mathematical model for the dynamics of BPNSs. Using the semi-tensor product (STP) of matrices, the dynamics of a BPNS, which can be viewed as a combination of several small bounded subnets coupled via synchronizing transitions, are described by an algebraic equation. Once this algebraic form is established, we present a necessary and sufficient condition for reachability between any marking (or state) and the initial marking. We also give a corresponding algorithm to compute all transition paths between the initial marking and any target marking. Finally, an example is given to illustrate the proposed results. The key advantage of our approach is that the set of reachable markings of a BPNS can be expressed in terms of the reachable markings of its subnets, so the full reachability set of the BPNS never has to be generated; this partly avoids the state explosion problem of Petri nets (PNs).
{"title":"Modeling and reachability analysis of synchronizing transitions bounded Petri net systems based upon semi-tensor product of matrices","authors":"Gao Na , Han Xiaoguang , Chen Zengqiang , Zhang Qing","doi":"10.1016/S1005-8885(17)60190-0","DOIUrl":"https://doi.org/10.1016/S1005-8885(17)60190-0","url":null,"abstract":"<div><p>The reachability problem of synchronizing transitions bounded Petri net systems (BPNSs) is investigated in this paper by constructing a mathematical model for dynamics of BPNS. Using the semi-tensor product (STP) of matrices, the dynamics of BPNSs, which can be viewed as a combination of several small bounded subnets via synchronizing transitions, are described by an algebraic equation. When the algebraic form for its dynamics is established, we can present a necessary and sufficient condition for the reachability between any marking (or state) and initial marking. Also, we give a corresponding algorithm to calculate all of the transition paths between initial marking and any target marking. Finally, an example is shown to illustrate proposed results. The key advantage of our approach, in which the set of reachable markings of BPNSs can be expressed by the set of reachable markings of subnets such that the big reachability set of BPNSs do not need generate, is partly avoid the state explosion problem of Petri nets (PNs).</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"24 1","pages":"Pages 77-86"},"PeriodicalIF":0.0,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(17)60190-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72228589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-02-01  DOI: 10.1016/S1005-8885(17)60182-1
Tan Yuxi, Gao Zehua, Guo Siyan, Gao Feng
Sparse code multiple access (SCMA) is a competitive non-orthogonal access scheme for next-generation mobile communications. As a multiuser sharing system, SCMA adopts the message passing algorithm (MPA) as the decoding scheme at the receiver, but its iterative nature leads to high computational complexity. Therefore, a serial message passing algorithm based on variable nodes (VMPA) is proposed in this paper. By making some subtle alterations to the message update of the original MPA, VMPA can greatly reduce the overall computational complexity of the decoding scheme. Furthermore, considering that the serial structure may increase decoding delay, a novel grouping scheme based on the sparse matrix is applied to VMPA. Simulation results verify that the new algorithm, termed grouping VMPA (G-VMPA), achieves a better tradeoff between bit error rate (BER) and computational complexity than MPA.
{"title":"Optimized multiuser detection algorithm for uplink SCMA system","authors":"Tan Yuxi, Gao Zehua, Guo Siyan, Gao Feng","doi":"10.1016/S1005-8885(17)60182-1","DOIUrl":"https://doi.org/10.1016/S1005-8885(17)60182-1","url":null,"abstract":"<div><p>Sparse code multiple access (SCMA) is a competitive nonorthogonal access scheme for the next mobile communication. As a multiuser sharing system, SCMA adopts message passing algorithm (MPA) for decoding scheme in receiver, but its iterative method leads to high computational complexity. Therefore, a serial message passing algorithm based on variable node (VMPA) is proposed in this paper. Making some subtle alterations to message update in original MPA, VMPA can greatly reduce overall computing complexity of decoding scheme. Furthermore, considering that serial structure may increase decoding delay, a novel grouping scheme based sparse matrix is applied to VMPA. Simulation results verify that the new algorithm, termed as grouping VMPA (G-VMPA), can achieve a better tradeoff between bit error rate (BER) and computing complexity than MPA.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"24 1","pages":"Pages 11-17"},"PeriodicalIF":0.0,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(17)60182-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72228585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-02-01  DOI: 10.1016/S1005-8885(17)60189-4
Ren Xingyi, Song Meina, E Haihong, Song Junde
With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important research problem. As one of the most representative social media platforms, Twitter provides various real-life information for POI recommendation in real time. Although POI recommendation has been actively studied, tweet images have not been well utilized for this research problem. State-of-the-art visual features such as convolutional neural network (CNN) features have shown significant performance gains over the traditional bag-of-visual-words in unveiling an image's semantics. Unfortunately, they have not been employed for POI recommendation from social websites. Hence, how to make the most of tweet images to improve the performance of POI recommendation and visualization remains open. In this paper, we thoroughly study the impact of tweet images on POI recommendation for different POI categories using various visual features. A novel topic model called social media Twitter-latent Dirichlet allocation (SM-TwitterLDA), which jointly models five Twitter features (i.e., text, image, location, timestamp and hashtag), is designed to discover POIs from the vast amount of tweets. Moreover, each POI is visualized by representative images selected according to three predefined criteria. Extensive experiments have been conducted on a real-life tweet dataset to verify the effectiveness of our method.
{"title":"Social media mining and visualization for point-of-interest recommendation","authors":"Ren Xingyi, Song Meina, E Haihong, Song Junde","doi":"10.1016/S1005-8885(17)60189-4","DOIUrl":"https://doi.org/10.1016/S1005-8885(17)60189-4","url":null,"abstract":"<div><p>With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important research problem. As one of the most representative social media platforms, Twitter provides various real-life information for POI recommendation in real time. Despite that POI recommendation has been actively studied, tweet images have not been well utilized for this research problem. State-of-the-art visual features like convolutional neural network (CNN) features have shown significant performance gains over the traditional bag-of-visual-words in unveiling the image's semantics. Unfortunately, they have not been employed for POI recommendation from social websites. Hence, how to make the most of tweet images to improve the performance of POI recommendation and visualization remains open. In this paper, we thoroughly study the impact of tweet images on POI recommendation for different POI categories using various visual features. A novel topic model called social media Twitter-latent Dirichlet allocation (SM-TwitterLDA) which jointly models five Twitter features, (i.e., text, image, location, timestamp and hashtag) is designed to discover POIs from the sheer amount of tweets. Moreover, each POI is visualized by representative images selected on three predefined criteria. Extensive experiments have been conducted on a real-life tweet dataset to verify the effectiveness of our method.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"24 1","pages":"Pages 67-76, 86"},"PeriodicalIF":0.0,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(17)60189-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72228588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-02-01  DOI: 10.1016/S1005-8885(17)60191-2
Chen Shangdi, Tian Wenjing, Li Xue
Authentication codes with arbitration are able to resolve disputes between the sender and the receiver. Authentication codes with trusted arbitration are called A2-codes; those with distrusted arbitration are called A3-codes. As an extension of A2-codes, an A3-code is an authentication system that is closer to the real-world environment, so A3-codes have wider application value. In this paper, we construct a class of A3-codes based on polynomials over finite fields, give the parameters of the constructed codes, and calculate the maximum success probabilities of a variety of cheating attacks. In particular, in a special case, the constructed A3-codes are perfect. Compared with a known class of codes, they achieve almost the same security level while requiring less storage; therefore, our codes have more advantages.
{"title":"Construction of authentication codes with distrust arbitration from polynomials over finite fields","authors":"Chen Shangdi, Tian Wenjing, Li Xue","doi":"10.1016/S1005-8885(17)60191-2","DOIUrl":"https://doi.org/10.1016/S1005-8885(17)60191-2","url":null,"abstract":"<div><p>The authentication codes with arbitration are able to solve dispute between the sender and the receiver. The authentication codes with trusted arbitration are called <em>A</em><sup>2</sup>-codes, the authentication codes with distrust arbitration are called <em>A</em><sup>3</sup>-codes. As an expansion of <em>A</em><sup>2</sup>-codes, an <em>A</em><sup>3</sup>-codes is an authentication system which is closer to the reality environment. Therefore, <em>A</em><sup>3</sup>-codes have more extensive application value. In this paper, we construct a class of <em>A</em><sup>3</sup>-codes based on polynomials over finite fields, give the parameters of the constructed codes, and calculate a variety of cheating attacks the maximum probabilities of success. Especially, in a special case, the constructed <em>A</em><sup>3</sup>-codes are perfect. Compared with a known type of codes, they have almost the same security level, however, our codes need less storage requirements. Therefore, our codes have more advantages.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"24 1","pages":"Pages 87-95"},"PeriodicalIF":0.0,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(17)60191-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72228590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60064-X
Fu Xiong, Cang Yeliang, Zhu Lipeng, Hu Bin, Deng Song, Wang Dong
Cloud computing has emerged as a new computing paradigm that can provide elastic services to users around the world. It offers good opportunities to solve large-scale scientific problems with less effort. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms, which mostly focus on single-task allocation, we propose a deadline-based scheduling approach for data-intensive applications in clouds. It does not simply treat the total completion time of an application as the sum of all its subtasks' completion times. Not only is the computation capacity of the virtual machine (VM) considered, but the communication delay and data access latencies are also taken into account. Simulations show that our proposed approach has a decided advantage over two other algorithms.
{"title":"Deadline based scheduling for data-intensive applications in clouds","authors":"Fu Xiong , Cang Yeliang , Zhu Lipeng , Hu Bin , Deng Song , Wang Dong","doi":"10.1016/S1005-8885(16)60064-X","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60064-X","url":null,"abstract":"<div><p>Cloud computing emerges as a new computing pattern that can provide elastic services for any users around the world. It provides good chances to solve large scale scientific problems with fewer efforts. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms which mostly focus on single task allocation, we propose a deadline based scheduling approach for data-intensive applications in clouds. It does not simply consider the total completion time of an application as the sum of all its subtasks' completion time. Not only the computation capacity of virtual machine (VM) is considered, but also the communication delay and data access latencies are taken into account. Simulations show that our proposed approach has a decided advantage over the two other algorithms.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 8-15"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60064-X","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60063-8
Shao Jie, Zhao Zhicheng, Su Fei, Cai Anni
We propose a novel progressive framework to optimize deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex and abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The loss objective for optimization is a weighted sum of the linear loss of the newly added layer and the non-linear loss of the last output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the model structure of deep canonical correlation analysis (DCCA), i.e., we add a third semantic view to regularize text and image pairs and embed the structure into our framework. The experimental results show that the modified model performs better than similar state-of-the-art approaches on the National University of Singapore web image dataset (NUS-WIDE). To validate the generalization ability of our framework, we apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, which indicates that our progressive framework can provide a better and faster solution for deep neural networks.
{"title":"Progressive framework for deep neural networks: from linear to non-linear","authors":"Shao Jie , Zhao Zhicheng , Su Fei , Cai Anni","doi":"10.1016/S1005-8885(16)60063-8","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60063-8","url":null,"abstract":"<div><p>We propose a novel progressive framework to optimize deep neural networks. The idea is to try to combine the stability of linear methods and the ability of learning complex and abstract internal representations of deep learning methods. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The loss objective for optimization is a weighted sum of linear loss of the added new layer and non-linear loss of the last output layer. We modify the model structure of deep canonical correlation analysis (DCCA), i.e., adding a third semantic view to regularize text and image pairs and embedding the structure into our framework, for cross-modal retrieval tasks such as text-to-image search and image-to-text search. The experimental results show the performance of the modified model is better than similar state-of-art approaches on a dataset of National University of Singapore (NUS-WIDE). To validate the generalization ability of our framework, we apply our framework to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, which indicates our progressive framework could provide a better and faster solution for deep neural networks.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60063-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60065-1
Xu Kai, Zhang Qinghua, Xue Yubin, Hu Feng
Rough set theory is an important tool for solving uncertain problems. Attribute reduction, as one of the core issues of rough set theory, has been proven to be an effective method for knowledge acquisition. Most heuristic attribute reduction algorithms keep the positive region of a target set unchanged and ignore boundary region information, so how to acquire knowledge from the boundary region of a target set in a multi-granulation space is an interesting issue. In this paper, a new concept, the fuzziness of an approximation set of a rough set, is first put forward. Then the change rules of fuzziness in changing granularity spaces are analyzed. Finally, a new algorithm for attribute reduction based on the fuzziness of the 0.5-approximation set is presented. Several experimental results show that the attribute reduction obtained by the proposed method has relatively better classification characteristics under various classification algorithms.
{"title":"Attribute reduction based on fuzziness of approximation set in multi-granulation spaces","authors":"Xu Kai , Zhang Qinghua , Xue Yubin , Hu Feng","doi":"10.1016/S1005-8885(16)60065-1","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60065-1","url":null,"abstract":"<div><p>Rough set theory is an important tool to solve uncertain problems. Attribute reduction, as one of the core issues of rough set theory, has been proven to be an effective method for knowledge acquisition. Most of heuristic attribute reduction algorithms usually keep the positive region of a target set unchanged and ignore boundary region information. So, how to acquire knowledge from the boundary region of a target set in a multi-granulation space is an interesting issue. In this paper, a new concept, fuzziness of an approximation set of rough set is put forward firstly. Then the change rules of fuzziness in changing granularity spaces are analyzed. Finally, a new algorithm for attribute reduction based on the fuzziness of 0.5-approximation set is presented. Several experimental results show that the attribute reduction by the proposed method has relative better classification characteristics compared with various classification algorithms.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 16-23"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60065-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60071-7
Zhang Biao, Wen Xiangming, Lu Zhaoming, Lei Tao
In IEEE 802.11 networks, many access points (APs) are required to cover a large area due to the limited coverage range of each AP, and frequent handoffs may occur while a station (STA) is moving in an area covered by several APs. However, the traditional handoff mechanisms employed at STAs introduce a few hundred milliseconds of delay, which is far longer than what can be tolerated by some multimedia streams such as voice over Internet protocol (VoIP), so supporting seamless handoff in IEEE 802.11 networks is a challenging issue. In this paper, we propose a pre-scan based fast handoff scheme for an IEEE 802.11 enterprise wireless local area network (EWLAN) environment. The proposed scheme helps the STA obtain the best alternative AP in advance through the pre-scan process, so that when a handoff is actually triggered, the STA can perform the authentication and reassociation process toward the alternative AP directly. Furthermore, we adopt a Kalman filter to minimize the fluctuation of the received signal strength (RSS), thus reducing unnecessary pre-scans and handoffs. We performed simulations to evaluate performance, and the results show that the proposed scheme can effectively reduce the handoff delay.
{"title":"Pre-scan based fast handoff scheme for enterprise IEEE 802.11 networks","authors":"Zhang Biao, Wen Xiangming, Lu Zhaoming, Lei Tao","doi":"10.1016/S1005-8885(16)60071-7","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60071-7","url":null,"abstract":"<div><p>In IEEE 802.11 networks, many access points (APs) are required to cover a large area due to the limited coverage range of APs, and frequent handoffs may occur while a station (STA) is moving in an area covered by several APs. However, traditional handoff mechanisms employed at STAs introduce a few hundred milliseconds delay, which is far longer than what can be tolerated by some multimedia streams such as voice over Internet protocol (VoIP), it is a challenging issue for supporting seamless handoff service in IEEE 802.11 networks. In this paper, we propose a pre-scan based fast handoff scheme within an IEEE 802.11 enterprise wireless local area network (EWLAN) environment. The proposed scheme can help STA obtain the best alternative AP in advance after the pre-scan process, and when the handoff is actually triggered, STA can perform the authentication and reassociation process toward the alternative AP directly. Furthermore, we adopt Kalman filter to minimize the fluctuation of received signal strength (RSS), thus reducing the unnecessary pre-scan process and handoffs. We performed simulations to evaluate performance, and the simulation results show that the proposed scheme can effectively reduce the handoff delay.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 60-67"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60071-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60069-9
Zhao Jingmei, Liu Yuan'an, Yu Cuiping, Yu Jianguo
This paper proposes a technique combining frequency-domain random demodulation (FRD) with a broadband digital predistorter (DPD). The technique can linearize power amplifiers (PAs) at a low sampling rate in the feedback loop. Based on the theory of compressed sensing (CS), the FRD method preprocesses the original signal using frequency-domain sampling signals of different stages through multiple parallel channels. The FRD method is then applied to the broadband DPD system to restrict the sampling process in the feedback loop. The proposed technique is assessed using a 30 W Class-F wideband PA driven by a 20 MHz orthogonal frequency division multiplexing (OFDM) signal, and a 40 W GaN Doherty PA driven by a 40 MHz 4-carrier long-term evolution (LTE) signal. The simulation and experimental results show that good linearization performance can be achieved at a lower sampling rate, with about 24 dBc improvement in adjacent channel power ratio (ACPR), by applying the proposed FRD-DPD combination technique. Furthermore, the normalized mean square error (NMSE) and error vector magnitude (EVM) are also much improved compared with the conventional technique.
{"title":"Low sampling rate technique based frequency-domain random demodulation for broadband digital predistortion","authors":"Zhao Jingmei , Liu Yuan'an , Yu Cuiping , Yu Jianguo","doi":"10.1016/S1005-8885(16)60069-9","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60069-9","url":null,"abstract":"<div><p>This paper proposes a combination technique of the frequency-domain random demodulation (FRD) and the broadband digital predistorter (DPD). This technique can linearize the power amplifiers (PAs) at a low sampling rate in the feedback loop. Based on the theory of compressed sensing (CS), the FRD method preprocesses the original signal using the frequency domain sampling signal with different stages through multiple parallel channels. Then the FRD method is applied to the broadband DPD system to restrict the sampling process in the feedback loop. The proposed technique is assessed using a 30 W Class-F wideband PA driven by a 20 MHz orthogonal frequency division multiplexing (OFDM) signal, and a 40 W GaN Doherty PA driven by a 40 MHz 4-carrier long-term evolution (LTE) signal. The simulation and experimental results show that good linearization performance can be achieved at a lower sampling rate with about 24 dBc adjacent channel power ratio (ACPR) improvement by applying the proposed combination technique FRD-DPD. Furthermore, the performance of normalized mean square error (NMSE) and error vector magnitude (EVM) also has been much improved compared with the conventional technique.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 47-52"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60069-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-12-01  DOI: 10.1016/S1005-8885(16)60070-5
Li Xiaohui, Lin Yingchao, Meng Meimei, Hei Yongqiang
Due to the high cost and power consumption of radio frequency (RF) chains, it is difficult to implement fully digital beamforming in millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. Fortunately, hybrid beamforming (HBF) has been proposed to overcome these limitations by splitting the beamforming process between the analog and digital domains. In recent works, most HBF schemes improve spectral efficiency based on greedy algorithms. However, the iterative process in greedy algorithms leads to high computational complexity. In this paper, a new method is proposed to achieve a reasonable compromise between complexity and performance. The novel algorithm utilizes the low-complexity Gram-Schmidt method to orthogonalize the candidate vectors. With the orthogonal candidate matrix, the slow greedy search is avoided, and the RF vectors are found simultaneously without any iteration. Additionally, phase extraction is applied to satisfy the element-wise constant-magnitude constraint on the RF matrix. Simulation results demonstrate that the new HBF algorithm substantially reduces complexity while maintaining good performance.
{"title":"Gram-Schmidt based hybrid beamforming for mmWave MIMO systems","authors":"Li Xiaohui , Lin Yingchao , Meng Meimei , Hei Yongqiang","doi":"10.1016/S1005-8885(16)60070-5","DOIUrl":"https://doi.org/10.1016/S1005-8885(16)60070-5","url":null,"abstract":"<div><p>Due to the high cost and power consumption of the radio frequency (RF) chains, it is difficult to implement the full digital beamforming in millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. Fortunately, the hybrid beamforming (HBF) is proposed to overcome these limitations by splitting the beamforming process between the analog and digital domains. In recent works, most HBF schemes improve the spectral efficiency based on greedy algorithms. However, the iterative process in greedy algorithms leads to high computational complexity. In this paper, a new method is proposed to achieve a reasonable compromise between complexity and performance. The novel algorithm utilizes the low-complexity Gram-Schmidt method to orthogonalize the candidate vectors. With the orthogonal candidate matrix, the slow greedy algorithm is avoided. Thus, the RF vectors are found simultaneously without any iteration. Additionally, the phase extraction is applied to satisfy the element-wise constant-magnitude constraint on the RF matrix. Simulation results demonstrate that the new HBF algorithm can make substantial improvements in complexity while maintaining good performance.</p></div>","PeriodicalId":35359,"journal":{"name":"Journal of China Universities of Posts and Telecommunications","volume":"23 6","pages":"Pages 53-59"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1005-8885(16)60070-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72232225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}