Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482392
Soliha Rahman, Kanza Naeem, Dileep Raju Seera
This discussion introduces an electronic device with a transparent display screen applied to the specific use case of augmenting real-world images. Such a device can be very useful for an individual who wants to extract reliable information with efficiency and ease. There has been extensive research on transparent devices over the past few years. A transparent device may contain two layers: one for extracting information from the surroundings and another for display. In some cases, a single layer may serve both purposes. The following sections discuss in detail the information extraction and display features exhibited by the device. The idea is to capture documents consisting of images, handwritten text, maps, and mathematical and chemical equations. Using scanning technology or a 360-degree camera, the device captures the documents, detects the real-time text and images they contain, and interprets them. Through image recognition, features such as words, formulae, real numbers, and the shapes of diagrams and maps are extracted collectively from the images. The extracted data is analyzed using natural language processing and machine learning techniques and matched against key results in the cloud, giving deeper insights into the text or diagram. The cloud search typically includes the latest data from APIs such as Wolfram Alpha and Wikipedia, as well as articles and current affairs, so the final answer to any query is detailed and relates to its context and meaning. Our device primarily aims to teach students in a digitized way by providing summarizations and helping them understand the logical flow through enhanced, step-by-step solutions.
Title: "Cloud Based Information Retrieval Device for Augmentation of Real Time Images" | Venue: 2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482370
K. Ashwin, P. Vignesh, M. Rajasekhar Reddy, K. Ravichandran
With the increase in the amount of data transferred online, there is an increasing requirement for security. Steganography is a field of data hiding which focuses on concealing a secret message inside a cover image: an observer intercepting the image would not be able to identify the secret message. This paper proposes a new algorithm which performs a variation of the LSB-substitution method in the spatial domain, where the cover image is split into several sub-images and data embedding is done on each sub-image individually. Two LSB embedding methods which operate using three threshold values are also proposed, and a key matrix identifies which of the two methods is to be applied to each sub-image. The results and analysis show that the proposed technique is more secure, achieves high PSNR values, and is resistant to steganalysis. The proposed algorithm is also found to have a high data embedding capacity.
Title: "A Hybrid Steganography Method using 3LSB Substitution on Sub-Images based on a Key-Matrix"
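The 3LSB substitution named in the title can be sketched generically. The snippet below is a plain fixed-order embedding into the three least-significant bits of each pixel, not the authors' key-matrix/threshold scheme, which selects between two embedding methods per sub-image:

```python
import numpy as np

def embed_3lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit stream in the 3 least-significant bits of each pixel."""
    flat = cover.flatten().astype(np.uint8)
    for i in range(len(bits) // 3):  # three message bits per pixel
        b = bits[3 * i] << 2 | bits[3 * i + 1] << 1 | bits[3 * i + 2]
        flat[i] = (flat[i] & 0b11111000) | b
    return flat.reshape(cover.shape)

def extract_3lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover n_bits message bits from the stego image."""
    flat = stego.flatten()
    out = []
    for i in range(n_bits // 3):
        v = int(flat[i]) & 0b111
        out.extend([(v >> 2) & 1, (v >> 1) & 1, v & 1])
    return np.array(out, dtype=np.uint8)
```

Because only the low three bits change, each pixel moves by at most 7 intensity levels, which is why LSB stego images keep high PSNR.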
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482375
C. Prakash, Y. Vasavada
This paper proposes a compressive signal processing (CSP) receiver for detection and filtering of a narrowband jamming signal within a wideband satellite transponder. The proposed receiver utilizes the Modulated Wideband Converter (MWC) sensing architecture to detect the presence of the jamming signal using sub-Nyquist sampling. An Automatic Frequency Control (AFC) technique is used to estimate and track the center frequency of the interfering signal, which is then removed using an on-board notch filter with programmable center frequency and bandwidth. The MWC technique combined with the adaptive notch filter limits the impact of the jamming signal without affecting the ongoing communication within the filtered sub-band. The proposed scheme provides a computationally efficient implementation of next-generation wideband satellite transponders that are robust to narrowband jamming.
Title: "Narrow Band Jamming Detection & Filtering in Satellite Transponder"
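A programmable notch with tunable center frequency and bandwidth can be sketched as a textbook second-order IIR notch. This is a generic design for illustration, not the authors' on-board filter; `r` (pole radius) is an assumed knob that trades notch width against transient length:

```python
import math

def notch_coefficients(f0, fs, r=0.98):
    """Second-order IIR notch centred at f0 Hz; r in (0,1) sets the bandwidth."""
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2 * math.cos(w0), 1.0]          # zeros on the unit circle at +/- w0
    a = [1.0, -2 * r * math.cos(w0), r * r]    # poles just inside, same angle
    return b, a

def iir_filter(x, b, a):
    """Direct-form difference equation y[n] = sum b*x - sum a*y."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y
```

Retuning the jammer frequency reported by the AFC loop only requires recomputing the five coefficients, which is what makes such a filter attractive for on-board use.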
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482363
Pramod Yelmewad, B. Talawar
The Traveling Salesman Problem (TSP) is an NP-hard combinatorial optimization problem; finding an optimal solution is intractable due to its time complexity. Approximation approaches are therefore of great importance, as they give a good-quality solution in a reasonable time. This paper demonstrates the importance of constructing the initial solution with a construction heuristic rather than initializing it arbitrarily. The proposed GPU-based Parallel Iterative Hill Climbing (PIHC) algorithm solves large TSPLIB instances. We compare the efficiency of the PIHC approach with state-of-the-art approximation-based and GPU-based TSP solvers. PIHC produces a 181× speedup over its sequential counterpart and 251× over the state-of-the-art GPU-based TSP solver. Moreover, PIHC achieves better solution quality than the state-of-the-art GPU-based TSP solvers, with a gap rate in the range of 0.72%–8.06%.
Title: "Near Optimal Solution for Traveling Salesman Problem using GPU"
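The sequential pattern PIHC parallelizes can be sketched as iterative hill climbing with 2-opt moves, seeded by a construction heuristic instead of an arbitrary tour. This is an illustrative host-side sketch, not the authors' GPU kernel or their exact move set:

```python
def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor_tour(dist, start=0):
    """Construction heuristic: greedily visit the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def hill_climb(tour, dist):
    """Iterative 2-opt hill climbing: keep reversing segments while it helps."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                # reverse segment tour[i+1..j]; accept the move if shorter
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour
```

The nested i/j loop over candidate moves is the part that maps naturally onto GPU threads; starting from a nearest-neighbor tour rather than a random one typically means far fewer improvement iterations.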
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482396
Dhawal Mahajan, V. Ruparelia
Reliability studies have conventionally been limited to process qualification and to serving as an input to the design guides that come with PDKs. However, reliability concerns have grown in present state-of-the-art chip designs due to scaling, new materials and devices, more demanding mission profiles, and increasing constraints on time and money. RelXpert is a tool developed by Cadence to simulate MOSFET device degradation caused by various reliability mechanisms such as HCI and NBTI/PBTI. In this study, we have analyzed key building blocks of an RF receiver front end, namely cascode and folded-cascode LNA amplifiers, a cross-coupled LC VCO, and a mixer, using Cadence RelXpert and GLOBALFOUNDRIES 45nm RFSOI PDK models. The simulations were run against a 10-year end-of-life (EOL) criterion, and insights are presented for making the designs more robust. The model equations used are based on an energy-driven model for HCI; in addition, a standard NBTI model with relaxation effects is used for AC stress.
Title: "Reliability Simulation and Analysis of Important RF Circuits Using Cadence Relxpert"
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482394
B. S. S. Govind, Dr. Ramakrishnudu Tene, K. L. Saideep
Websites like Netflix, Amazon, and Yelp host a large number of reviews and ratings. Ratings are usually on a scale of 1–5 points or stars, while reviews are free-form text consisting of a few sentences. Sentiment classification of text has occupied a pivotal role in sentiment analysis research, as it offers important opinion-mining options. Using a person's reviews and ratings, we can recommend new products, movies, or restaurants to them. Recommender systems usually match user patterns by finding similar users, from which recommendations are developed. We address the problem of taking a user's personal sentiments and judgments into account, making the recommendations more targeted and useful. In this paper we provide an example using movies from the MovieLens dataset. Making recommendations using sentiment tags alongside the usual recommendations proves to be a novel and more intuitive approach from the standpoint of the user's preferences and likely enjoyment of a movie. We analyze various methods, including unigrams, bigrams, support vector machines, Bernoulli naive Bayes, and random forests, on popular datasets such as Yelp and MovieLens to find the best method for sentiment generation. We then introduce a novel recommender system combining the Alternating Least Squares (ALS) method with sentiment generation to recommend movies to users, which proved to give a lower RMSE than traditional models on the MovieLens dataset.
Title: "Novel Recommender Systems Using Personalized Sentiment Mining"
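One of the sentiment-generation methods the abstract compares, Bernoulli naive Bayes over unigrams, is small enough to sketch from scratch. This is a generic textbook formulation with Laplace smoothing, not the authors' tuned pipeline, and the toy reviews below are invented for illustration:

```python
import math
from collections import defaultdict

def train_bernoulli_nb(docs, labels):
    """Train Bernoulli naive Bayes on tokenized reviews (unigram presence)."""
    vocab = {w for d in docs for w in d}
    prior, cond = {}, defaultdict(dict)
    for c in set(labels):
        docs_c = [d for d, y in zip(docs, labels) if y == c]
        prior[c] = math.log(len(docs_c) / len(docs))
        for w in vocab:
            df = sum(1 for d in docs_c if w in d)          # document frequency
            cond[c][w] = (df + 1) / (len(docs_c) + 2)      # Laplace smoothing
    return prior, cond, vocab

def predict(doc, prior, cond, vocab):
    """Score every class; absent words contribute log(1 - p) in Bernoulli NB."""
    words = set(doc)
    best, best_lp = None, -math.inf
    for c in prior:
        lp = prior[c]
        for w in vocab:
            p = cond[c][w]
            lp += math.log(p) if w in words else math.log(1 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

The predicted label plays the role of the "sentiment tag" that the proposed system feeds into the ALS-based recommender alongside the numeric rating.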
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482378
S. Natarajan, N. Krishnakumar, M. Pavan, D. Pal, S. Nandy
Parsing a very long genomic string (the human genome is typically 3 billion characters long) abstracts the whole complexity of biocomputing. Approximate String Matching (ASM) is the most suitable computing paradigm for capturing the biological complexity of the genome, integrating various sources of biological information into tractable probabilistic models. Though computationally complex, the Dynamic Programming (DP) methodology proves very efficient for ASM in discriminating substantial similarities amid the severe noise that evolution introduces into genetic data. Although a significant portion of the computation in DP algorithms has been accelerated on multiple platforms, the less complex traceback step is still performed on the host, presenting a significant memory and input/output bottleneck. With billions of such alignments required to analyze the genomic big data from Next Generation Sequencing (NGS) platforms, this bottleneck can severely affect system performance. This paper presents ReneGENE-DP, our implementation of the DP computations on hardware accelerators, both FPGA and GPU, with the novelty of realizing traceback in hardware in parallel with the forward scan during analysis. The fastest FPGA implementation is around 43.63× faster than the fastest GPU implementation of ReneGENE-DP, which in turn is 380.85× faster than the reference design, a GPU-based DP algorithm with traceback on the host.
Title: "ReneGENE-DP: Accelerated Parallel Dynamic Programming for Genome Informatics"
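The two DP phases the abstract distinguishes, the forward score-matrix fill and the traceback, can be shown with a host-side reference. Smith-Waterman local alignment is used here as a representative ASM kernel with illustrative scoring values; the paper's contribution is moving the traceback loop into hardware, which this sketch does not attempt:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Forward scan (fill H) then traceback from the best-scoring cell."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):                       # forward scan
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    i, j = best_pos                                # traceback
    aln_a, aln_b = [], []
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            aln_a.append(a[i-1]); aln_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            aln_a.append(a[i-1]); aln_b.append('-'); i -= 1
        else:
            aln_a.append('-'); aln_b.append(b[j-1]); j -= 1
    return best, ''.join(reversed(aln_a)), ''.join(reversed(aln_b))
```

On a CPU-host design, the full matrix H must be kept (or re-sent) for the traceback, which is exactly the memory and I/O traffic that running traceback in hardware alongside the forward scan avoids.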
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482393
L. S. Praveen, S. N. Nagananda, P. Shankapal
A large amount of study has been carried out in the field of prosthetics to restore the functionality of lost organs. The bionic hand is one such device, helping to replace lost hand functionality for amputees, and considerable effort has been put into developing bionic hands that can mimic the actions performed by a normal hand. This paper provides an insight into the development of real-time control of a bionic hand based on electromyography (EMG) signals collected from a below-elbow amputee, enabling the hand to perform opposition and reposition. It also explains the signal processing techniques required to classify the EMG signals and identify the action needed to control the bionic hand. Root mean square and integrated absolute value are chosen for feature extraction, and k-nearest neighbor and naive Bayesian pattern classification methods are used to classify the EMG signals that control the bionic hand. The developed algorithms achieve accuracies of up to 92–94%.
Title: "Design and Development of Real Time Bionic Hand Control Using EMG Signal"
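The two time-domain features named in the abstract are simple to compute over sliding windows of the raw EMG stream. A minimal sketch follows; the window and step sizes are illustrative assumptions, not the paper's settings:

```python
import math

def rms(window):
    """Root mean square of an EMG window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def iav(window):
    """Integrated absolute value: sum of the rectified samples."""
    return sum(abs(x) for x in window)

def extract_features(signal, win=200, step=100):
    """Slide a window over the EMG stream and emit one (RMS, IAV) pair each."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append((rms(w), iav(w)))
    return feats
```

Each (RMS, IAV) pair then becomes one training or query point for the k-nearest-neighbor or naive Bayesian classifier that selects the hand action.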
Pub Date: 2018-03-01 | DOI: 10.1109/CONECCT.2018.8482391
Varun Kumar, Poonam Singh, S. K. Patra
In this paper, we consider a relay-assisted cooperative network in which the relay station (RS) and base station (BS) have a very large but finite number of antennas. Data detection is performed by the linear zero-forcing (ZF) technique, assuming the BS and RS have imperfect channel state information (CSI). We derive a new analytical expression for the uplink rate under different channel imperfection scenarios. In single-hop signal transmission from mobile users (MUs) to the BS via the RS, the large number of RS and BS antennas plays a vital role relative to the fixed channel error variance. To deal with large MIMO, we use properties of random matrix theory (specifically, the Wishart matrix decomposition). We derive a relation in which the uplink rate for single-hop transmission is a function of the number of RS and BS antennas, the channel error variance of both links, and other parameters. Keeping the other parameters constant, the uplink rate versus the number of RS and BS antennas has been numerically validated from a large-MIMO perspective under suitable simulation parameters.
Title: "Large-Scale Antenna System Performance with Imperfect CSI in Cooperative Networks"
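Zero-forcing detection under imperfect CSI can be illustrated numerically. The sketch below is a generic single-hop uplink with assumed dimensions (64 BS antennas, 4 single-antenna users), QPSK symbols, and an arbitrary CSI error level; it is not the paper's relay model or its rate expression:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 4  # BS antennas, single-antenna users (assumed sizes)

# Rayleigh channel, unit average gain per entry
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

# One QPSK symbol per user
bits = rng.integers(0, 2, size=(2, K))
s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# Received vector at 10 dB SNR
snr_lin = 10 ** (10 / 10)
noise = (rng.normal(size=M) + 1j * rng.normal(size=M)) * np.sqrt(1 / (2 * snr_lin))
y = H @ s + noise

# Imperfect CSI: the receiver only knows a noisy channel estimate
H_hat = H + 0.05 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))

# ZF detection: pseudo-inverse of the estimated channel, then QPSK slicing
s_hat = np.linalg.pinv(H_hat) @ y
detected = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
```

With M much larger than K, the ZF filter suppresses inter-user interference strongly enough that the residual error from the CSI mismatch stays well inside the QPSK decision regions, which is the large-antenna effect the paper's rate analysis quantifies.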