Non cooperative primary users-localization in cognitive radio networks
Pub Date: 2015-09-03  DOI: 10.1109/ReTIS.2015.7232855
S. Chatterjee, P. Banerjee
Detection of spectrum holes in Cognitive Radio Networks (CRNs) is possible using the theory of compressive sensing (CS). At the same time, these unlicensed cognitive secondary users (SUs) must reliably detect the presence of licensed users, the owners of the spectrum holes, to avoid interference. Precise localization of the transmitting nodes is a complicated but important problem for the operation of Cognitive Radio Networks. To ensure effective spectrum sharing, it is desirable to locate primary users, secondary users, and unauthorized users in a non-interactive fashion based on limited measurement data at the receivers. Localization of non-cooperative objects refers to locating an object that does not intentionally participate in the localization process but still interferes with the radio spectrum. In this paper, we propose a hardware scheme for precise localization of non-cooperative primary users based on measurement of the Received Signal Strength Indicator (RSSI) at the receiver. Among the available non-cooperative localization algorithms, we have chosen the Weighted Centroid Localization (WCL) scheme. To improve localization accuracy, a classical propagation model with a correction factor has been introduced, and a modified WCL scheme has been used for precise calculation of the unknown PUs' coordinates. The main challenge in RSSI-based localization is its high sensitivity to changes in the radio frequency (RF) environment; the nature of RSSI measurements limits the achievable estimation accuracy. To improve localization accuracy, a classical RF propagation model, namely log-normal shadowing, is used. When modelling Cognitive Radio (CR), the shadowing effects cannot usually be neglected: if shadowing is ignored, the path loss reduces to a straight line and becomes unrealistic. To make the model more realistic, a correction factor in the form of a standard deviation σ has been added to the classical model. Experiments have been performed to simulate different environmental scenarios while keeping all node positions fixed, and the calculated distances to the unknown PUs have been plotted with and without the correction factor for comparison.
WNPWR: Web navigation prediction framework for webpage recommendation
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232864
D. Sejal, T. Kamalakant, V. Tejaswi, Dinesh Anvekar, K. Venugopal, S. Iyengar, L. Patnaik
A huge amount of user request data is generated in web logs. Predicting users' future requests based on previously visited pages is important for web page recommendation, latency reduction, on-line advertising, etc. These applications trade off prediction accuracy against modelling complexity. We propose a Web Navigation Prediction Framework for Webpage Recommendation (WNPWR), which generates a classifier using sessions as training examples. The sessions are created by calculating the average time spent on visited web pages, rather than by the traditional method that uses 30 minutes as the default timeout. This paper uses standard benchmark datasets to analyze our framework and compare it with the two-tier prediction framework. Simulation results show that our generated classifier framework WNPWR outperforms the two-tier prediction framework in both prediction accuracy and time.
{"title":"WNPWR: Web navigation prediction framework for webpage recommendation","authors":"D. Sejal, T. Kamalakant, V. Tejaswi, Dinesh Anvekar, K. Venugopal, S. Iyengar, L. Patnaik","doi":"10.1109/ReTIS.2015.7232864","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232864","url":null,"abstract":"Huge amount of user request data is generated in web-log. Predicting users' future requests based on previously visited pages is important for web page recommendation, reduction of latency, on-line advertising etc. These applications compromise with prediction accuracy and modelling complexity. we propose a Web Navigation Prediction Framework for webpage Recommendation(WNPWR) which creates and generates a classifier based on sessions as training examples. As sessions are used as training examples, they are created by calculating average time on visiting web pages rather than traditional method which uses 30 minutes as default timeout. This paper uses standard benchmark datasets to analyze and compare our framework with two-tier prediction framework. Simulation results shows that our generated classifier framework WNPWR outperforms two-tier prediction framework in prediction accuracy and time.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125346718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simulation based algorithm for SEIG as an isolated power source for wind mills
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232889
Sumant Kumar Mohapatra, S. Khandai, Sushil Kumar Mahapatra, S. Behera
This paper proposes an algorithm to predict the performance characteristics of a self-excited induction generator (SEIG) and to predict the capacitance requirement of the SEIG as a function of speed, yielding the minimum value of capacitance for a required speed. Induction generators are used widely in non-conventional energy systems because of their advantages over synchronous generators, such as lower cost and size, self-protection against overloads and short circuits, brushless construction and ruggedness.
{"title":"A simulation based algorithm for SEIG as an isolated power source for wind mills","authors":"Sumant Kumar Mohapatra, S. Khandai, Sushil Kumar Mahapatra, S. Behera","doi":"10.1109/ReTIS.2015.7232889","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232889","url":null,"abstract":"This paper proposes an algorithm to predict the performance characteristics of self-excited induction generator (SEIG) and also predict for the capacitance requirement of SEIG as a function of speed which yields minimum value of capacitance for a required speed. as the induction generator are used widely in nonconventional energy systems for its advantages over synchronous generators like cost and size, self protection on overloads, short circuits, brushless construction, ruggedness.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122550872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Template based classification of cardiac Arrhythmia in ECG data
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232901
Gourav Bansal, Pulkit Gera, Deepti R. Bathula
Electrocardiogram (ECG) is a key diagnostic tool to visualize the heart's activity and to study its normal or abnormal functioning. Physicians perform routine diagnosis by visually examining the shapes of the ECG waveform. However, automatic processing and classification of ECG data would be extremely useful in patient monitoring and telemedicine systems. Such real-time applications require techniques that are highly accurate and very efficient. Most of the literature on ECG data relies on timing-based features for heartbeat classification. This paper presents a shape- or template-based method to classify heartbeats as Normal vs. Premature Ventricular Contraction (PVC) beats, which can be implemented on low-computing, low-power, low-cost mobile devices such as smartphones. Data analysis is based on the MIT-BIH Arrhythmia Database containing 48 Holter recordings of different patients. An overall accuracy of 91% was achieved using the proposed method, which is quite significant considering that more than 40,000 heartbeats were analysed. Furthermore, it was observed that only 3 patients with peculiar recordings had significantly low accuracies; excluding these recordings increased the overall accuracy to 97%. The atypical nature of these recordings was closely investigated to elicit ideas for future work.
{"title":"Template based classification of cardiac Arrhythmia in ECG data","authors":"Gourav Bansal, Pulkit Gera, Deepti R. Bathula","doi":"10.1109/ReTIS.2015.7232901","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232901","url":null,"abstract":"Electrocardiogram (ECG) is a key diagnostic tool to visualize the heart's activity and to study its normal or abnormal functioning. Physicians perform routine diagnosis by visually examining the shapes of ECG waveform. However, automatic processing and classification of ECG data would be extremely useful in patient monitoring and telemedicine systems. Such realtime applications require techniques that are highly accurate and very efficient. Most of the literature on ECG data rely on timing based features for heartbeat classification. This paper presents a shape or template based method to classify heartbeats as Normal vs. Premature Ventricular Contraction (PVC) beats which is capable of being implemented on low computing, low power consuming and low cost mobile devices such as smartphones. Data analysis is based on MIT-BIH Arrhythmia Database containing 48 Holter recordings of different patients. An overall accuracy of 91% was achieved using the proposed method, which is quite significant considering more than 40,000 heartbeats were analysed. Furthermore, it was observed that only 3 patients with peculiar recordings had significantly low accuracies. Excluding these recordings increased the overall accuracy to 97%. Atypical nature of these recordings was closely investigated to elicit ideas for future work.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114611264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single hop relay-based CRN: Life-centric approach
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232938
S. Mukherjee, Chinmoy Maji
In the proposed scheme, we aim to obtain an optimal power allocation for cognitive radio combined with a single-hop relay-node strategy. A regenerative relaying protocol is used, as in many previous works. Our main objective is to maximize the overall system capacity with respect to the channel interference coefficients in single-hop Cognitive Radio systems while keeping the interference below a predetermined threshold. In addition, the lifetime of the network is considered as a factor for improving the overall capacity, analogous to the concept of a battery. On this basis, both non-energy-aware and energy-aware schemes are considered. Extensive simulations show that the energy-aware approach tends to be more effective than the non-energy-aware approach.
{"title":"Single hop relay-based CRN: Life-centric approach","authors":"S. Mukherjee, Chinmoy Maji","doi":"10.1109/ReTIS.2015.7232938","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232938","url":null,"abstract":"In our proposed plan, we try to obtain an optimal power allocation scheme of cognitive radio along with singlehop relay node strategy. Regenerative relaying protocol is used in many previous references and same is true for our case also. Our main objective in this paper is to maximize the overall system capacity with respect to the channel interference coefficients in single-hop Cognitive Radio systems and in order to keep the interference under a predetermined threshold limit. Along with this, life-time of the network is also considered as a factor to improve the overall capacity of the network similarly as the concept of battery. On the basis of this, non-energy aware and energy aware schemes are taken into consideration. After evaluating with extensive simulations, the results show that energy aware approach tends to be more effective than the non-energy aware approach.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stability and energy aware reverse AODV routing protocol in MANETS
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232935
Sumant Kumar Mohapatra, B. Swain, Sushil Kumar Mahapatra, S. Behera
Most multipath on-demand routing protocols for mobile ad hoc networks suffer from frequent changes in the network topology and from constrained network resources such as the battery capacity of nodes, the security of data packets and the scalability of the network. This paper proposes a Stability and Energy Aware Reverse Ad hoc On-demand Distance Vector (SEAR-AODV) routing protocol. The method is a modification of the existing Reverse AODV (R-AODV) routing protocol: it optimizes R-AODV by computing a reliability factor (RF) for each node that combines both an energy metric and a route-stability metric. SEAR-AODV uses the path with the highest RF value as the primary path to route data packets, whereas secondary paths are used in descending order of their RF values. It also employs a new make-before-break route maintenance mechanism in order to reduce the control overhead due to route recovery. A comparative study using the NS2 tool shows that the proposed SEAR-AODV protocol enhances the packet delivery fraction and reduces both the latency and the average consumed energy.
{"title":"Stability and energy aware reverse AODV routing protocol in MANETS","authors":"Sumant Kumar Mohapatra, B. Swain, Sushil Kumar Mahapatra, S. Behera","doi":"10.1109/ReTIS.2015.7232935","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232935","url":null,"abstract":"Most of the multipath on demand routing protocols are mobile Adhoc Networks suffer due to frequent changes in the network topology confined network resources such as battery capacity of nodes, security of data packets and scalability of the network. This paper has been proposed a stability and Energy Aware Reverse Adhoc On demand Distance Vector (SEAR-AODV) Routing protocol. This method is a modification of existing Reverse R-AODV routing protocol. It is based on optimization of the existing R-AODV routing protocol by computing the reliability factor (RF) of nodes that includes both energy and rout stability aware metric. SEAR-AODV uses the path with high RF value as the primary path to rout the data packets where as secondary paths are used based on the descending order of their RF values. it uses a new make-before-break route maintenance mechanism. In order to reduce the control overhead due to route recovery. a comparative study by NS2 tool signifies that the proposed protocol SEAR-AODV enhances the packet delivery fraction and reduces both the latency as well as the average consumed energy.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116302347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of whole and deformed plant leaves using statistical shape features and neuro-fuzzy classifier
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232876
Jyotismita Chaki, R. Parekh, S. Bhattacharya
This paper proposes a methodology for recognition of plant species using a set of statistical features obtained from digital leaf images. As the features are sensitive to geometric transformations of the leaf image, a pre-processing step is first performed to make the features invariant to transformations such as translation, rotation and scaling. Images are classified into 32 pre-defined classes using a neuro-fuzzy classifier; comparisons are also made with Neural Network and k-Nearest Neighbor classifiers. Recognizing that leaves are fragile and prone to deformations due to various environmental and biological factors, the basic technique is subsequently extended to address recognition of leaves with small deformations. Experiments using 640 leaf images varying in shape, size, orientation and deformation demonstrate that the technique produces acceptable recognition rates.
{"title":"Recognition of whole and deformed plant leaves using statistical shape features and neuro-fuzzy classifier","authors":"Jyotismita Chaki, R. Parekh, S. Bhattacharya","doi":"10.1109/ReTIS.2015.7232876","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232876","url":null,"abstract":"This paper proposes a methodology for recognition of plant species by using a set of statistical features obtained from digital leaf images. As the features are sensitive to geometric transformations of the leaf image, a pre processing step is initially performed to make the features invariant to transformations like translation, rotation and scaling. Images are classified to 32 pre-defined classes using a Neuro fuzzy classifier. Comparisons are also done with Neural Network and k-Nearest Neighbor classifiers. Recognizing the fact that leaves are fragile and prone to deformations due to various environmental and biological factors, the basic technique is subsequently extended to address recognition of leaves with small deformations. Experimentations using 640 leaf images varying in shape, size, orientations and deformations demonstrate that the technique produces acceptable recognition rates.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131526191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of MIMO based cooperative communication in κ-μ fading channels
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232934
Nagendra Kumar, B. Kumbhani, R. S. Kshetrimayum
Cooperative communication and relay selection are techniques that increase the quality of service (QoS) of wireless communication systems. In this paper, we investigate the symbol error rate (SER) performance of cooperative decode-and-forward (DF) relaying in a multiple-input multiple-output (MIMO) system over independent and identically distributed (i.i.d.) κ-μ fading channels for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) schemes. In DF relaying, a relay node forwards the signal received from the source to the destination only if it is able to decode the source signal correctly. The system under consideration allows every relay to have a different number of receive antennas. We apply an Orthogonal Space-Time Block Coding (OSTBC) scheme at the transmit antennas of the source and relays, and the signals received at the relays and the destination are combined using maximal ratio combining (MRC). Finally, we derive the expression for the SER using a moment generating function (MGF) based approach. The analytical results are validated by their close agreement with Monte Carlo simulations.
{"title":"Performance analysis of MIMO based cooperative communication in κ-μ fading channels","authors":"Nagendra Kumar, B. Kumbhani, R. S. Kshetrimayum","doi":"10.1109/ReTIS.2015.7232934","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232934","url":null,"abstract":"Cooperative communication and relay selection are techniques that increase the quality of service (QoS) in wireless communication system. In this paper, we investigate the symbol error rate (SER) performance of cooperative decode-and-forward (DF) relaying in multiple input multiple output (MIMO) system over independent and identically distributed (i.i.d.) κ-μ fading channel environment for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) schemes. According to DF relaying relay nodes forward the signal received from source to destination only if it is able to decode source signal correctly. The system under consideration is such that every relay may have different number of receive antennas. We apply Orthogonal Space Time Block Coding (OSTBC) scheme at transmit antennas of source and relays. The received signals at the relays and destination are combined using maximal ratio combining (MRC) scheme. Finally, we derive the expression for SER performance using moment generating function (MGF) based approach. Through close agreement of the results form Monte Carlo simulations, the analytical results have been validated.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114590811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CUDA-enabled Hadoop cluster for Sparse Matrix Vector Multiplication
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232872
M. Reza, Aman Sinha, Rajkumar Nag, P. Mohanty
Compute Unified Device Architecture (CUDA) is an architecture and programming model that allows leveraging the highly compute-intensive processing power of Graphics Processing Units (GPUs) to perform general, non-graphical tasks in a massively parallel manner. Hadoop is an open-source software framework with its own file system, the Hadoop Distributed File System (HDFS), and its own programming model, MapReduce, designed to store very large amounts of data and process them quickly in a distributed manner on a cluster of inexpensive hardware. This paper presents a model and implementation of a hybrid Hadoop-CUDA approach to perform Sparse Matrix Vector Multiplication (SpMV) on very large matrices with high performance. Hadoop is used to split the input matrix into smaller sub-matrices, store them on individual data nodes and then invoke the required CUDA kernels on the GPU-equipped cluster nodes; the SpMV itself is performed using CUDA. This implementation has been observed to improve the performance of SpMV over very large matrices, with a speedup of around 1.4 compared to a non-Hadoop, single-GPU CUDA implementation.
{"title":"CUDA-enabled Hadoop cluster for Sparse Matrix Vector Multiplication","authors":"M. Reza, Aman Sinha, Rajkumar Nag, P. Mohanty","doi":"10.1109/ReTIS.2015.7232872","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232872","url":null,"abstract":"Compute Unified Device Architecture (CUDA) is an architecture and programming model that allows leveraging the high compute-intensive processing power of the Graphical Processing Units (GPUs) to perform general, non-graphical tasks in a massively parallel manner. Hadoop is an open-source software framework that has its own file system, the Hadoop Distributed File System (HDFS), and its own programming model, the Map Reduce, in order to accomplish the tasks of storage of very large amount of data and their fast processing in a distributed manner in a cluster of inexpensive hardware. This paper presents a model and implementation of a Hadoop-CUDA Hybrid approach to perform Sparse Matrix Vector Multiplication (SpMV) of very large matrices in a very high performing manner. Hadoop is used for splitting the input matrix into smaller sub-matrices, storing them on individual data nodes and then invoking the required CUDA kernels on the individual GPU-possessing cluster nodes. The original SpMV is done using CUDA. Such an implementation has been seen to improve the performance of the SpMV operation over very large matrices by speedup of around 1.4 in comparison to non-Hadoop, single-GPU CUDA implementation.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114749851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image registration method using Harris Corner and modified Hausdorff distance with near set
Pub Date: 2015-07-09  DOI: 10.1109/ReTIS.2015.7232911
Biswajit Biswas, A. Chakrabarti, K. Dey
Image registration is extensively used in many application domains such as medical imaging, remote sensing and computer vision. The basic purpose of image registration is to obtain the best geometrically and radiometrically aligned image from temporal or multi-modal image sensors. In this study, a novel salient-feature-based image registration scheme has been designed and implemented by establishing a set of rotation- and scale-invariant features, detected with the Harris Corner Detection technique, and matching them with a confirmation build-up method. The result is an invariant feature-vector model containing the control points used for the affine transformation. A bi-feature vector mapping method has been developed to choose effective control points. Once feature selection and correspondence have been established, the transformation constraints are approximated using Near Sets and the modified Hausdorff distance. The proposed algorithm is evaluated under affine transformations (translation, rotation, scale) and corresponding image-intensity variations. Experimental results demonstrate the superiority of the proposed registration algorithm over existing state-of-the-art approaches in terms of accuracy and robustness.
{"title":"Image registration method using Harris Corner and modified Hausdorff distance with near set","authors":"Biswajit Biswas, A. Chakrabarti, K. Dey","doi":"10.1109/ReTIS.2015.7232911","DOIUrl":"https://doi.org/10.1109/ReTIS.2015.7232911","url":null,"abstract":"Image registration is extensively used in many application domains such as medical, remote sensing, computer vision etc. The basic purpose of image registration is to obtain finest geometrical and radio-metrically aligned image from temporal or multi-modal image sensors. In this study, a novel salient feature-based image registration scheme has been designed and implemented by establishing a set of rotation, scale invariant features and corresponding them by a confirmation buildup method using Harris Corner Detection technique. It is an invariant feature vector model containing control points used for affine transformation. A bi-feature vector mapping method has been developed to choose the effective control points. Once feature selection and correspondence is been established, the transformation constraints are approximated using Near Set and modified Hausdorff distance. The proposed algorithm is evaluated under affine transform (translation, rotation, scale) and corresponding image intensity variation. Experimental results demonstrate the superiority of our proposed registration algorithm compared to the existing state-of-art research works in terms of accuracy and robustness.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"401 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114706113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}