Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736490
Karim Badawi, Qiuting Huang
In this paper, we propose a novel framework for the design of equalization techniques that provide efficient, superior performance and exhibit a flexible, scalable performance-complexity trade-off. The 3GPP time-division high-speed packet access (TD-HSPA) wireless communication system is chosen as the application due to its current relevance. The proposed framework utilizes a low-complexity pre-processing stage that implements linear filter-assisted progressive group detection (PGD), a technique we have proposed in previous work. PGD is a near-maximum-likelihood (ML) detection technique that spans and intelligently prunes the set of possible transmit-symbol combinations, and provides a set of the most probable combinations as interim hypotheses for symbol decisions. These interim hypotheses are then used by an equalizer, such as a constrained Viterbi algorithm or an adapted decision-feedback equalizer, as a reduced set of candidates for transmit-symbol combinations; the equalizer stage decides on the best candidate according to its metric. Numerical simulations show that the proposed receiver outperforms traditional receivers found in the literature and provides substantial performance gains that scale with complexity. The proposed receiver architecture approaches the performance of the optimal equalizer with significant complexity savings.
{"title":"A novel framework for scalable equalization","authors":"Karim Badawi, Qiuting Huang","doi":"10.1109/ICENCO.2013.6736490","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736490","url":null,"abstract":"In this paper, we propose a novel framework for the design of equalization techniques that provide an efficient superior performance and exhibit a flexible scalable performance-complexity trade-off. The 3GPP time-duplexing high speed packet access (TD-HSPA) wireless communication system is chosen for application due to its time-relevance. The proposed framework utilizes a low-complexity pre-processing stage that implements linear filter-assisted progressive group detection (PGD), a technique which we have proposed in previous works. PGD is a near-maximum likelihood (ML) detection technique that spans and intelligently prunes the set of possible transmit-symbol combinations, and provides a set of the most probable combinations as interim hypotheses for symbol-decisions. Afterwards, the intermin hypotheses are utilized by an equalizer such as a constrained-Viterbi algorithm or an adapted decision-feedback equalizer, as a reduced set of candidates for transmit-symbol combinations. Hence, the equalizer stage decides on the best candidate according to the equalizer metric. Numerical simulations show that the proposed receiver outperforms traditional receivers found in literature, and provides substantial performance gains that scale with complexity. 
The proposed receiver architecture is able to approach the performance of the optimal equalizer with significant complexity savings.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736481
Noha S. Fareed, Hamdy M. Mousa, Ashraf B. Elsisi
Due to the great amount of information available on the web, question answering systems have become a focus for researchers and users alike. This paper introduces a proposed design for an Arabic question answering system based on ontology-driven query expansion and an Arabic stemmer. A set of factoid CLEF and TREC questions was used to evaluate the system. Improved results were obtained using Arabic WordNet (AWN) for semantic query expansion and the Khoja stemmer for stemming. Two experiments were conducted using AWN: the first using one level of expansion and the second using two levels of expansion. Three measures were computed: accuracy, mean reciprocal rank, and answered questions. With one level of expansion the results were 35.5%, 20.2%, and 65.33% respectively; with two levels of expansion they were 38.77%, 16.2%, and 65.55% respectively.
{"title":"Enhanced semantic arabic Question Answering system based on Khoja stemmer and AWN","authors":"Noha S. Fareed, Hamdy M. Mousa, Ashraf B. Elsisi","doi":"10.1109/ICENCO.2013.6736481","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736481","url":null,"abstract":"Due to the great amount of information available on the web, Question/Answering systems have become a focus for researchers and users as well. This paper introduces a proposed design for an Arabic Question Answering system based on Query Expansion ontology and an Arabic Stemmer. A set of factoid CLEF and TREC questions used to evaluate the system. Improved results obtained using AWN as a semantic Query Expansion and Khoja stemmer as a stemming system. Two experiments conducted using AWN the first using one level of expansion and the second using two level of expansion. Three measures are performed: Accuracy, Mean Reciprocal Rank, and Answered Questions, and the obtained results are 35.5%, 20.2%, and 65.33% respectively when using one level of expansion. But when using two level of expansion we get 38.77%, 16.2%, and 65.55% respectively.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130370430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736474
Ahmed H. Asad, Eid El Amry, A. Hassanien
Accurate segmentation of retinal blood vessels is an important task in computer-aided diagnosis and surgery planning for retinopathy. In this paper, an unsupervised segmentation of retinal vessels based on a water flooding model is presented. The proposed approach imitates the flooding of water over land, where water always flows toward low-lying areas under gravity. The water flooding model supports water feeding, to cover more uncovered land regions, and evaporation, to help get rid of tiny regions or regions that may be only temporarily covered with water. The proposed vessel segmentation approach consists of three main phases. In the first phase, an image enhancement technique is employed to enhance the brightness-corrected retinal image. Then a water flooding-based segmentation approach is applied to segment and extract the retinal vessels. Finally, a post-processing phase improves the segmentation results using structural characteristics of the retinal vascular network. The proposed water flooding approach is tested on the DRIVE database of retinal images. The results demonstrate that the performance of the proposed approach is comparable with state-of-the-art techniques in terms of accuracy, sensitivity, and specificity.
{"title":"Retinal vessels segmentation based on water flooding model","authors":"Ahmed H. Asad, Eid El Amry, A. Hassanien","doi":"10.1109/ICENCO.2013.6736474","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736474","url":null,"abstract":"Accurate segmentation of retinal blood vessels is an important task in computer aided diagnosis and surgery planning of retinopathy. In this paper, an unsupervised image segmentation of retinal vessels based on water flooding model is presented. The proposed approach imitates the nature of water flooding over land, where water always goes toward the low lands by the effect of gravity. The water flooding model supports water feeding to allow for covering more uncovered land regions and also allows for evaporation of water to help getting red of tiny regions or regions that may temporarily covered with water. The proposed vessel segmentation approach consists of three main phases. In the first phase, image image enhancement technique is employed to enhance the brightness corrected retina. Then a water flooding-based segmentation approach is applied to segment and extracts the retina vessel. Finally, a post processing phase is added to improve the results obtained from the segmentation phase using structural characteristics of the retinal vascular network. The proposed water flooding approach is tested on DRIVE databases of retinal images. 
The results demonstrate that the performance of the proposed approach is comparable with state of the art techniques in terms of accuracy, sensitivity and specificity.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128168134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
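The flooding-plus-evaporation idea can be sketched on a grayscale grid. This is a minimal sketch under our own assumptions: a single global water level, 4-connectivity, and an arbitrary puddle-size threshold, none of which are the paper's actual parameters.

```python
from collections import deque

import numpy as np

def flood_segment(intensity, water_level=0.3, min_puddle=3):
    """Water-flooding segmentation sketch: pixels darker than the water
    level are flooded, and connected puddles smaller than min_puddle
    pixels evaporate, mimicking the evaporation of tiny, temporarily
    covered regions described in the abstract."""
    flooded = intensity < water_level
    kept = np.zeros_like(flooded)
    seen = np.zeros_like(flooded)
    h, w = flooded.shape
    for i in range(h):
        for j in range(w):
            if flooded[i, j] and not seen[i, j]:
                # BFS over the 4-connected puddle containing (i, j).
                q, puddle = deque([(i, j)]), []
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    puddle.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and flooded[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(puddle) >= min_puddle:  # small puddles evaporate
                    for y, x in puddle:
                        kept[y, x] = True
    return kept
```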
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736469
M. Wafy, A. M. Madbouly
This paper introduces an automatic method for license plate detection that uses local corner-point features, clustering, and some structural properties of license plates. License plates are corner-rich areas, and this fact, combined with the known plate properties, can be used to locate the plate. The algorithm has four stages: first, the image quality is improved via preprocessing operations; then the Harris corner detector is applied. The detected corner points are gathered into clusters, where the clustering is based on relative distance differences that guarantee the whole license plate falls in only one cluster.
{"title":"Automatic license plate detection based on corner point and cluster","authors":"M. Wafy, A. M. Madbouly","doi":"10.1109/ICENCO.2013.6736469","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736469","url":null,"abstract":"This paper introduces an automatic method for licenses plate detection that using local corners points features, clustering and some properties of license plate. Licenses plates are rich corner point's area that can be used with some properties of license plate to locate licenses plate location. The algorithm has four stages; firstly, the quality of image is improved via preprocessing operations. Consequently, Harries corner point's detector is used. These corner points are gathered in clusters, where clustering of corner points were built on relative difference distance that guarantee the whole license plate will be in only one cluster.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131755109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736484
Alaa E. Abdel-Hakim
We propose a low-cost, low-rank-based framework for the operation of wireless surveillance systems. The proposed framework has two modes of operation: an offline initialization mode, in which the low-rank terms of a few initial frames are recovered using robust principal component analysis (RPCA) and transmitted over the wireless network to the receiver; and a real-time mode, in which the sparse terms of the captured frames are computed using FRPCA and then transmitted to the receiver. Transmitting only the sparse terms greatly reduces the used bandwidth and hence the cost of the transmission process.
{"title":"A sparse representation for efficient bandwidth utilization in wireless surveillance networks","authors":"Alaa E. Abdel-Hakim","doi":"10.1109/ICENCO.2013.6736484","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736484","url":null,"abstract":"We propose a low-cost low-rank-based framework for the operation of wireless surveillance systems. The proposed framework has two modes of operations: an initialization offline mode, in which low-rank terms of few initial frames are recovered using RPCA. Then these recovered low-rank terms are transmitted over the wireless network to the receiver. In the real-time mode of operation, sparse terms of the captured frames are calculated using FRPCA, then transmitted to the receiver. Transmission of only the sparse terms greatly saves the used bandwidth and hence the cost of the transmission process.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129400518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736483
Basem I. Mohammad, S. Shaheen, Sahar A. Mokhtar
Example-tracing tutors have proven to be one of the simplest methods for modelling domain-limited scenarios within intelligent tutoring systems [1]. The intelligence embedded in this type of tutor lies in the feedback authored at creation time and in how that feedback is triggered by learner interaction with the example. Since real-life one-to-one tutoring achieves the ultimate learning gain of two sigma [7], the tutor model embedded in example tracing is very limited compared to real-life tutor behavior. In this paper we present a novel tutor modeling technique that records instructional behavior and scaffolding scenarios on top of example tracing and student responses. The model conforms to the standard teacher and student dialogue moves [8] and allows the information in the model to evolve while it is being used. A set of three face-to-face lectures, each with ten math problems, was logged and analyzed to extract the principal moves of the instructors' strategies and related keywords. A special virtual classroom was developed for simultaneous capture of teacher dialogue moves and additional VCR tools, along with student responses, to construct the model information.
{"title":"Novel online tutor modeling for intelligent tutoring systems","authors":"Basem I. Mohammad, S. Shaheen, Sahar A. Mokhtar","doi":"10.1109/ICENCO.2013.6736483","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736483","url":null,"abstract":"Example tracing tutors has proven to be one of the simplest methods for modelling domain limited scenarios within intelligent tutoring systems [1]. The intelligence embedded within this type of tutors lies in the feedback it possesses during creation and how it is triggered based on learner interaction with the example. Since real-life tutoring in 1-to-1 scenarios proves to have the ultimate learning gain factor of 2 sigma [7]; the tutor model planted in example tracing is very limited compared to real life tutor behavior. In this paper we present a novel tutor modeling technique that records instructional behavior and scaffolding scenarios on top of example tracing and student responses. The model is designed conforming to the standard teacher and student dialogue moves [8] and in a way that allows evolving the information in the model while it is being used. A set of 3 face to face lectures each with 10 problems in math, has been logged and analyzed to extract the principle moves of the instructor strategies and related keywords. 
A special virtual classroom was developed for simultaneous capturing of teacher dialogue moves and additional VCR tools, along with student responses to construct the model information.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114656930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
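A model that logs dialogue moves against example-tracing steps might be represented with a structure like the following. The schema (move types, field names) is entirely hypothetical; the paper does not specify one.

```python
from dataclasses import dataclass, field

@dataclass
class TutorMove:
    """One recorded instructor dialogue move (hypothetical schema):
    the move's type, the keywords extracted from it, and the
    example-tracing step it scaffolds."""
    move_type: str   # e.g. "hint", "prompt", "positive-feedback"
    keywords: list
    step_id: str

@dataclass
class TutorModel:
    """Evolving log of moves; new moves can be recorded while the
    model is in use, matching the abstract's evolving-model idea."""
    moves: list = field(default_factory=list)

    def record(self, move):
        self.moves.append(move)

    def moves_for(self, step_id):
        """Replay the moves recorded for a given solution step."""
        return [m for m in self.moves if m.step_id == step_id]
```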
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736470
Amira Ali Bebars, E. Hemayed
This paper evaluates existing techniques for feature detection in human action recognition. Four different feature detection approaches are investigated using the Motion SIFT descriptor and a standard bag-of-features SVM classifier with a χ² kernel. Specifically, we use two popular feature detectors, Motion SIFT (MOSIFT) and Motion FAST (MOFAST), with and without static interest points. The system was tested on the commonly used KTH and Weizmann datasets. Based on several experiments, we conclude that the MOSIFT detector with static interest points achieves the best classification accuracy on the Weizmann dataset, while MOFAST without static points achieves the best classification accuracy on the KTH dataset.
{"title":"Comparative study for feature detectors in human activity recognition","authors":"Amira Ali Bebars, E. Hemayed","doi":"10.1109/ICENCO.2013.6736470","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736470","url":null,"abstract":"This paper quantifies existing techniques for feature detection in human action recognition. Four different feature detection approaches are investigated using Motion SIFT descriptor, a standard bag-of-features SVM classifier with x2 kernel. Specifically we used two popular feature detectors; Motion SIFT (MOSIFT) and Motion FAST (MOFAST) with and without Statis interest points. The system was tested on commonly used datasets; KTH and Weizmann. Based on several experiments we conclude that using MOSIFT detector with Statis interest point results in the best classification accuracy on Weizmann dataset but MOFAST without Statis points achieve the best classification accuracy on KTH dataset.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130153716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736471
A. Eleliemy, D. Hegazy, W. Elkilani
Object recognition and categorization are two important capabilities in computer vision, and accuracy remains a research challenge for both. High-performance computing (HPC) technologies are commonly used to manage the increasing time and complexity of such computations. In this paper, a new approach that uses 3D spin-images for 3D object categorization is introduced. The main contribution of our approach is that it employs MPI techniques in a unique way to extract spin-images: the technique exploits the independence between the spin-images generated at each point. Timing measurements of our technique show a dramatic decrease of the categorization time, proportional to the number of workers used.
{"title":"MPI parallel implementation of 3D object categorization using spin-images","authors":"A. Eleliemy, D. Hegazy, W. Elkilani","doi":"10.1109/ICENCO.2013.6736471","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736471","url":null,"abstract":"Object recognition and categorization are two important key features of computer vision. Accuracy aspects represent research challenge fo r both object recognition and categorization techniques. High performance computing (HPC) technologies usually manage the increasing time and complexity of computations. In this paper, a new approach that use 3D spin-images for 3D object categorization is introduced. The main contribution of our approach i s that it employs the MPI techniques in a unique way to extract spin-images. The technique proposed utilizes the independence between spin-images generated at each point. Time estimation of our technique ha ve shown dramatic decrease of the categorization time proportional to number of workers used.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"5 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117006915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736489
D. AbdElminaam, Hatem M. Abdul Kader, Mohie M. Hadhoud, S. El-Sayed
The increasing use of the wireless Internet and smartphones has accelerated the need for pervasive computing. Smartphones stimulate the growth of Global Positioning Systems (GPS) and mobile cloud computing. Mobile cloud computing is the cloud infrastructure in which computation and storage are moved away from mobile devices. However, smartphone computing poses challenges because of limited battery capacity, the constraints of wireless networks, and device limitations. It is therefore necessary to offload the computation-intensive parts by carefully partitioning application functions across a cloud. Mobile applications can be executed on the mobile device or offloaded to a cloud clone for execution. In this paper, we propose a new elastic application model that enables transparent use of cloud resources to augment the capability of resource-constrained mobile devices. The key feature of this model is the partitioning of a single application into multiple components whose execution location is transparent: a component can run on the mobile device or migrate to the cloud. Thus, an elastic application can augment the capabilities of a mobile device, including computation power, storage, and network bandwidth, with dynamic execution configuration according to the device's status, including CPU load, memory, and battery level. We demonstrate promising results for the proposed application model using data collected from one of our example elastic applications.
{"title":"Elastic framework for augmenting the performance of mobile applications using cloud computing","authors":"D. AbdElminaam, Hatem M. Abdul Kader, Mohie M. Hadhoud, S. El-Sayed","doi":"10.1109/ICENCO.2013.6736489","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736489","url":null,"abstract":"The increasing use of wireless Internet and smartphone has accelerated the need for widespread computing. Smartphones stimulate growth of Global Position Systems (GPS) and mobile cloud computing. Mobile cloud computing is the cloud infrastructure where the computation and storage are moved away from mobile devices. However, smartphone mobile computing poses challenges because of the limited battery capacity, constraints of wireless networks and the limitations of device. Therefore, it is necessary to offload the computation-intensive part by careful partitioning of application functions across a cloud. Mobile applications can be executed in the mobile device or offloaded to the cloud clone for execution, in this paper; we propose a new elastic application model that enables transparent use of cloud resources to augment the capability of resource constrained mobile devices. The significant features of this model include the partition of a single application into multiple components. Its execution location is transparent it can be run on a mobile device or migrated to the cloud. Thus, an elastic application can augment the capabilities of a mobile device including computation power, storage, and network bandwidth, with the light of dynamic execution configuration according to device's status including CPU load, memory, battery level. 
We demonstrate promising results of the proposed application model using data collected from one of our example elastic applications.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126761155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
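The dynamic execution configuration amounts to a per-component offload decision from device status. An illustrative policy sketch; the thresholds and the transfer-time model below are our assumptions, not the paper's:

```python
def should_offload(cpu_load, battery_level, payload_kb, bandwidth_kbps,
                   local_ms_estimate):
    """Illustrative offload policy: a component migrates to the cloud
    clone when the device is loaded or low on battery, unless shipping
    its state over the current link would cost more time than simply
    running it locally."""
    # Time to transmit the component's state: kb -> kbits -> ms.
    transfer_ms = 8.0 * payload_kb / bandwidth_kbps * 1000.0
    if transfer_ms >= local_ms_estimate:
        return False  # the network is the bottleneck; stay local
    return cpu_load > 0.8 or battery_level < 0.3
```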
Pub Date: 2013-12-01 | DOI: 10.1109/ICENCO.2013.6736487
Hazem Mohammed, T. Khalaf
In this paper, we consider a wireless cooperative communication network consisting of a single source, a single relay, and a single destination, and derive a general upper bound (UB) on the end-to-end bit error rate (BER). The relay node uses the decode-and-forward (DF) cooperation protocol to increase the reliability of the source data at the destination. The derivation takes into account the distances between the system nodes in addition to channel noise and fading effects. The destination uses the maximum a posteriori (MAP) decoder to estimate the data sent from the source. The derived UB is very tight and almost coincides with the exact BER results obtained from simulations, so its closed-form expression can be used for further studies. In this paper, we use the closed-form UB expression to study the effect of the relay position on the BER performance, and a genetic algorithm is used to find the optimal location of the relay node.
{"title":"Optimal positioning of relay node in wireless cooperative communication networks","authors":"Hazem Mohammed, T. Khalaf","doi":"10.1109/ICENCO.2013.6736487","DOIUrl":"https://doi.org/10.1109/ICENCO.2013.6736487","url":null,"abstract":"In this paper, we consider a wireless cooperative communication network that consists of single source, single relay, and single destination and derive a general upper bound (UB) on the end-to-end bit error rate (BER). The relay node uses the decode and forward (DF) cooperation protocol in order to increase the reliability of the source data at the destination. The derivation takes into account the distances between the system nodes in addition to the channel noise and fading effects. The destination uses the maximum a posterior (MAP) decoder to estimate the data sent from the source. The derived UB is very tight and it almost coincides with the exact BER results obtained from simulations. Therefore, the closed form expression of the UB can be used for further studies. In this paper, we use the UB closed form expression to study the effects of the relay position on the BER performance. The genetic algorithm is used to find the optimal location of the relay node.","PeriodicalId":256564,"journal":{"name":"2013 9th International Computer Engineering Conference (ICENCO)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130289968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}