Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719966
A. Ali
The iris pattern is the region of the human eye that is generally used to identify a person. The pattern is unique to each person and must be transformed into a representation that gives meaning to its textures. However, this process can be hampered if the given image has poor intensity contrast. This paper suggests an approach to enhance the image in order to obtain rich iris texture. First, the iris region is localized using a common segmentation method and transformed into rectangular form. Then, a moving average is applied to the image to reduce random noise. At this stage, an adjustment is imposed to produce a uniform gray-level distribution. After that, histogram equalization is applied to produce equalized contrast and a more pronounced iris pattern. Finally, the enhanced image is used to produce a one-dimensional real-valued iris signature. A Support Vector Machine (SVM) is used to classify the iris images, and the results are promising.
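The two enhancement steps described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: box-filter smoothing (a moving average) followed by classical histogram equalization, applied to a synthetic low-contrast strip standing in for the unwrapped iris region.

```python
import numpy as np

def moving_average(img, k=3):
    """Smooth a grayscale image with a k x k box filter to reduce random noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def equalize_histogram(img):
    """Classical histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.astype(np.uint8), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map CDF to [0, 1]
    lut = np.round(cdf_norm * 255).astype(np.uint8)
    return lut[img.astype(np.uint8)]

# Synthetic low-contrast "iris strip": intensities squeezed into [100, 140].
rng = np.random.default_rng(0)
strip = rng.integers(100, 141, size=(32, 128)).astype(np.uint8)

smoothed = moving_average(strip)
equalized = equalize_histogram(smoothed)
# Equalization stretches the squeezed range toward the full [0, 255] span.
print(strip.min(), strip.max(), equalized.min(), equalized.max())
```

After equalization the gray levels span nearly the full 8-bit range, which is the "equalized contrast" effect the abstract relies on before extracting the signature.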
{"title":"Simple features generation method for SVM based iris classification","authors":"A. Ali","doi":"10.1109/ICCSCE.2013.6719966","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719966","url":null,"abstract":"Iris pattern is the region on human eye that generally used for identifying person. The pattern is unique for each person and must be transformed into a representation that gives meaning to the textures. However, this process could be hampered if the given image has poor contrast of intensity level. This paper suggests an approach to enhance the image in order to obtain abundant iris texture. First, using common method of segmentation, the iris region is localized and transformed to rectangular form. Then, we apply the moving average on the image to reduce random noise. At this stage, an amendment will be imposed to produce uniform gray levels distribution. After that, histogram equalization method will be applied to produce equalized contrast and more embellish iris pattern. Finally, this enhanced image is used to produce one dimensional real value as iris signature. Support Vector Machines (SVM) is used to classify the iris images and the results are promising.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126014836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6720007
M. N. Mansor, Mohd Nazri Rejab
Most instances of infant pain cause changes in the face. Clinicians use image analysis to characterize pathological faces. Nowadays, infant pain research is increasing dramatically due to high demand from medical teams. This paper presents sparse and naïve Bayes classifiers for the diagnosis of infant pain disorders. Phase congruency images and local binary patterns are proposed as features. The proposed algorithms provide a very promising classification rate.
{"title":"Phase congruency image and sparse classifier for newborn classifying pain state","authors":"M. N. Mansor, Mohd Nazri Rejab","doi":"10.1109/ICCSCE.2013.6720007","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6720007","url":null,"abstract":"Most of infant pain cause changes in the face. Clinicians use image analysis to characterize the pathological faces. Nowadays, infant pain research is increasing dramatically due to high demand from all medical team. This paper presents a sparse and naïve Bayes classifier for the diagnosis of infant pain disorders. Phase congruency image and local binary pattern are proposed. The proposed algorithms provide very promising classification rate.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"30 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120972836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719927
Eman T. Hassan, Hazem M. Abbas, H. K. Mohamed
We present a new inpainting algorithm based on image segmentation and segment classification. First, we employ the mean shift algorithm to segment the input image. Then, we divide the original inpainting problem into one of two problems: large-segment inpainting or non-uniform-segment inpainting. We do this because the human eye is more discerning of errors in the structure and texture propagation of large, uniform regions with few details, and less discerning of errors in non-uniform regions with more details. We propose a novel algorithm for each of the two problems, large-segment inpainting and non-uniform-segment inpainting, according to the main features of each. The experimental results show the advantage of our technique, which produces output images with better perceived visual quality.
{"title":"Image inpainting based on image segmentation and segment classification","authors":"Eman T. Hassan, Hazem M. Abbas, H. K. Mohamed","doi":"10.1109/ICCSCE.2013.6719927","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719927","url":null,"abstract":"We present a new inpainting algorithm that is based on image segmentation and segment classification. First, we employ the mean shift algorithm to segment the input image. Then, we divide the original inpainting problem to be either one of the two problems: Large Segment Inpainting problem or Non-uniform Segments inpainting problem. The reason we do that is that human eye is more discerning to the errors in the structure and texture propagation of a large-uniform regions with less details while it is less discerning to errors in non-uniform regions with more details. We propose a novel algorithm for each one of the problems- Large Segment Inpainting and Non-uniform Segments inpainting- according to the main features of each one. The experimental results show the advantage of our technique which produces output images with better perceived visual quality.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126427183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719982
A. Alkahtani, F. H. Nordin, Z. Sharrif
The electric field emission of electrical appliances has become an important problem, especially when testing for safety and compliance with electromagnetic compatibility (EMC) regulations. To confirm the safety and compliance of an electrical appliance, it is important to measure the levels of the electric and magnetic fields emitted by the appliance and compare them with the exposure limits set by international standards. Moreover, modeling these emitted fields can aid in understanding their characteristics and ease the investigation of how different systems react to such emissions. However, a good model depends mainly on the accuracy and robustness of the measurement methodology. Hence, the aim of this paper is to present a measurement methodology and a frequency-domain model of the emitted electric field of vacuum cleaners using system identification tools. The proposed model is data-driven: the recorded signal is used to construct the model using polynomial model estimation methods. The measurement setup, related work, and the model equation are presented accordingly.
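The generic idea behind polynomial model estimation can be illustrated with a least-squares fit to a measured spectrum. Everything below is synthetic and hypothetical — the frequency axis, the assumed spectrum shape, and the noise level are stand-ins, not the authors' model or data:

```python
import numpy as np

# Hypothetical frequency-domain measurement of an emitted field.
rng = np.random.default_rng(42)
f = np.linspace(0.01, 1.0, 200)                 # normalized frequency axis
E_true = 2.0 - 1.5 * f + 0.8 * f**2             # assumed underlying spectrum shape
E_meas = E_true + rng.normal(0, 0.02, f.size)   # additive measurement noise

# Least-squares polynomial estimation: recover the model coefficients
# from the noisy measurement alone.
coeffs = np.polyfit(f, E_meas, deg=2)           # highest-degree coefficient first
E_fit = np.polyval(coeffs, f)

rmse = np.sqrt(np.mean((E_fit - E_true) ** 2))
print(np.round(coeffs, 2), round(rmse, 4))
```

With a reasonably clean measurement the fitted coefficients land close to the underlying ones, which is why, as the abstract notes, the model quality hinges on the measurement methodology.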
{"title":"Measurement and estimation of electric field emission of a vacuum cleaner","authors":"A. Alkahtani, F. H. Nordin, Z. Sharrif","doi":"10.1109/ICCSCE.2013.6719982","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719982","url":null,"abstract":"Electric field emission of electrical appliances has become an important problem, especially when testing for safety and compliance with regulations of electromagnetic compatibility (EMC). To confirm the safety and compliance of an electrical appliance, it is important to measure the levels of the emitted electric and magnetic fields from this appliance and compare them to the exposure limit values set by the international standards. Moreover, modeling these emitted fields can aid understanding their characteristics and ease investigating how different systems react to such emission. However, a good model depends mainly on the accuracy and robustness of the measurement methodology. Hence, the aim of this paper is to present a measurement methodology and a frequency domain model for the emitted electric field of vacuum cleaners using system identification tools. The proposed model is a data-driven model where the recorded signal is used to construct the model using polynomial model estimation methods. Measurement setup, related work and the model equation are presented accordingly.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131850248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719936
Andri Mirzal
Latent semantic indexing (LSI) is an indexing method that improves the performance of an information retrieval system by indexing terms that appear in related documents and weakening the influence of terms that appear in unrelated documents. LSI is usually performed using the truncated singular value decomposition (SVD). The main difficulty with this technique is that its retrieval performance depends strongly on the choice of an appropriate decomposition rank. In this paper, observing that the truncated SVD makes related documents more connected, we devise a matrix completion algorithm that can mimic this capability. The proposed algorithm is nonparametric, has a convergence guarantee, and produces a unique solution for each input. It is thus more practical and easier to use than the truncated SVD. Experimental results on four standard datasets in LSI research show that the retrieval performance of the proposed algorithm is comparable to the best results offered by the truncated SVD over a range of decomposition ranks.
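The truncated-SVD behaviour the algorithm mimics — related documents becoming more connected after rank reduction — can be seen directly on a toy term-document matrix (illustrative only, not the paper's datasets):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# Docs 0-1 share one vocabulary, docs 2-3 another; term 2 bridges docs 1 and 2.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0],   # bridge term
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

def truncated_svd(A, k):
    """Rank-k approximation A_k = U_k S_k V_k^T, as used by classical LSI."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A2 = truncated_svd(A, 2)

# Document 0 never contained the bridge term (entry is 0), yet after rank
# reduction it acquires positive weight on it: related documents get connected.
print(A[2, 0], round(A2[2, 0], 3))
```

The proposed matrix completion algorithm aims to reproduce exactly this smoothing effect without having to pick the rank k, which is the parameter the truncated SVD is sensitive to.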
{"title":"Similarity-based matrix completion algorithm for latent semantic indexing","authors":"Andri Mirzal","doi":"10.1109/ICCSCE.2013.6719936","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719936","url":null,"abstract":"Latent semantic indexing (LSI) is an indexing method to improve performance of an information retrieval system by indexing terms that appear in related documents and weakening influences of terms that appear in unrelated documents. LSI usually is conducted by using the truncated singular value decomposition (SVD). The main difficulty in using this technique is its retrieval performance depends strongly on the choosing of an appropriate decomposition rank. In this paper, by observing the fact that the truncated SVD makes the related documents more connected, we devise a matrix completion algorithm that can mimick this capability. The proposed algorithm is nonparametric, has convergence guarantee, and produces a unique solution for each input. Thus it is more practical and easier to use than the truncated SVD. Experimental results using four standard datasets in LSI research show that the retrieval performances of the proposed algorithm are comparable to the best results offered by the truncated SVD over some decomposition ranks.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"353 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125636261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719942
M. Omar, M. Hassan, A. C. Soh, M. Kadir
This paper presents a technique for predicting lightning severity on a daily basis using meteorological data. The data used are supplied by the Global Lightning Network (GLN) from WSI Corporation. The input of the system consists of seven meteorological parameters provided by the Malaysian Meteorological Service for a minimal fee: minimum humidity, maximum humidity, minimum temperature, maximum temperature, rainfall, week, and month. The output of the system determines the severity of the lightning prediction in three classes: Class 1, Hazardous; Class 2, Warning; and Class 3, Low Risk. Two training algorithms were tested in this study, namely Gradient Descent with Momentum Backpropagation (traingdm) and Scaled Conjugate Gradient Backpropagation (trainscg). The traingdm algorithm achieved a better accuracy of 70% than trainscg; in contrast, trainscg trained approximately four times faster than traingdm.
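The traingdm optimizer named above is plain gradient descent with a momentum term. A minimal sketch of that update rule on a toy quadratic (not the paper's network or data) shows the characteristic behaviour — the velocity term accumulates past gradients and carries the iterate to the minimum:

```python
import numpy as np

def gd_momentum(grad, w0, lr=0.1, mom=0.9, steps=300):
    """Gradient descent with momentum, the update rule behind traingdm:
    v <- mom * v - lr * grad(w);  w <- w + v."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mom * v - lr * grad(w)
        w = w + v
    return w

# Quadratic bowl f(w) = 0.5 * w^T H w with its minimum at the origin.
H = np.diag([1.0, 10.0])
grad = lambda w: H @ w

w_star = gd_momentum(grad, [5.0, 5.0])
print(np.round(w_star, 4))  # should be near the origin after 300 steps
```

trainscg replaces this fixed-step rule with a scaled conjugate gradient search, which is why it can converge in far fewer (and cheaper) iterations, matching the roughly fourfold speedup the abstract reports.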
{"title":"Lightning severity classification utilizing the meteorological parameters: A neural network approach","authors":"M. Omar, M. Hassan, A. C. Soh, M. Kadir","doi":"10.1109/ICCSCE.2013.6719942","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719942","url":null,"abstract":"This paper presents a technique of predicting lightning severity on daily basis by using meteorological data. The data used is supplied by Global Lightning Network (GLN) from WSI Corporation. The input of the system consists of seven meteorology parameters which had been provided by Malaysia Meteorology Service with minimal fees. Input parameters are the Minimum Humidity, Maximum Humidity, Minimum Temperature, Maximum Temperature, Rainfall, Week and Month. The output of the system determines the severity of lightning predictions in three stages; Class1: Hazardous; Class2: Warning; and Class3: Low Risk. Two training algorithms that have been tested in this study namely the Gradient Descent with Momentum Backpropagation (traingdm) and the Scaled Conjugated Gradient Backpropagation (trainscg). The traingdm has indicated better accuracy of 70% compared to the trainscg whilst in contrast; trainscg has demonstrated approximately 4 times faster training compare to traingdm.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123819092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719968
M. N. Mansor, Mohd Nazri Rejab
In recent years, non-invasive methods based on facial image analysis have proved to be excellent and reliable tools for pain recognition. This paper proposes a new feature vector based on the Local Binary Pattern (LBP) for pain detection. Different sampling points and radius weightings are proposed to distinguish the performance of the proposed features. In this work, the Infant COPE database is used with added illumination. Multi-Scale Retinex (MSR) is applied to remove shadows. Two supervised classifiers, a Gaussian classifier and a Nearest Mean Classifier, are employed to test the proposed features. The experimental results reveal that the proposed features give a very promising classification accuracy of 90% on the Infant COPE database.
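The basic member of the LBP family the paper builds on — radius 1, 8 neighbours — can be written directly with array slicing. The bit ordering below is one common convention, not necessarily the authors' variant:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern (radius 1) for a grayscale image.
    Each interior pixel is coded by thresholding its 8 neighbours against it."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

# Tiny example: the top row and right neighbour exceed the center (5),
# so only the first four bits are set: 0b00001111 = 15.
img = np.array([[9, 9, 9],
                [1, 5, 9],
                [1, 1, 1]], dtype=np.uint8)
code = lbp_3x3(img)[0, 0]
print(code)
```

Histograms of these codes over image patches form the feature vector fed to the classifiers; varying the radius and the number of sampling points, as the abstract proposes, generalizes this 3x3 case.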
{"title":"A computational model of the infant pain impressions with Gaussian and Nearest Mean Classifier","authors":"M. N. Mansor, Mohd Nazri Rejab","doi":"10.1109/ICCSCE.2013.6719968","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719968","url":null,"abstract":"In the last recent years, non-invasive methods through image analysis of facial have been proved to be excellent and reliable tool to diagnose of pain recognition. This paper proposes a new feature vector based Local Binary Pattern (LBP) for the pain detection. Different sampling point and radius weighted are proposed to distinguishing performance of the proposed features. In this work, Infant COPE database is used with illumination added. Multi Scale Retinex (MSR) is applied to remove the shadow. Two different supervised classifiers such as Gaussian and Nearest Mean Classifier are employed for testing the proposed features. The experimental results uncover that the proposed features give very promising classification accuracy of 90% for Infant COPE database.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123910959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6720024
Z. Umar, A. Ahmad, W. A. A. W. Akib
On Wednesday, September 30, 2009, at 17:16 local time, a 7.6 Mw earthquake struck the west coast of Sumatera, Indonesia. The earthquake caused severe landslides, the death of 1,195 people, and significant damage to approximately 140,000 houses and 4,000 other buildings. Of the 1,195 people who died, 375 had been buried by landslides. The source of the September 30, 2009 earthquake was intraplate, not interplate (megathrust). Interplate boundaries are the source of large earthquakes (>8.5) that recur every 150 to 200 years. In recent times, large destructive earthquakes occurred in 2004, 2005, 2007, and 2010 along the Sumatra trough, with moment magnitudes of Mw 9.1, 8.6, 8.5, and 7.7, respectively. The magnitude of the 2010 Mentawai earthquake was smaller than expected; hence, the strain has not been fully released. This means there is still a high possibility of another gigantic earthquake occurring in this area in the near future. This paper presents the results of laboratory tests on soil taken from the locations of the landslides, using the SLOPE/W software from Geoslope to obtain the amount of rainfall that caused the landslides. This is done to reduce landslide casualties caused by large earthquakes followed by heavy rainfall. A simple early warning system based on the rainfall threshold that causes landslides can be operated by the community themselves.
{"title":"Early warning system for landslide hazard caused by earthquake and rainfall in West Sumatera Province, Indonesia","authors":"Z. Umar, A. Ahmad, W. A. A. W. Akib","doi":"10.1109/ICCSCE.2013.6720024","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6720024","url":null,"abstract":"On Wednesday September 30, 2009, at 17:16 pm, a 7.6 Mw earthquake strucked the west coast of Sumatera, Indonesia. The earthquake caused severe landslide, death of 1,195 people and significant damage to approximately 140,000 houses and 4,000 other buildings. Out of the 1,195 people who died, 375 people had been buried in the landslide. The source of September 30, 2009 earthquake is in the intraplate not in the interplate (megathrusts). Interplate is the source of large earthquakes (>8.5) which occur repeatedly every 150 and 200 years. In recent times, large destructive earthquakes occurred in 2004, 2005, 2007 and 2010 along the Sumatra trough, for which the moment magnitudes were Mw 9.1, 8.6, 8.5, and 7.7, respectively. The magnitude of 2010 Mentawai earthquake was smaller than expected, hence, the strain has not been fully released. This means that there is still a high possibility of another gigantic earthquake occurring in the near future to this area. This paper presents the results of soil data in the laboratory using soil taken from the location of landslides and using the software SLOPE/W from Geoslope to obtain the amount of rainfall that caused the landslides. This is done to reduce the casualty of landslides due to the large earthquake that was followed by heavy rainfall. A simple early warning system based on rainfall threshold that causes landslides can be done by the community themselves.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116902033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6720032
M. Rahman, M. Ahmad
We have designed and fabricated an electrode microfluidics system for microbio-object analysis. Two parallel-plate electrodes were fabricated using a soft-lithography technique and integrated with a PDMS microfluidic channel. Gold (Au) was deposited to fabricate the electrodes. The voltage response during charging and discharging of the electrodes was observed using an oscilloscope. For a constant dc voltage of 5 V, we obtained a time constant of 3.6 ms for the electrodes; on the other hand, they require 850 ms to discharge completely without an external load. We measured the capacitance of the electrodes as 0.37 pF in air (room environment), whereas in distilled water the electrode capacitance is 0.77 pF, because of the high dielectric constant of distilled water (80.1). We also measured the electrode capacitance with microbio objects as the medium: yeast cells (5 pF) and live bacteria cells (30 pF). The results showed that bacteria have a higher electrical capacitance than yeast.
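The reported charging behaviour follows the first-order RC relation v(t) = V(1 − e^(−t/τ)). Plugging in the abstract's values (V = 5 V, τ = 3.6 ms) shows what the oscilloscope trace encodes: at t = τ the electrode voltage sits at about 63.2% of the supply, which is how a time constant is read off a measured curve.

```python
import numpy as np

# First-order RC charging model: v(t) = V * (1 - exp(-t / tau)).
# V = 5 V and tau = 3.6 ms are the values reported in the abstract.
V, tau = 5.0, 3.6e-3
t = np.linspace(0, 5 * tau, 1000)
v = V * (1 - np.exp(-t / tau))

# At t = tau the capacitor has reached ~63.2 % of the supply voltage.
v_at_tau = V * (1 - np.exp(-1))
print(round(v_at_tau, 3))  # 3.161
```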
{"title":"Electrical characterizations of electrodes microfluidics system for microbio object analysis","authors":"M. Rahman, M. Ahmad","doi":"10.1109/ICCSCE.2013.6720032","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6720032","url":null,"abstract":"We have designed and fabricated electrodes microfluidics system for microbio object analysis. Two parallel plate electrodes were fabricated using soft lithography technique integrated with PDMS microfluidics channel. Gold (Au) material was decomposed to fabricate the electrodes. Voltage response through charging and discharging of the electrodes were observed using oscilloscope. For a constant dc voltage of 5 V we have obtained the time constant of the electrodes as 3.6 ms. On the other hand, it requires 850 ms to discharge completely without an external load. We have measured the capacitance of the electrodes as 0.37 pF in air (room environment) medium, on the other hand in distilled water medium electrodes capacitance is 0.77 pF. This is because of the high dielectric constant of distilled water (80.1). We have also measured electrodes capacitance by changing the medium to microbio objects such as; yeast cells (5 pF) and live bacteria cells (30 pF). Results showed that, bacteria have a higher electrical capacitance rather than yeast.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"212 0 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116202271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/ICCSCE.2013.6719937
Aqeel Raza Syed, K. Yau
In spectrum leasing, licensed users (primary users, PUs) and unlicensed users (secondary users, SUs) interact with each other to reach a mutual agreement on channel access in order to increase their respective network performance. The PUs must select suitable SUs as relay nodes that are expected to uphold the leasing agreement. Generally speaking, an SU's transmission power must satisfy the minimum and maximum power thresholds imposed by the PUs. The minimum power threshold ensures that SUs can achieve a satisfactory level of successful transmission while helping to relay the PUs' packets; the maximum power threshold ensures that the SUs' interference with the PUs remains acceptable. In this paper, the PUs announce their minimum and maximum power-threshold requirements to SUs for the selection of relay nodes, while the SUs keep their respective transmission power within the thresholds defined by the PUs in order to increase their respective network performance (e.g., throughput and end-to-end delay). These functionalities are modeled and solved using Reinforcement Learning (RL), which determines suitable SUs as relay nodes on the basis of the aforementioned power-threshold criterion. Our preliminary simulation results show that the number of SUs that qualify as relay nodes increases with the maximum power level imposed by the PU, which is expected to enhance the PUs' and SUs' performance (e.g., throughput). They also show that the convergence rate of the SUs' power level increases with the number of simulation iterations.
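A stateless RL sketch conveys the power-selection idea: the SU learns, by trial and reward, a transmission power that maximizes throughput while staying inside the PU's announced window. The power levels, thresholds, and reward function below are hypothetical stand-ins, not the paper's model:

```python
import random

# Hypothetical discrete power levels (watts) an SU can choose from.
LEVELS = [0.5, 1.0, 1.5, 2.0, 2.5]
P_MIN, P_MAX = 1.0, 2.0          # thresholds announced by the PU

def reward(p):
    """Toy reward: higher power means more throughput, but a power outside
    the PU's [P_MIN, P_MAX] window forfeits the leasing agreement."""
    return p if P_MIN <= p <= P_MAX else -1.0

def q_learn(episodes=5000, alpha=0.1, eps=0.1, seed=1):
    """Single-state (bandit-style) Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    q = [0.0] * len(LEVELS)
    for _ in range(episodes):
        if random.random() < eps:
            a = random.randrange(len(LEVELS))          # explore
        else:
            a = max(range(len(LEVELS)), key=q.__getitem__)  # exploit
        q[a] += alpha * (reward(LEVELS[a]) - q[a])     # single-state update
    return q

q = q_learn()
best = LEVELS[max(range(len(LEVELS)), key=q.__getitem__)]
print(best)  # the learned power respects P_MIN <= best <= P_MAX
```

The learned policy settles on the highest power still inside the window, mirroring the paper's observation that raising the PU's maximum power level lets more SUs qualify as relays.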
{"title":"Relay node selection for spectrum leasing in cognitive radio networks","authors":"Aqeel Raza Syed, K. Yau","doi":"10.1109/ICCSCE.2013.6719937","DOIUrl":"https://doi.org/10.1109/ICCSCE.2013.6719937","url":null,"abstract":"In spectrum leasing, licensed users (or primary users, PUs) and unlicensed users (or secondary users, SUs) interact with each other to achieve mutual agreement on channel access in order to increase their respective network performance. The PUs must select suitable SUs as relay nodes which are expected to uphold the leasing agreement. General speaking, the SU's transmission power must fulfill the minimum and maximum power threshold levels imposed by PUs. The minimum power thresholds ensure that a satisfactory level of successful transmission can be achieved by SUs while helping to relay PUs' packets. On the other hand, the maximum power threshold ensures that SUs' interference to PUs is acceptable to PUs. In this paper, the PUs announce their requirements on minimum and maximum power threshold levels to SUs for the selection of relay nodes; while the SUs maintain their respective transmission power within the threshold level defined by PUs in order to increase their respective network performance (e.g. throughput and end-to-end delay performances). The functionalities are modeled and solved using Reinforcement Learning (RL), which determines the suitable SUs as relay nodes on the basis of the aforementioned power threshold criterion. Our preliminary simulation results show that the number of SUs that qualify as relay nodes increases with the maximum power level imposed by PU, and thus it is expected to provide PUs' and SUs' performance enhancement (e.g. throughput). It also shows that, the convergence rate of SUs' power level increases with the number of simulation iterations.","PeriodicalId":319285,"journal":{"name":"2013 IEEE International Conference on Control System, Computing and Engineering","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121341622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}