Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417624
A. Asmat, W. Manan, N. Ahmad
Atmospheric aerosol influences many atmospheric processes, including cloud formation, visibility variation and solar radiation transfer. Aerosol is an important indicator of visibility range because it obscures distant objects. Visibility degradation has become an environmental concern in most urban areas, since a low visibility range accompanies deteriorating air quality. In this study, atmospheric aerosol loading was retrieved from imagery using urban and maritime atmospheric models, with visibility ranges from 10 km up to 50 km that were then converted into percentage reflectance. The relationship between aerosol loading and visibility was then established using the urban and maritime models. Penang Island was chosen as the study area because of its strategic location close to the sea and its status as one of the main urbanised cities in Malaysia. The results indicate that visibility is inversely correlated with aerosol loading: the farther the visibility range, the lower the aerosol loading. The results also show that the estimated urban aerosol loading is higher than the maritime aerosol loading. This may be influenced by meteorological factors: higher temperatures in urban areas can lead to a higher rate of smog formation, and lower wind speeds tend to keep pollutants concentrated over urban areas. The urban atmospheric model yields an estimated minimum aerosol loading of 12.1% when the visibility range is about 30 km.
Title: Derivation of aerosol loading from visibility range in Penang Island using atmospheric model
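The inverse visibility-aerosol relationship reported above is consistent with the classical Koschmieder relation, V = -ln(eps)/beta (about 3.912/beta for the usual 2% contrast threshold), which links meteorological visibility V to the atmospheric extinction coefficient beta. A minimal sketch of that standard relation, offered as an illustration only, not the authors' urban/maritime retrieval procedure:

```python
import math

def extinction_from_visibility(v_km: float, contrast_threshold: float = 0.02) -> float:
    """Koschmieder relation: extinction coefficient beta (1/km) from visibility V (km).

    V = -ln(eps) / beta; eps = 0.02 gives the familiar constant 3.912.
    Extinction serves here as a crude proxy for aerosol loading."""
    if v_km <= 0:
        raise ValueError("visibility must be positive")
    return -math.log(contrast_threshold) / v_km

# Extinction falls as visibility grows, matching the inverse
# correlation the paper reports over its 10-50 km range.
for v in (10, 30, 50):
    print(f"{v} km -> beta = {extinction_from_visibility(v):.3f} 1/km")
```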
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417657
A. Hematian, A. Manaf, S. Chuprat, R. Khaleghparast, S. Yazdani
Iris recognition is one of the most reliable recognition methods in biometrics. However, most iris recognition algorithms are implemented as sequential operations running on central processing units (CPUs). In this article we propose a prototype design for iris recognition based on a field-programmable gate array (FPGA), in order to improve iris recognition performance through parallel computing. Time-consuming iris recognition sub-processes are implemented fully in parallel to achieve optimum performance. Unlike commonly used iris recognition methods that first capture a single image of an eye and then start the recognition process, we speed up iris recognition by localizing the pupil and iris boundaries, unwrapping the iris image and extracting its features while image capture is still in progress. Consequently, live images of the human eye can be processed continuously without delay. We conclude that accelerating iris recognition by parallel computing can succeed when implemented on low-cost FPGAs.
Title: Field programmable gate array system for real-time IRIS recognition
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417625
A. Abroudi, F. Farokhi
This paper presents a new method for training intelligent networks such as the Multi-Layer Perceptron (MLP) and Neuro-Fuzzy Networks (NFN) with prototypes selected via the Fast Condensed Nearest Neighbor (FCNN) rule. Applying FCNN yields condensed subsets containing the instances close to the decision boundary. We call these points High-Priority Prototypes (HPPs) and train the network on them. The main objective of this approach is to improve classification performance by boosting the quality of the training set. Experimental results on several standard classification databases illustrate the power of the proposed method: compared with previous approaches that select prototypes randomly, training with HPPs yields better classification accuracy.
Title: Prototype selection for training artificial neural networks based on Fast Condensed Nearest Neighbor rule
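FCNN is a fast algorithm for computing a training-set-consistent condensed subset in the spirit of Hart's condensed nearest neighbor (CNN) rule. A minimal sketch of the underlying CNN idea, not the authors' FCNN implementation (which is considerably faster), showing how the retained prototypes gravitate toward the class boundary:

```python
import math

def condensed_subset(points, labels):
    """Hart's condensed nearest neighbor rule (the idea FCNN accelerates):
    grow a subset until 1-NN over the subset classifies every training
    point correctly. Added points tend to lie near the decision boundary."""
    store = [0]  # seed with the first training point
    changed = True
    while changed:
        changed = False
        for i, p in enumerate(points):
            # 1-NN prediction using only the stored prototypes
            nearest = min(store, key=lambda j: math.dist(p, points[j]))
            if labels[nearest] != labels[i]:
                store.append(i)   # misclassified -> keep as a prototype
                changed = True
    return sorted(store)

# Two well-separated 1-D clusters: only one point per class survives.
pts = [(0.0,), (0.5,), (1.0,), (5.0,), (5.5,), (6.0,)]
lbl = [0, 0, 0, 1, 1, 1]
print(condensed_subset(pts, lbl))  # → [0, 3]
```

Training a classifier on such a subset (the paper's HPPs) keeps the boundary information while discarding redundant interior points.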
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417623
T. Deng
In this paper, we propose a new approach to designing an all-pass (AP) digital filter that approximates a given ideal phase response in the minimax-error sense. Such a design problem is inherently highly non-linear and difficult to solve. This paper formulates the non-linear problem as a second-order cone programming (SOCP) problem, so that the design can be solved with any SOCP solver. Once the SOCP-based design is formulated, the AP filter coefficients are easily found by solving the SOCP problem. Compared with the existing linear programming (LP) design, the SOCP-based method achieves a more accurate fit. A design example illustrates the performance improvement of the SOCP-based approach.
Title: All-pass digital system design using second-order cone programming
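The defining property of the AP filter being designed here is unit magnitude at every frequency, with all design freedom concentrated in the phase. A small numpy sketch of the standard AP frequency response, verifying that property numerically (this illustrates the filter structure only, not the paper's SOCP formulation):

```python
import numpy as np

def allpass_response(a, omega):
    """Frequency response of the N-th order all-pass filter
    H(z) = z^-N * A(1/z) / A(z),  A(z) = a0 + a1*z^-1 + ... + aN*z^-N."""
    a = np.asarray(a, dtype=float)        # [a0, a1, ..., aN], a0 = 1 by convention
    N = len(a) - 1
    k = np.arange(N + 1)
    den = np.exp(-1j * np.outer(omega, k)) @ a              # A(e^{jw})
    num = np.exp(-1j * N * omega) * (np.exp(1j * np.outer(omega, k)) @ a)
    return num / den

w = np.linspace(0.0, np.pi, 256)
H = allpass_response([1.0, -0.5, 0.25], w)    # sample stable coefficients
print(np.max(np.abs(np.abs(H) - 1.0)))        # ~0: magnitude is exactly 1
phase = np.unwrap(np.angle(H))                # the quantity the SOCP design shapes
```

The design task the paper addresses is choosing the coefficients a1..aN so that this phase curve tracks a prescribed target with minimax error.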
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417617
A. Nazir, Y. M. Yassin, C. P. Kit, E. Karuppiah
Cloud computing makes data analytics an attractive proposition for small and medium organisations that need to process large datasets and run fast queries. A remarkable aspect of the cloud is that a non-expert user can provision resources as virtual machines (VMs) of any size within minutes to meet his or her data-processing needs. In this paper, we demonstrate the applicability of running large-scale distributed data analysis in a virtualised environment. To this end, a series of experiments measures and analyses virtual machine scalability on multi/many-core processors using realistic financial workloads. Our experimental results demonstrate that it is crucial to minimise the number of VMs deployed for each application, owing to the high overhead of running parallel tasks in VMs on multicore machines. We also found that our applications perform significantly better when equipped with sufficient memory and a reasonable number of cores.
Title: Evaluation of virtual machine scalability on distributed multi/many-core processors for big data analytics
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417658
J. Chaudhry, U. Qidwai
Advances in microelectromechanical systems (MEMS), battery life, low-powered communication standards, more capable processing units, and hybrid communication have cemented the use of mobile Wireless Body Area Networks (WBANs) in medical informatics. Although MEMS were already used in medical informatics solutions, those solutions were highly localized, rigid, non-cooperative, and in particular non-extendable. The interconnectivity of various network interfaces is the main driving force of the modern technology boom. The morphological features of mobile devices and their ubiquity in daily life create an opportunity to connect medical informatics systems with the mainstream. WBANs promise unobtrusive ambulatory health monitoring over long periods of time and real-time updates of the patient's status to the physician. When integrated with the WBAN, the mobile device plays the role of a localized data diffusion, classification, and broadcast center. In this paper, the criticality of this `single point of failure' is discussed. An unthrottled flow of data to the mobile device can crash the network. A computational model is devised to pre-estimate the device's resource-availability matrix and to manage data flow without creating denial of service. A speed mismatch due to a resource-binding violation on the hand-held device can be reported and capped before the data loss goes unnoticed. The proposed techniques are analyzed and tested on a test bed specifically designed for monitoring remote patients' vitals. The results show a marked improvement over the methods proposed in contemporary systems.
Title: On critical point avoidance among mobile terminals in healthcare monitoring applications: Saving lives through reliable communication software
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417615
M. Firdhous, S. Hassan, O. Ghazali, M. Mahmuddin
Distributed computing has grown rapidly in recent years. In addition to the increase in the size of individual networks, new types of networks have emerged, providing different types of services to clients. While these systems provide an invaluable service, they also face certain practical issues. Security is one of the most important issues that implementers must address in order to provide a satisfactory service. Trust and trust management have been drawing the attention of security researchers as means to identify malicious nodes, separate them from benevolent nodes, and quantify the quality of service provided by the nodes in a system. Several trust computing models have been proposed for distributed systems, based on approaches ranging from fuzzy logic, Bayesian models and social networking to bio-inspired mechanisms. In this paper, the authors take a critical look at the bio-inspired trust models reported in the literature with respect to their principles, advantages and disadvantages.
Title: Bio-inspired trust management in distributed systems — A critical review
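Of the model families this review mentions, the Bayesian one has a particularly compact canonical instance: the beta reputation update, where a node's trust is the posterior mean of a Beta distribution over its observed good and bad interactions. A textbook sketch, not drawn from any specific paper in the review:

```python
def beta_trust(successes: int, failures: int) -> float:
    """Expected trust under a Beta(s+1, f+1) posterior on node behaviour:
    E[p] = (s + 1) / (s + f + 2). With no evidence the prior gives 0.5,
    and each observed interaction nudges the estimate up or down."""
    if successes < 0 or failures < 0:
        raise ValueError("interaction counts must be non-negative")
    return (successes + 1) / (successes + failures + 2)

print(beta_trust(0, 0))   # 0.5  -> unknown node, neutral trust
print(beta_trust(8, 2))   # 0.75 -> mostly good behaviour
print(beta_trust(1, 9))   # low trust -> candidate malicious node
```

Thresholding such a score is one simple way to separate malicious nodes from benevolent ones, the task the surveyed models address with varying machinery.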
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417640
A. R. Otero, G. Tejay, L. D. Otero, A. Ruiz-Torres
For organizations, security of information is paramount, as threats of information security incidents that could impact that information continue to increase. Alarming facts in the literature attest to the current lack of adequate information security practices and prompt a search for additional methods to help organizations protect their sensitive and critical information. Research shows inadequacies in traditional information security control (ISC) assessment methodologies that prevent effective assessment, prioritization and, therefore, implementation of ISCs in organizations. This research-in-progress relates to the development of a tool that can accurately prioritize ISCs in organizations. The tool uses fuzzy set theory to allow a more accurate assessment of imprecise parameters than traditional methodologies. We argue that evaluating information security controls using fuzzy set theory leads to a more detailed and precise assessment and therefore supports an effective selection of information security controls in organizations.
Title: A fuzzy logic-based information security control assessment for organizations
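As an illustration of the kind of fuzzy assessment described, here is a toy priority score for a single control built from triangular membership functions. The rule base, weights and parameter names are entirely hypothetical, not the authors' tool; the point is only how fuzzy membership turns imprecise ratings into a ranking:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership: rises linearly a->b, falls b->c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control_priority(risk: float, cost: float) -> float:
    """Toy fuzzy assessment of one security control (hypothetical rule base):
    weigh the degree to which risk is HIGH against the degree to which
    implementation cost is HIGH. Inputs are imprecise ratings in [0, 1]."""
    high_risk = triangular(risk, 0.4, 1.0, 1.6)   # membership peaks at risk = 1.0
    high_cost = triangular(cost, 0.4, 1.0, 1.6)
    return 0.7 * high_risk + 0.3 * (1.0 - high_cost)

# A high-risk, cheap control outranks a low-risk, expensive one.
print(control_priority(risk=0.9, cost=0.2))
print(control_priority(risk=0.3, cost=0.9))
```

A real assessment would aggregate many such fuzzy criteria per control before defuzzifying into a final priority list.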
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417645
M. Ezani, A. Pathan, S. Haseeb
In network-layer protocols, the most efficient way of sending identical packets to a group of nodes is via multicast. Multicast is a well-established concept that has been implemented across many of today's network access technologies. The IEEE 802.11 wireless standards, however, only loosely honor IP-layer multicast packets, encapsulating them in broadcast frames. This degrades 802.11 bandwidth capacity and further decreases transmission reliability. One enhancement that has been devised is to encapsulate multicast packets in unicast MAC-layer frames. In addition, RTP has been observed to enable a better user experience in multicast streaming. In this paper, we investigate 802.11g multicast performance with and without the enhanced multicast mechanism. We further expand the investigation by running the multimedia multicast stream with and without RTP. Our results lead us to conclude that there are significant advantages to using both the enhanced multicast mechanism and RTP for multicast streaming over the 802.11g wireless standard.
Title: Comparative analysis of IEEE 802.11g multimedia multicast performance using RTP with an implemented test-bed
Pub Date: 2012-10-01 | DOI: 10.1109/ICOS.2012.6417630
H. Ramli, M. Hasan, A. F. Ismail, A. Abdalla, K. Abdullah
Carrier Aggregation (CA) is a key technique for increasing bandwidth in the LTE-Advanced (LTE-A) system. However, the introduction of CA brings a number of challenges to traditional Radio Resource Management (RRM) mechanisms. This paper investigates current CA and packet scheduling algorithms for LTE-A. First, the evolution from the third generation (3G) to the fourth generation (4G) is illustrated in terms of featured performance requirements, and the integration of current and future radio access technologies is also highlighted. Additionally, this paper discusses current technical trends and possible improvements to packet scheduling algorithms with CA for LTE-A.
Title: An investigation of packet scheduling algorithms for Long Term Evolution-Advanced
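Among the packet scheduling algorithms typically surveyed for LTE and LTE-A, proportional fair (PF) is the canonical baseline: each TTI it serves the user with the best instantaneous rate relative to its own average throughput, trading spectral efficiency against fairness. A simplified single-carrier sketch, not taken from this paper and ignoring CA entirely:

```python
def proportional_fair(inst_rates, avg_rates):
    """Pick the user index maximising instantaneous rate / average throughput.
    Users with good channels win often, but starved users' metrics grow."""
    metrics = [r / max(a, 1e-9) for r, a in zip(inst_rates, avg_rates)]
    return max(range(len(metrics)), key=metrics.__getitem__)

def update_average(avg, inst, scheduled, tc=100.0):
    """Exponential moving average of per-user throughput over a window
    of roughly tc TTIs; unscheduled users contribute zero this TTI."""
    return [(1 - 1 / tc) * a + (1 / tc) * (inst[i] if i == scheduled else 0.0)
            for i, a in enumerate(avg)]

# User 1 has the better channel right now, but user 0 has been starved,
# so its PF metric (2.0 / 0.1 = 20) beats user 1's (10.0 / 9.0 ≈ 1.1).
inst = [2.0, 10.0]
avg = [0.1, 9.0]
chosen = proportional_fair(inst, avg)
print(chosen)  # → 0
avg = update_average(avg, inst, chosen)
```

CA-aware schedulers generalise this metric across component carriers, which is where the challenges the paper surveys arise.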