Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353185
M. S. Khan, M. S. Awan, E. Leitgeb, F. Nadeem, I. Hussain
Free Space Optics (FSO) is a broadband access technology finding numerous applications in next generation networks (NGN). The biggest disadvantage of this technology is its dependence on local weather conditions in the free-space atmospheric channel, which calls for a thorough investigation of the different attenuating factors and their influence on FSO transmissions. In this paper we analyze the behavior of dense continental fog on FSO links by investigating different probability distribution functions for measured optical attenuations. Several probability distributions are evaluated to find the one that best describes the characteristics of FSO links under dense continental fog conditions. Further, the non-parametric Kolmogorov-Smirnov test is used to assess the goodness of fit of the selected probability distributions. The test results suggest that the Wakeby probability distribution function reasonably describes the behavior of optical attenuation data measured in a dense continental fog environment.
Title: Selecting a distribution function for optical attenuation in dense continental fog conditions (2009 International Conference on Emerging Technologies)
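The distribution-selection procedure described in the abstract above — fit candidate distributions, then check goodness of fit with a Kolmogorov-Smirnov test — can be sketched in Python with SciPy. The Wakeby distribution is not available in scipy.stats, so a lognormal distribution and synthetic attenuation values stand in for the paper's candidates and measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for measured fog attenuation values (dB);
# the paper's actual measurement data is not available here.
atten_db = rng.lognormal(mean=2.0, sigma=0.4, size=500)

# Fit a candidate distribution to the data...
shape, loc, scale = stats.lognorm.fit(atten_db)

# ...then run the one-sample Kolmogorov-Smirnov test against the fitted CDF.
# A small KS statistic (and large p-value) indicates a good fit.
ks_stat, p_value = stats.kstest(atten_db, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```

In practice each candidate distribution would be fitted and tested this way, and the one with the smallest KS statistic retained.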
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353213
Asif Zafar
Pakistan is a densely populated country with a major rural-urban disparity in healthcare delivery. The use of Information Technology to improve the efficiency of Pakistan's existing healthcare services, in the form of Telemedicine, has proved its value beyond doubt. Its advantages include better utilization of healthcare resources, early intervention, provision of expert advice at remote sites, and distance education. The Telemedicine/E-Health training center at Holy Family Hospital has played a pioneering role in implementing national Telemedicine initiatives and in strengthening the field's human resources through structured training programs. Another initiative at the center is the establishment of a Virtual Trainer Lab (VTL) to improve the Minimal Access Surgery (MAS) skills of postgraduate trainees and young surgeons. MAS is an emerging surgical technique that reduces post-operative complications and yields better patient outcomes. Shifting from traditional open surgery to MAS requires training and orientation in this new technology, and practicing these skills in operating theaters is not safe for patients. The VTL is equipped with advanced training tools such as box trainers, virtual reality simulators, and full procedural simulators to train young surgeons in these skills without harming patients. The lab offers a path to fast and safe skill acquisition in a controlled environment.
Title: New innovations in healthcare delivery and laparoscopic surgery in Pakistan
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353143
H. A. Mustafa
This paper presents a method for detecting the parameters of a Pseudo Random Noise Generator (PRNG), namely its primitive polynomial and initial seed. Different test vectors of scrambled sequences are generated by changing the parameters of a Linear Feedback Shift Register (LFSR). The test vectors are then passed one by one through the detection algorithm, which calculates the weight difference for a range of primitive trinomials. Subsequently, the weight difference is calculated between the alphabets over GF(2) of decoded periods of the LFSR. The maximum weight difference thus obtained for a specific polynomial indicates the most probable primitive trinomial. The initial seed of the LFSR is then found by fixing the trinomial and calculating the weight difference for a range of seeds; the correct initial seed is indicated by the maximum weight difference.
Title: Detection of Pseudo Random Noise Generator's parameters for link analysis
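The seed-recovery step described above can be illustrated with a toy Fibonacci LFSR: fix the feedback trinomial, generate an observed sequence from a secret seed, and score every candidate seed by its bit agreement with the observation (a stand-in for the paper's weight-difference measure; the register size and tap positions here are illustrative, not taken from the paper).

```python
def lfsr_stream(seed, taps, nbits, length):
    """Fibonacci LFSR: output the low bit, feed back the XOR of tapped bits."""
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

NBITS, TAPS = 7, (0, 6)            # x^7 + x^6 + 1, a primitive trinomial
secret_seed = 0b1011001
observed = lfsr_stream(secret_seed, TAPS, NBITS, 64)

def score(seed):
    """Agreement between a candidate seed's output and the observed bits."""
    candidate = lfsr_stream(seed, TAPS, NBITS, len(observed))
    return sum(a == b for a, b in zip(candidate, observed))

# The true seed reproduces the sequence exactly, so it maximizes the score.
best_seed = max(range(1, 2 ** NBITS), key=score)
print(f"recovered seed = {best_seed:#09b}")
```

The same scoring loop, run over candidate polynomials instead of seeds, mirrors the trinomial-detection stage.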
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353167
Abdul Rehman Abbasi, N. Afzulpurkar, T. Uno
An emotionally-personalized computer that could empathize with a student learning through a tutorial or a software program would be an excellent application of affective computing. Towards the development of this potentially beneficial technology, we describe two related evaluations of a student mental state prediction model that not only predicts a student's mental state from his/her visually observable behavior but also detects his/her personality. In the first set of evaluations, we model the assumed cause-effect relationships between a student's mental states and body gestures using a two-layered dynamic Bayesian network (DBN). We use data obtained earlier from four students in a highly contextualized interaction, i.e., students attending a classroom lecture, and train and test the DBN on data from each individual student. A maximum a posteriori classifier based on the DBN model gives an average accuracy of 87.6% over the four individual student cases. In the second set of evaluations, we extend the model to a three-layered DBN by including a personality attribute in the network, and train it on data from all four students. At test time, the network successfully detects the personality of each test student. The results demonstrate the feasibility of our approach.
Title: Towards emotionally-personalized computing: Dynamic prediction of student mental states from self-manipulatory body movements
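The paper's layered DBN is beyond a short sketch, but the maximum a posteriori decision it relies on reduces, for a single time slice, to picking the state that maximizes prior times likelihood. The states, gestures, and probability tables below are invented for illustration; the actual tables would be learned from the annotated lecture data.

```python
# Hypothetical priors P(state) and observation likelihoods P(gesture | state).
priors = {"attentive": 0.5, "confused": 0.3, "bored": 0.2}
likelihood = {
    "attentive": {"nodding": 0.6, "head_scratching": 0.1, "yawning": 0.3},
    "confused":  {"nodding": 0.1, "head_scratching": 0.7, "yawning": 0.2},
    "bored":     {"nodding": 0.1, "head_scratching": 0.2, "yawning": 0.7},
}

def map_state(gesture):
    """MAP estimate: argmax over states of P(state) * P(gesture | state)."""
    return max(priors, key=lambda s: priors[s] * likelihood[s][gesture])

print(map_state("head_scratching"))
```

A DBN additionally carries the posterior forward in time, so evidence from earlier gestures shapes the prior at the next slice.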
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353135
Shafqat Ali, O. Maqbool
Software systems require gradual changes to survive in the environment where they are deployed. Software changes for several reasons, e.g., error fixing, enhancement of functionality, and performance improvement. This behaviour of gradual change is known as software evolution, and its study is an active area of research. Researchers have monitored software evolution in different ways, and the method of monitoring is a key point, because different methods may reflect different evolutionary pictures of the software. In this paper, we study the changes that occur in software systems during evolution. Our experimental study focuses on three types of changes, i.e., addition, deletion, and modification, which supports a detailed analysis of software evolution. Furthermore, on the basis of these change types, we investigate Lehman's 5th law (Conservation of Familiarity) for small-scale open source software systems. Our experiments show that different measures reflect different evolutionary pictures of the software systems.
Title: Monitoring software evolution using multiple types of changes
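The three change types counted above — additions, deletions, and modifications — can be recovered from two versions of a file with Python's difflib; the two sample "versions" below are invented, and difflib's `replace` opcode stands in for what the paper would count as a modification.

```python
import difflib

# Two invented versions of a source file, as line lists.
old = [
    "# legacy helper, no longer used",
    "def area(r):",
    "    return 3.14 * r * r",
    "def name():",
]
new = [
    "def area(r):",
    "    return math.pi * r * r",
    "def name():",
    "    return 'circle'",
]

added = deleted = modified = 0
matcher = difflib.SequenceMatcher(None, old, new)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op == "insert":
        added += j2 - j1          # lines present only in the new version
    elif op == "delete":
        deleted += i2 - i1        # lines present only in the old version
    elif op == "replace":
        modified += max(i2 - i1, j2 - j1)   # lines rewritten in place

print(f"added={added} deleted={deleted} modified={modified}")
```

Summed over every file and release, counts like these are the raw material for evolution measures such as growth and change rate.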
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353134
Ahmad Shahzad, Sajjad Raza, M. Azam, K. Bilal, Inam-ul-Haq, S. Shamail
The increased diversity and complexity of software systems has driven the need for test automation: the use of software for the automatic execution of tests, comparison of results with expected outcomes, setting up of test preconditions, and test reporting. Model-based testing is a test automation approach that generates and maintains more useful and flexible tests from explicit descriptions of the application. Graph theory techniques have been an important part of model-based testing, and several have been proposed in the literature. We use the well-known maximum network flow technique to generate a minimum number of test cases covering all features of a system, and a web-based case study to describe the working of the proposed optimum path finding algorithm. We identify certain constraints that a web navigation graph must satisfy in order to completely reflect the system. The resulting web navigation graph is given as input to our implementation of the algorithm, which returns the optimal test cases for the web application. We then graphically show the optimality and feature coverage of the algorithm with respect to the case study.
Title: Automated optimum test case generation using web navigation graphs
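A minimal Edmonds-Karp maximum-flow routine over a unit-capacity navigation graph shows the core idea: give every link capacity 1, and the max-flow value from the entry page to the target page equals the number of link-disjoint navigation paths, each of which can serve as a test case. The four-page graph is invented for illustration and is not the paper's case study.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left
            return total
        # Find the bottleneck along the path, then push that much flow.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Pages: 0 = home, 1 = login, 2 = search, 3 = checkout; each link has capacity 1.
cap = [
    [0, 1, 1, 0],   # home -> login, home -> search
    [0, 0, 0, 1],   # login -> checkout
    [0, 0, 0, 1],   # search -> checkout
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))
```

Here the flow value is 2: the two link-disjoint paths home-login-checkout and home-search-checkout together cover every link, so two test cases suffice.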
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353144
S. Khayal, A. Khan, N. Bibi, T. Ashraf
A password is the key to secret authentication data and is the most widely used security mechanism, which makes it open to attacks such as phishing. Phishing is a form of internet fraud in which a phisher steals an online consumer's personal identity data and financial account credentials. In this paper, we analyze password hashing as a technique for computing secure passwords. In this mechanism, a hash value is obtained by applying a cryptographic hash function to a string consisting of the submitted password and, usually, another value known as a salt. The salt value consists of current parameters of the system and prevents attackers from building a list of hash values for common passwords. MD5 and SHA-1 are frequently used cryptographic hash functions. We implemented both algorithms and found that SHA-1 is more secure but slower in execution, as SHA-1 uses more rounds than MD5 when calculating hashes.
Title: Analysis of password login phishing based protocols for security improvements
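The salted-hash construction analyzed above can be sketched with Python's hashlib. Note that although the paper compares MD5 and SHA-1, both are considered unsuitable for password storage today; a real system should use a deliberately slow key-derivation function such as PBKDF2 (hashlib.pbkdf2_hmac).

```python
import hashlib
import os

def hash_password(password, salt=None, algo="sha1"):
    """Hash salt || password with the chosen algorithm; return (salt, hex digest)."""
    if salt is None:
        salt = os.urandom(16)          # fresh random salt per password
    digest = hashlib.new(algo, salt + password.encode("utf-8")).hexdigest()
    return salt, digest

salt, sha1_hash = hash_password("hunter2", algo="sha1")
_, md5_hash = hash_password("hunter2", salt=salt, algo="md5")

# SHA-1 yields a 160-bit (40 hex char) digest, MD5 a 128-bit (32 hex char) one.
print(len(sha1_hash), len(md5_hash))
```

The salt makes precomputed tables of common-password hashes useless: the same password stored under two different salts produces two unrelated digests, while the same salt and password always reproduce the digest, which is how a login check verifies a submitted password.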
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353202
Shawana Jamil, Rashda Ibrahim
The explosive growth of information in recent years has raised the challenge of extracting useful information from a plethora of data, which has led to growing interest in data warehousing. Different indexing techniques have been developed for fast data retrieval in a data warehouse environment, but it is difficult to find an appropriate technique for a specific query type, so an investigation is needed. The objective of this paper is to compare indexing techniques, to identify the factors to consider when selecting an indexing technique for data warehouse applications, and to evaluate the techniques against different types of data warehouse queries. The paper focuses on the performance evaluation of three data warehouse queries under three different indexing techniques, observing the impact of variable-size data with respect to time and space complexity.
Title: Performance analysis of indexing techniques in Data warehousing
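One index commonly evaluated in such comparisons, the bitmap index, is easy to sketch: for a low-cardinality column, keep one bit per row per distinct value, and answer multi-predicate queries with bitwise AND/OR. The fact table below is invented, and which three techniques the paper actually compares is not stated in the abstract.

```python
from collections import defaultdict

# Invented fact table rows: (region, quarter).
rows = [("north", "Q1"), ("south", "Q1"), ("north", "Q2"),
        ("south", "Q2"), ("north", "Q1"), ("north", "Q2")]

# Build one bitmap (an int used as a bit vector) per distinct column value.
region_bm = defaultdict(int)
quarter_bm = defaultdict(int)
for i, (region, quarter) in enumerate(rows):
    region_bm[region] |= 1 << i
    quarter_bm[quarter] |= 1 << i

# The query "region = north AND quarter = Q1" becomes a single bitwise AND.
hits = region_bm["north"] & quarter_bm["Q1"]
matching_rows = [i for i in range(len(rows)) if hits >> i & 1]
print(matching_rows)
```

This is why bitmap indexes shine on the multi-dimensional filter queries typical of warehouses, while their space cost grows with column cardinality, one of the trade-offs such a performance study has to weigh.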
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353207
Amjad Ali, M. Khan
In Artificial Intelligence, knowledge representation is a combination of data structures and interpretive procedures that leads to knowledgeable behavior. It is therefore important to investigate knowledge representation techniques in which knowledge can be easily and efficiently represented in a computer. This paper compares various knowledge representation techniques and shows that predicate logic is a more efficient and more accurate knowledge representation scheme. The algorithm presented here splits English text into phrases/constituents and represents these in predicate logic; it also regenerates the original sentences from the representation in order to check the accuracy of the representation. Tested on real English text, the algorithm achieves an accuracy of 80%. If the text consists of simple discourse units, the algorithm represents it accurately in predicate logic and accurately retrieves the original text from the representation.
Title: Selecting predicate logic for knowledge representation by comparative study of knowledge representation schemes
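The core transformation — from a simple subject-verb-object sentence into a predicate-logic term and back — can be sketched as below. The splitting here is a naive whitespace heuristic that handles only three-word sentences, far simpler than the paper's constituent analysis, and the round-trip check mirrors how the paper validates its representation.

```python
def to_predicate(sentence):
    """Turn 'John loves Mary.' into 'loves(John, Mary)' (toy SVO heuristic)."""
    subject, verb, obj = sentence.rstrip(".").split()
    return f"{verb}({subject}, {obj})"

def to_sentence(term):
    """Invert the representation to recover the original sentence."""
    verb, rest = term.rstrip(")").split("(")
    subject, obj = rest.split(", ")
    return f"{subject} {verb} {obj}."

fact = to_predicate("John loves Mary.")
print(fact)
print(to_sentence(fact))
```

If the regenerated sentence matches the input, the representation is judged accurate for that sentence, which is the check the paper applies across its test corpus.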
Pub Date: 2009-12-11 | DOI: 10.1109/ICET.2009.5353121
F. Nadeem, M. S. Awan, E. Leitgeb, M. S. Khan, S. Muhammed, G. Kandus
Free Space Optics (FSO) has tremendous potential to provide multi-gigabit-per-second transmission links for future aerospace applications. However, the widespread growth of the technology is hampered by availability issues related to weather influences on the link. Among the different weather attenuations, clouds play an important role, causing link outages of up to several hours. In this paper, cloud attenuation at different optical wavelengths is compared on the basis of Mie scattering theory, so that optical wavelengths with high immunity to cloud attenuation can be selected for optical wireless communication links.
Title: Comparing the cloud attenuation for different optical wavelengths
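The comparison above rests on full Mie scattering computations, which require droplet size distributions and refractive indices and are not reproduced here. As a lighter illustration of how FSO attenuation depends on wavelength, the empirical Kruse model relates specific attenuation (dB/km) to visibility V (km) and wavelength λ (nm) as (3.91/V)·(λ/550)^(-q). This is a fog/haze rule of thumb, not the paper's cloud analysis.

```python
def kruse_attenuation_db_per_km(visibility_km, wavelength_nm):
    """Empirical Kruse model for atmospheric attenuation (dB/km)."""
    # Size-distribution exponent q as a piecewise function of visibility.
    if visibility_km > 50:
        q = 1.6
    elif visibility_km > 6:
        q = 1.3
    else:
        q = 0.585 * visibility_km ** (1 / 3)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

# In this model, longer wavelengths suffer less attenuation in haze.
for wl in (850, 1310, 1550):
    print(wl, round(kruse_attenuation_db_per_km(2.0, wl), 2))
```

Under dense clouds and fog the droplet sizes approach or exceed the optical wavelengths, which is why the paper turns to Mie theory rather than such visibility-based approximations.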