Pub Date: 2008-07-29  DOI: 10.1109/ICCCE.2008.4580793
Z. Elabdin, M.R. Islam, O. O. Khalifa, A. F. Ismail
The lack of observations of microwave signal propagation during duststorms in the Sahara region and the Middle East motivated the present work. The characteristics of duststorms and their effect on microwave signal propagation are investigated, and a physical model of the duststorm is proposed from measured data. The paper presents a propagation study carried out in Sudan using existing microwave links monitored by a Marconi microwave monitoring system, calibrated against meteorological data recorded in Sudan. The measured attenuation is also compared with the values calculated by mathematical prediction models.
{"title":"Duststorm measurements for the prediction of attenuation on microwave signals in Sudan","authors":"Z. Elabdin, M.R. Islam, O. O. Khalifa, A. F. Ismail","doi":"10.1109/ICCCE.2008.4580793","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580793","url":null,"abstract":"The Lack of observation on microwave signal propagation during duststorm in Sahara region and Middle East stimulated the present work. Investigation on the characteristics of duststorm and its effect on microwave signal propagation have been presented. A physical models of duststorm from measured data proposed. This paper has presented a study on microwave propagation carried out in Sudan using existing microwave links monitoring by Marconi microwave monitoring system with the calibration of the meteorological data recorded in Sudan. The paper has also investigated and compared the measured attenuation with the value calculated by the mathematical prediction models.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130544720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
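The prediction models compared in the paper are not reproduced here, but dust-attenuation models in the literature typically express specific attenuation as a power law in visibility. A minimal sketch of that general shape, with illustrative constants `c` and `gamma` that are assumptions, not values fitted in the paper:

```python
def dust_attenuation_db_per_km(visibility_km, c=1.0, gamma=1.0):
    """Specific attenuation (dB/km) as a generic power law in visibility.

    c and gamma are hypothetical fitting constants; real models fit them
    from measured duststorm data and the operating frequency.
    """
    if visibility_km <= 0:
        raise ValueError("visibility must be positive")
    return c / (visibility_km ** gamma)

def path_attenuation_db(link_length_km, visibility_km, c=1.0, gamma=1.0):
    """Total attenuation over a link, assuming uniform dust along the path."""
    return link_length_km * dust_attenuation_db_per_km(visibility_km, c, gamma)
```

Lower visibility (denser dust) yields higher predicted attenuation, which is the relationship the paper's measured-versus-predicted comparison examines.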
Pub Date: 2008-07-29  DOI: 10.1109/ICCCE.2008.4580662
K. Wong, Komarudin
Ant Colony Optimization (ACO) has enjoyed continuous development and improvement since it was introduced in the 1990s, driven primarily by its convergence problem. Balancing intensification and diversification in the search space is an important factor in intelligently improving solutions and avoiding premature convergence. This balance can be achieved by controlling the parameter values, a practice called parameter tuning. Unfortunately, little research has been reported that adopts parameter tuning as a strategy for balancing intensification and diversification in ACO. This paper reviews the parameter-tuning approaches for ACO published in the literature.
{"title":"Parameter tuning for ant colony optimization: A review","authors":"K. Wong, Komarudin","doi":"10.1109/ICCCE.2008.4580662","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580662","url":null,"abstract":"Ant Colony Optimization (ACO) has enjoyed development and improvement since it was introduced in 1990s. The development of ACO was primarily driven by its convergence problem. Balancing between intensification and diversification in the search space is an important factor to intelligently improve solutions and avoid premature convergence. Balancing intensification and diversification can be gained by controlling the parameters value (this is called parameter tuning). Unfortunately, only little research has been reported to adopt parameter tuning as a strategy to balance intensification and diversification in ACO. This paper review parameter tuning in ACO published in the literature.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128590837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
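As a concrete illustration of how parameter values steer intensification versus diversification, consider the classic ACO transition rule, where the probability of choosing an edge is proportional to pheromone^alpha times heuristic^beta. This sketch uses the generic rule, not any specific tuned variant from the review:

```python
def transition_probs(tau, eta, alpha, beta):
    """Probability of choosing each candidate edge under the classic
    ACO decision rule: p_i proportional to tau_i**alpha * eta_i**beta."""
    weights = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
    total = sum(weights)
    return [w / total for w in weights]

tau = [1.0, 1.0, 1.0, 9.0]   # pheromone trails; edge 3 is heavily reinforced
eta = [1.0, 1.0, 1.0, 1.0]   # uniform heuristic desirability

explore = transition_probs(tau, eta, alpha=0.0, beta=1.0)  # ignores pheromone
exploit = transition_probs(tau, eta, alpha=2.0, beta=1.0)  # amplifies pheromone
```

With alpha = 0 every edge is equally likely (pure diversification); with alpha = 2 the reinforced edge dominates the choice, which is exactly the premature-convergence pressure that parameter tuning tries to manage.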
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580796
A. Jantan, S.A. Hatem, A. Alsayh, S. Khatun, M. Rasid
Hierarchical reliable multicast transport protocols partition group members into local groups and allocate one local repair node to each group to distribute the task of detecting and recovering lost packets. The repair node uses the data stored in its buffer to retransmit requested packets to the requesting receivers. The problem is that repair nodes keep these packets for a long time, until they receive acknowledgments of correct reception from all of their child receivers. Keeping these packets creates congestion, which decreases network throughput. This paper proposes a new scheme that solves this problem by distributing the required packets between the repair node, which we call the control receiver, and selected receivers that have already received the packets correctly. This distribution decreases the number of packets in the control receiver's buffer, relieving the congestion and increasing network throughput.
{"title":"A new scalable reliable multicast transport protocol using perfect buffer management","authors":"A. Jantan, S.A. Hatem, A. Alsayh, S. Khatun, M. Rasid","doi":"10.1109/ICCCE.2008.4580796","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580796","url":null,"abstract":"Hierarchical reliable multicast transport protocols partition group members into local groups and allocate one local repair node for each local group to distribute the task of detecting and recovering lost packets. This repair node uses the data stored in its buffer to retransmit the requested packets to the requesting receivers. The problem is that they keep these packets for a long time until they get acknowledgments from all their children receivers of correctly receiving these packets. Keeping these packets creates a congestion problem which decreases the network throughput. This paper proposes a new scheme to solve this problem, by distributing the required packets between the repair node which we call it here the control receiver and some selected receives that have already received these packets correctly. The distribution of the packets decreases the number of packets in the repair node buffer, thus solve the congestion problem and increase the network throughput.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116662964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
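A minimal sketch of the idea behind the scheme, with invented names: buffered packets are shared out between the control receiver and helper receivers that already hold correct copies, so no single buffer must retain everything. The round-robin policy here is illustrative; the paper's scheme selects holders among receivers known to have received each packet correctly.

```python
def assign_holders(packet_ids, helpers):
    """Round-robin assignment of buffered packets between the control
    receiver and helper receivers (hypothetical placement policy)."""
    nodes = ["control"] + list(helpers)
    return {pid: nodes[i % len(nodes)] for i, pid in enumerate(packet_ids)}

def retransmit(assignment, pid):
    """A repair request for packet pid is served by whichever node buffers it."""
    return assignment[pid]
```

With two helpers, the control receiver buffers only a third of the outstanding packets, which is the buffer-occupancy reduction the abstract describes.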
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580574
Z. H. Ahmad, O. Khalifa
Although text-to-speech (TTS) technology has attracted interest from amateur and professional researchers in developing a standard Malay (SM) synthesizer, to this day there is hardly any highly intelligible TTS system that is freely accessible to the community of SM speakers. Identification of the core components required for an SM TTS system, especially in establishing the NLP module, therefore needs to be carried out intensively. This paper presents a rule-based text-to-speech synthesis system for standard Malay, named SMaTTS. An intelligible and adequately natural-sounding formant-based speech synthesis system with a light, user-friendly graphical user interface (GUI) was developed. The available Malay TTS synthesizers, the algorithms and speech engines they use, and the strong and weak points of each system are also discussed. Assessment was made at all possible levels: phoneme, word, and sentence. The overall performance of the system is analyzed using categorical estimation (CE) for a comprehensive analysis, and results and suggestions for future improvements are discussed.
{"title":"Towards designing a high intelligibility rule based standard malay text-to-speech synthesis system","authors":"Z. H. Ahmad, O. Khalifa","doi":"10.1109/ICCCE.2008.4580574","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580574","url":null,"abstract":"Although text-to-speech (TTS) technology has gained some interest from amateur and professional researchers in developing a standard Malay (SM) text-to-speech synthesizer, however, up to this day, there is rarely any high intelligible TTS system which is freely accessible to be implemented and introduced to the community of SM speakers. Therefore, identification of the core components required for the development of SM TTS system especially in establishing the NLP module should be carried out intensively. This paper presents a rule-based text-to-speech synthesis system for standard Malay, named SMaTTS. An intelligible and adequately natural sounding formant-based speech synthesis system with a light and user-friendly graphical user interface (GUI) was developed. Result and suggestion for future improvements is discussed. The available Malay TTS synthesizers, the algorithms and speech engine in used, as well as their strong and weak points for each of the system are discussed in this paper. Assessment was made at all possible levels; phoneme, word and sentence level. The overall performance of the system is analyzed using categorical estimation (CE) for a comprehensive analysis. 
Result and suggestion for future improvements is discussed.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121269305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
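Standard Malay spelling is close to phonemic, which is why a rule-based letter-to-sound stage is workable for the NLP module. A toy grapheme-to-phoneme pass in that spirit follows; the rule list is a simplified illustration, not SMaTTS's actual rule set:

```python
# A few common Malay letter-to-sound rules; digraphs must be tried first.
RULES = [("ng", "ŋ"), ("ny", "ɲ"), ("sy", "ʃ"), ("kh", "x"),
         ("c", "tʃ"), ("j", "dʒ")]

def g2p(word):
    """Greedy longest-match rule application; unmatched letters pass through."""
    out, i = [], 0
    while i < len(word):
        for graph, phone in RULES:
            if word.startswith(graph, i):
                out.append(phone)
                i += len(graph)
                break
        else:
            out.append(word[i])
            i += 1
    return out
```

For example, g2p("nyanyi") yields ɲ-a-ɲ-i; ordering digraphs before single letters matters, or "ny" would wrongly be read as n plus y.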
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580733
Maleeha Kiran, Aisha Hassan Abdalla, Yap Yee Jiun, Lim Mei Kuan
Heterogeneous computing environments such as grid computing allow the sharing and aggregation of a wide variety of geographically distributed computational resources (such as supercomputers, clusters, data sources, people, and storage systems) and present them as a single, unified resource for solving large-scale, data-intensive computing applications. A common problem in grid computing is selecting the most efficient resource to run a particular program. Users are also required to reserve in advance the resources needed to run their programs on the grid, yet at present the execution time of any submission depends on guesswork by the user. This leads to inefficient use of resources and extra operating costs such as idle queues or machines. A prediction module was therefore designed and developed to aid the user. The module estimates the execution time of a program using aspects of static analysis, analytical benchmarking, and a compiler-based approach. It consists of four main stages, each with its own functionality: an incoming program is categorized, parsed, and broken down into smaller units known as tokens; the complexity of and relationships among these tokens are analyzed; and finally the execution time of the entire submitted program is estimated.
{"title":"A prediction module to optimize scheduling in a grid computing environment","authors":"Maleeha Kiran, Aisha Hassan Abdalla, Yap Yee Jiun, Lim Mei Kuan","doi":"10.1109/ICCCE.2008.4580733","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580733","url":null,"abstract":"Heterogeneous computing environment such as grid computing allows sharing and aggregation of a wide variety of geographically distributed computational resources (such as supercomputers, clusters, data sources, people and storage systems) and present them as a single, unified resource for solving large-scale and data-intensive computing applications. A common problem arising in grid computing is to select the most efficient resource to run a particular program. Also users are required to reserve in advance the resources needed to run their program on the grid. At present the execution time of any program submission depends on guesswork by the user. This leads to inefficient use of resources, incurring extra operation costs such as idling queues or machines. Thus a prediction module was designed and developed to aid the user. This module estimates the execution time of a program by using aspects of static analysis, analytical benchmarking and compiler based approach. It consists of 4 main stages; each with its own functionality. An incoming program is categorized accordingly, parsed and then broken down into smaller units known as tokens. 
The complexity and relationship amongst these tokens are then analyzed and finally the execution time is estimated for the entire program that was submitted.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127117623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
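The tokenization stage of such a module can be sketched with Python's own tokenizer: break the source into tokens, then weight each token type with a per-type cost. The weights below are invented placeholders; a real predictor would calibrate them through analytical benchmarking and also model loops and the target machine.

```python
import io
import tokenize

def estimate_cost(source, cost_per_type):
    """Sum hypothetical per-token costs over a program's token stream."""
    total = 0.0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        total += cost_per_type.get(tok.type, 0.0)  # unknown types cost nothing
    return total

# Invented per-token-type weights, standing in for benchmarked costs.
costs = {tokenize.NAME: 1.0, tokenize.NUMBER: 0.5, tokenize.OP: 0.2}
program = "x = 1\nfor i in range(10):\n    x += i\n"
```

Calling `estimate_cost(program, costs)` gives a crude static estimate that grows with program size, which is the role the prediction module's final stage plays.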
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580565
A. F. Shamsudin, M.T. Azhar, F. Salih, H. M. Noon, A. R. Ahlan, M. Suhaimi, Z. Bal-Fagih, N. Ibrahim, F. Mohd. Badri
Natural-to-natural DNA sequencing has achieved high accuracy with existing visualization and alignment techniques. This dementia brain research on animals applied motif-matching techniques to natural-to-artificial (computer-generated) DNA sequencing. The natural DNA sequence of a dementia brain in a rat was extracted and visualized at the DNA analytical laboratory, while the artificial DNA sequences were generated by dementia-brain software at the brain-computing laboratory. DNA thumbprint motif pattern matching showed significant levels of fitness in natural-to-artificial sequencing, as in natural-to-natural DNA sequencing. Matching laboratory-extracted DNA against computer-generated DNA will enhance the use of brain computing in predicting neural degradation such as dementia.
{"title":"Pattern matching algorithm for artificial to natural DNA codes of a dementia brain","authors":"A. F. Shamsudin, M.T. Azhar, F. Salih, H. M. Noon, A. R. Ahlan, M. Suhaimi, Z. Bal-Fagih, N. Ibrahim, F. Mohd. Badri","doi":"10.1109/ICCCE.2008.4580565","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580565","url":null,"abstract":"Natural to natural DNA sequencing has achieved high degree of accuracies in existing visualization and alignment techniques. This dementia brain research on animals used motif matching techniques in the natural to computer generated artificial DNA sequencing. The natural DNA sequence of dementia brain in a rat was extracted and visualized at the DNA analytical laboratory. The artificial DNA sequences are generated by a dementia-brain software at the brain-computing laboratory. The use of DNA thumbprint motif pattern matching showed significant levels of fitness in the natural-to-artificial as in the natural-to-natural DNA sequencing. The matching between the laboratory extracted DNA and the computer generated DNA will enhance the use of brain-computing in predicting neural degradation such as dementia.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127540534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
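The core operation, matching a motif against a DNA sequence while tolerating small differences, can be sketched naively. This is an illustrative scan, not the authors' thumbprint algorithm:

```python
def motif_matches(sequence, motif, max_mismatch=1):
    """Return start positions where motif aligns to sequence with at
    most max_mismatch character mismatches (naive Hamming-distance scan)."""
    hits = []
    for i in range(len(sequence) - len(motif) + 1):
        window = sequence[i:i + len(motif)]
        mismatches = sum(1 for a, b in zip(window, motif) if a != b)
        if mismatches <= max_mismatch:
            hits.append(i)
    return hits
```

Exact matching finds "ACGT" at positions 0 and 4 of "ACGTACGT"; raising `max_mismatch` admits near-matches, which is one simple way a fitness level between natural and artificial sequences could be scored.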
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580614
H. N. Suma, S. Murali
Activity patterns in fMRI data represent the execution of different physical and mental tasks. Each pattern is unique and located at a specific location in the brain. The main aim of analyzing these datasets is to localize the areas of the brain activated in a given experiment; the basic analysis involves carrying out a statistical test for activation at thousands of locations in the brain. In this paper an attempt is made to develop and train classifiers based on the subjects' fMRI sequences in order to predict the tasks performed. The fMRI dataset is huge, and the data for different tasks are dimensionally dissimilar. Dimensionality reduction of high-dimensional data is useful for three general reasons: it reduces the computational requirements of subsequent operations on the data; it eliminates redundancies in the data; and, where the feature datasets' dimensionalities do not match, it allows a common dimension to be arrived at with the available data. All three reasons apply here and motivate the use of Principal Component Analysis (PCA), a standard method for creating uncorrelated variables from best-fitting linear combinations of the variables in the raw data. The depth information is extracted using Statistical Parametric Mapping (SPM). Templates comprising principal components represent individual activity and are fed to a back-propagation training algorithm.
{"title":"Neural network approach towards pattern classification using fMRI activation maps","authors":"H. N. Suma, S. Murali","doi":"10.1109/ICCCE.2008.4580614","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580614","url":null,"abstract":"The activity patterns in fMRI data represent execution of different physical and mental tasks. Each of these patterns is unique and located in specific location in the brain. The main aim of analyzing these datasets is to localize the areas of the brain that have been activated in a given experiment. The basic analysis involves carrying out a statistical test for activation at thousands of locations in the brain. In this paper an attempt is made to develop and train classifiers based on the subjectspsila fMRI sequences in order to predict the tasks performed. The fMRI data set is huge and also the data size for different tasks is dimensionally dissimilar. Dimensionality reduction of high dimensional data is useful for three general reasons; it reduces computational requirements for subsequent operations on the data, eliminates redundancies in the data, and, in cases where the feature data set dimensionality doesnpsilat match then a common dimension is to be arrived at with the available data. All three reasons apply here, and motivate the use of Principal Component Analysis (PCA), a standard method for creating uncorrelated variables from best-fitting linear combinations of the variables in the raw data. The depth information data is extracted using Statistical Parametric mapping (SPM). The templates comprising of principal components represent individual activity. These are then fed to the back propagation training algorithm. 
The trained network is capable of classifying the test pattern into the corresponding defined class.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124899129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
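The PCA step can be illustrated in miniature: find the leading principal component of centered data by power iteration on the covariance matrix, then project each observation onto it. This is a from-scratch sketch; SPM and real fMRI pipelines use full eigendecompositions on far larger matrices.

```python
import random

def first_pc(data, iters=200, seed=0):
    """Leading principal component via power iteration on the sample covariance."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]  # positive start avoids a degenerate seed
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(data, v):
    """Reduce each observation to a single coordinate along the component."""
    return [sum(x * c for x, c in zip(row, v)) for row in data]
```

On data whose variance is concentrated along the first feature, the recovered component aligns almost entirely with that axis, which is the redundancy-elimination effect the abstract appeals to.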
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580558
M. Jono, A. M. Yasin, N. Za'ba, P. Ramakrisnan, M. Isa
Reading comprehension is not an easy task, especially for young readers. The best way to simplify it is to exploit the versatile possibilities of the computer and the state of the art in multimedia technology; a learning strategy using 3D animation is therefore required. A framework will first be developed concentrating on reading comprehension practice, namely narrative, descriptive, dialogue, and factual texts. The proposed framework draws on Gagne's model, Gestalt theory, and the online instruction principles of Kolbo & Washington. In the courseware design process, this framework will help overcome the lack of proper guidance in delivering materials. It will provide an alternative development strategy for courseware designers as well as a new learning strategy that will further motivate children and enhance their learning process.
{"title":"A framework for reading comprehension practice using interactive 3D animation","authors":"M. Jono, A. M. Yasin, N. Za'ba, P. Ramakrisnan, M. Isa","doi":"10.1109/ICCCE.2008.4580558","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580558","url":null,"abstract":"Reading comprehension is not an easy task especially for young readers. Due to this, the best way to simplify reading comprehension is to utilize the versatile possibilities of the computer and to expand the useful state of the art of multimedia technology. Therefore a learning strategy using 3D animation is required. A framework will first be developed by concentrating on reading comprehension practice namely narrative, descriptive, dialogues and factual. The sources of the proposed framework are based on Gagnepsilas model, Gestalt theory and online instruction by Kolbo & Washington. In the process of designing the courseware, this framework it will enhance and overcome the problem of lack of proper guidance in delivering materials. From this it will provide an alternative development strategy for courseware designers as well producing a new learning strategy that will further motivate children and enhance their learning process.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125018058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580737
N. A. Ismail, E. A. O'Brien
Retrieving both digital and physical photos is not easy, especially when collections grow into the thousands. In this paper, we describe an interactive web-based photo retrieval system that enables personal digital photo users to browse their photos using multimodal interaction. The system lets users browse their personal digital photos in the World Wide Web (WWW) environment not only with mouse-click input but also with speech input. The prototype system and its architecture utilize web technology and were built using web scripting (JavaScript, XHTML, ASP, and an XML-based markup language) together with an image database. All prototype programs and data files, including the user's photo repository, profiles, dialogues, grammars, prompts, and retrieval engine, are stored on the web server. Our approach also supports a human-computer speech dialogue for browsing image content by four main categories (Who? What? When? and Where?).
{"title":"Enabling multimodal interaction in web-based personal digital photo browsing","authors":"N. A. Ismail, E. A. O'Brien","doi":"10.1109/ICCCE.2008.4580737","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580737","url":null,"abstract":"Retrieval process of both digital photos and physical photos has not been easy, especially when the collections grow into thousands. In this paper, we describe an interactive web-based photo retrieval system that enables personal digital photo users to accomplish photo browsing by using multimodal interaction. This system not only enables users to use mouse clicks input modalities but also speech input modality to browse their personal digital photos in the World Wide Web (WWW) environment. The prototype system and it architecture utilize web technology which was build using web programming scripting (JavaScript, XHTML, ASP, XML based markup language) and image database in order to achieve its objective. All prototype programs and data files including the userpsilas photo repository, profiles, dialogues, grammars, prompt, and retrieval engine are stored and located in the web server. Our approach also consists of human-computer speech dialogue based on photo browsing of image content by four main categories (Who? What? When? and Where?). 
Our user study with 20 digital photo users showed that the participants reacted positively to their experience with the system interactions.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125943843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
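The four-category browsing dialogue can be sketched as a simple filter over photo metadata, where the keys mirror the Who/What/When/Where categories. The field names and records here are invented for illustration; the prototype implements this server-side in ASP with XML-based grammars.

```python
photos = [
    {"who": "anna", "what": "birthday", "when": "2007", "where": "kuala lumpur"},
    {"who": "anna", "what": "hiking",   "when": "2008", "where": "cameron highlands"},
    {"who": "ben",  "what": "birthday", "when": "2008", "where": "kuala lumpur"},
]

def browse(collection, **criteria):
    """Return photos matching every criterion, whether it came from a
    mouse click or a recognized speech utterance,
    e.g. browse(photos, who="anna", when="2008")."""
    return [p for p in collection
            if all(p.get(k) == v for k, v in criteria.items())]
```

A spoken query like "show Anna's photos from 2008" reduces to `browse(photos, who="anna", when="2008")`, so both input modalities converge on the same retrieval call.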
Pub Date: 2008-05-13  DOI: 10.1109/ICCCE.2008.4580696
Sapiee Jamel, M. M. Deris
The importance of data confidentiality, integrity, and availability in any data storage and transmission is undeniable, especially with the development of highly interconnected communication media such as the Internet. Disclosure of confidential or secret data to unauthorised users has become an important issue and indirectly creates opportunities for ongoing research and development of reliable, strong cryptographic algorithms. Every modern cryptographic algorithm must include the elements of confusion and diffusion in its design, and the search for effective, efficient diffusive elements remains an active topic among information-security researchers. In this paper, we investigate the diffusive property of three cryptographic algorithms, Rijndael, Twofish, and Safer+, using simple test vectors. Binary and decimal representations are used to show the characteristics of each diffusive element, which plays an important role in ensuring that any ciphertext generated by a cryptographic algorithm is random and free of predictable patterns that a cryptanalyst might exploit to decipher the original message.
{"title":"Diffusive primitives in THE design of modern cryptographic algorithms","authors":"Sapiee Jamel, M. M. Deris","doi":"10.1109/ICCCE.2008.4580696","DOIUrl":"https://doi.org/10.1109/ICCCE.2008.4580696","url":null,"abstract":"The importance of data confidentiality, integrity and availability in any data storage and transmission is undeniable especially with the development of highly integrated communication mediums such as the Internet. Disclosure of confidential or secret data to unauthorised users has become an important issue and indirectly give opportunity for ongoing research and development for reliable and strong cryptographic algorithms. Every modern cryptographic algorithm must have the elements of confusion and diffusion in it design. Research on finding effective and efficient diffusive element is still ongoing and highly discussed amongst researchers in the area of information security. In this paper, we investigate the diffusive property of three cryptographic algorithms: Rijndael, Twofish and Safer+ using a simple test vectors. Binary and decimal representation will be used to show the characteristic of each diffusive that play an important role as it will ensure any ciphertext generated from cryptographic algorithm are random and free from any predicted pattern which might be used by cryptanalyst to decipher the original message.","PeriodicalId":274652,"journal":{"name":"2008 International Conference on Computer and Communication Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126035729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
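The diffusive property the paper examines is commonly quantified as the avalanche effect: flip one input bit and count how many output bits change, expecting about half. A sketch of that measurement, using SHA-256 from the standard library as a stand-in diffusive primitive, since Rijndael, Twofish, and Safer+ implementations are not bundled with Python:

```python
import hashlib

def avalanche(data: bytes, bit: int) -> int:
    """Number of SHA-256 output bits (of 256) that change when one input
    bit is flipped; a count near 128 indicates strong diffusion."""
    flipped = bytearray(data)
    flipped[bit // 8] ^= 1 << (bit % 8)          # flip the chosen input bit
    h1 = hashlib.sha256(bytes(data)).digest()
    h2 = hashlib.sha256(bytes(flipped)).digest()
    return sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
```

Running this over many bit positions and inputs gives an empirical picture of diffusion; a cipher whose counts drifted far from half the output width would leak the kind of predictable pattern the paper warns about.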