The approaches to quantify web application security scanners quality: a review
Pub Date : 2018-09-28 DOI: 10.19101/ijacr.2018.838012
Lim Kah Seng, N. Ithnin, Syed Zainudeen Mohd Said
A web application security scanner is a computer program that assesses web application security using penetration testing techniques. The benefits of automated web application penetration testing are substantial: a scanner not only reduces the time, cost, and resources required for penetration testing, but also removes the reliance on a test engineer's expert knowledge. Nevertheless, web application security scanners suffer from low test coverage and generate inaccurate test results. Consequently, experiments are frequently conducted to quantitatively measure a scanner's quality and to investigate its strengths and limitations. However, neither a standard methodology nor standard criteria are available for quantifying web application security scanner quality. Hence, in this paper a systematic review is conducted to analyse the methodologies and criteria used for quantifying web application security scanners' quality. In this survey, the experimental methodologies and criteria that have been used to quantify scanner quality are classified and reviewed using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) protocol. The objectives are to give practitioners an understanding of the methodologies and criteria available for measuring web application security scanners' test coverage, attack coverage, and vulnerability detection rate, and to provide critical hints for the development of the next testing framework, model, methodology, or criteria for measuring web application security scanner quality.
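As a concrete illustration of the vulnerability detection rate criterion that many of the surveyed experiments rely on, the following Python sketch scores a scanner's report against the vulnerabilities seeded in a benchmark web application. The vulnerability identifiers and the choice of recall/precision/F-measure as the quality criteria are illustrative assumptions, not a prescription from any single reviewed study.

```python
# A minimal sketch of the vulnerability-detection criteria commonly used in
# scanner benchmarks: detection rate (recall), precision, and F-measure.
# The vulnerability identifiers below are hypothetical examples.

def detection_metrics(reported: set, seeded: set) -> dict:
    """Compare scanner findings against vulnerabilities seeded in a test bed."""
    true_positives = reported & seeded
    false_positives = reported - seeded
    recall = len(true_positives) / len(seeded) if seeded else 0.0
    precision = len(true_positives) / len(reported) if reported else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"detection_rate": recall,
            "false_positives": len(false_positives),
            "precision": precision,
            "f_measure": f_measure}

# Example: a test bed seeded with three known flaws; the scanner finds two
# of them and reports one spurious finding.
seeded = {"sqli-login", "xss-search", "csrf-profile"}
reported = {"sqli-login", "xss-search", "xss-footer"}
print(detection_metrics(reported, seeded))
```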
{"title":"The approaches to quantify web application security scanners quality: a review","authors":"Lim Kah Seng, N. Ithnin, Syed Zainudeen Mohd Said","doi":"10.19101/ijacr.2018.838012","DOIUrl":"https://doi.org/10.19101/ijacr.2018.838012","url":null,"abstract":"The web application security scanner is a computer program that assessed web application security with penetration testing technique. The benefit of automated web application penetration testing is huge, which web application security scanner not only reduced the time, cost, and resource required for web application penetration testing but also eliminate test engineer reliance on human knowledge. Nevertheless, web application security scanners are possessing weaknesses of low test coverage, and the scanners are generating inaccurate test results. Consequently, experimentations are frequently held to quantitatively quantify web application security scanner's quality to investigate the web application security scanner's strengths and limitations. However, there is a discovery that neither a standard methodology nor criterion is available for quantifying the web application security scanner's quality. Hence, in this paper systematic review is conducted and analysed the methodology and criterion used for quantifying web application security scanners' quality. In this survey, the experiment methodologies and criterions that had been used to quantify web application security scanner's quality is classified and review using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) protocol. The objectives are to provide practitioners with the understanding of methodologies and criterions that available for measuring web application security scanners’ test coverage, attack coverage, and vulnerability detection rate, while provides the critical hint for development of the next testing framework, model, methodology, or criterions, to measure web application security scanner quality.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122422180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic review and analysis of the heart disease prediction methodology
Pub Date : 2018-09-28 DOI: 10.19101/IJACR.2018.837025
A. Dubey, Kavita Choudhary
Most decisions in medical diagnosis are taken on the basis of experts' opinions. In the case of heart diseases, however, the experts' decisions do not always reach a consensus, since the pattern of heart disorders varies considerably among patients. Researchers have been making continuous efforts to detect heart diseases at the primary stages by using different methodologies, in order to increase the chances of curing a condition that has one of the highest mortality rates in the world. The three main objectives of this study were to analyze the global impact of heart diseases on the basis of mortality rates, to assess the risk of heart diseases in different age groups, and to discuss the advantages and disadvantages of methodologies that have previously been used for predicting heart disease at the primary stage. The mortality rate due to heart diseases was assessed according to attributes such as age, population group, clinical risk factors, and geographical location. Different methodologies were analyzed on the basis of results obtained from literature searches in IEEE, Elsevier, Springer, and other publications. The percentage of deaths due to heart diseases increases with age, indicating that the risk of developing heart disease is directly proportional to age. The analysis of various methodological approaches indicated that data mining combined with optimization methods can be effective in predicting heart disease at the initial stages. The data currently available on heart diseases can help design better frameworks for predicting new cases. The statistics of heart disease-related deaths show a worrying trend worldwide. This study concludes that a framework based on hybrid approaches, combining the classification and clustering methods of data mining with algorithms inspired by biological systems, can prove to be a landmark in the field of heart disease prediction and detection.
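To make the proposed hybrid concrete, here is a minimal Python sketch of the clustering-plus-classification combination the study points to: k-means groups the records first, and the cluster label is appended as a feature for a classifier. The synthetic data, the pairing of k-means with a random forest, and all parameters are assumptions for illustration; a real study would use an actual heart-disease dataset.

```python
# A minimal sketch of a hybrid data-mining pipeline: k-means clustering
# followed by classification, with the cluster id as an extra feature.
# Synthetic data stands in for a real heart-disease dataset; scikit-learn
# is assumed to be available.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))             # e.g. age, blood pressure, cholesterol...
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # synthetic "disease" label

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_hybrid = np.column_stack([X, clusters])  # append cluster id as a feature

X_tr, X_te, y_tr, y_te = train_test_split(X_hybrid, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```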
{"title":"A systematic review and analysis of the heart disease prediction methodology","authors":"A. Dubey, Kavita Choudhary","doi":"10.19101/IJACR.2018.837025","DOIUrl":"https://doi.org/10.19101/IJACR.2018.837025","url":null,"abstract":"Most of the decisions in medical diagnosis are taken on the basis of experts’ opinions. In the case of heart diseases, however, the experts’ decisions do not always reach a consensus since the pattern of heart disorders varies considerably among patients. Researchers have been making continuous efforts to detect heart diseases at the primary stages by using different methodologies in order to increase the chances of curing a condition that has one of the highest mortality rates in the world. The three main objectives of this study were to analyze the global impact of heart diseases on the basis of mortality rates, to assess the risk of heart diseases in different age groups, and to discuss the advantages and disadvantages of methodologies that have been used previously for predicting heart disease at the primary stage. The mortality rate due to heart diseases was assessed according to attributes such as age, population group, clinical risk factors, and geographical locations. Different methodologies were analyzed on the basis of results obtained from literature searches in IEEE, Elsevier, Springer, and other publications. The percentage of deaths due to heart diseases increase with age, indicating that the risk of developing heart disease is directly proportional to age. The analysis of various methodological approaches indicated that data mining and the combination of optimization methods can be effective in predicting heart disease at the initial stages. The current data available on heart diseases can help design better frameworks for predicting new cases. The statistics of heart disease-related death shows a worrying trend worldwide. This study concludes that a framework based on hybrid approaches consisting of the combination of classification and clustering methods of data mining, along with biological system inspired algorithms, can prove to be a landmark in the field of heart disease prediction and detection.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"116 1-2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reconfigurable architecture for object detection using adaptive threshold
Pub Date : 2018-09-28 DOI: 10.19101/ijacr.2018.838006
Sangeeta. M. Gangannavar, S. S. Navalgund, Satish S. Bhairannawar
The detection of objects is important in many computer vision applications. This paper proposes a reconfigurable architecture for object detection using an adaptive threshold, together with an efficient algorithm for removing salt and pepper noise from colour and grayscale images. The main objective of this paper is to design an alternative architecture for object detection using an adaptive threshold. A median-type filter that preserves edges while removing salt and pepper noise from the input and reference images is discussed. The pre-processed images are passed through the 2D discrete wavelet transform (2D-DWT) to remove variable illumination and to select the appropriate sub-band, i.e., the low-low (LL) band, which contains the maximum information of the original image. Modified background subtraction is used to remove the background from the LL bands of the input and reference images to obtain a foreground image. The detected object is fed to a median filter to remove any small amount of noise still present in the image. The modified decision based partially trimmed global median (MDBPTGM) filter was used and gives better results in terms of mean square error (MSE), peak signal to noise ratio (PSNR), and image enhancement factor (IEF). Hardware parameters such as slice registers, flip-flop pairs, latches, lookup tables (LUTs), shift registers, and memory usage were compared with existing techniques. The proposed architecture uses fewer hardware resources, meaning the proposed design reduces power and area usage in comparison to the other techniques.
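The following Python sketch walks through a software analogue of the described pipeline, assuming PyWavelets and SciPy are available; the wavelet choice, filter sizes, and mean-plus-standard-deviation threshold are illustrative assumptions, and this is not the authors' FPGA architecture or their MDBPTGM filter.

```python
# A software analogue of the pipeline: median filtering for salt-and-pepper
# noise, 2D-DWT keeping the LL band, background subtraction against a
# reference frame, and an adaptive threshold on the residual.
import numpy as np
import pywt
from scipy.ndimage import median_filter

def detect_object(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # 1. Pre-process both images with a median filter (removes salt & pepper).
    frame_f = median_filter(frame, size=3)
    ref_f = median_filter(reference, size=3)
    # 2. 2D-DWT; the LL (approximation) band keeps most of the information.
    ll_frame, _ = pywt.dwt2(frame_f, "haar")
    ll_ref, _ = pywt.dwt2(ref_f, "haar")
    # 3. Background subtraction to isolate the foreground.
    foreground = np.abs(ll_frame - ll_ref)
    # 4. Adaptive threshold: mean plus one standard deviation of the residual.
    mask = foreground > foreground.mean() + foreground.std()
    # 5. Final median filter to suppress any residual speckle.
    return median_filter(mask, size=3)

frame = np.random.rand(64, 64)
reference = np.random.rand(64, 64)
print(detect_object(frame, reference).sum(), "foreground pixels")
```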
{"title":"A reconfigurable architecture for object detection using adaptive threshold","authors":"Sangeeta. M. Gangannavar, S. S. Navalgund, Satish S. Bhairannawar","doi":"10.19101/ijacr.2018.838006","DOIUrl":"https://doi.org/10.19101/ijacr.2018.838006","url":null,"abstract":"The detection of objects is important in many computer vision applications. This paper proposes a reconfigurable architecture for object detection using adaptive threshold with an efficient algorithm for removal of salt and pepper noise from the colour and grayscale images. The main objective of this paper is to design an alternate architecture of object detection using adaptive threshold. In this paper, a type median filter is used to preserve the edges and to reduce the salt and pepper noise easily of the input and reference image is discussed. The pre-processed images are applied to 2D-discrete wavelet transform (2D-DWT) to remove variable illumination and to select appropriate sub-band, i.e., low-low (LL) band which contains maximum information of the original image. The modified background subtraction is used to remove the background from LL band of input and reference images to obtain a foreground image. The detected object is fed to median filter to remove any small amounts of noise which is still present in the image. The modified decision based partially trimmed global median (MDBPTGM) filter was used to give better results in terms of mean square error (MSE), peak signal to noise ratio (PSNR) and image enhancement factor (IEF). Hardware parameters such as slice registers and flip-flop pairs, latches, lookup table (LUT), shift registers and memory usage were compared with the existing techniques. Propose architecture used less number of hardware parameters. It means the proposed design reduces power and the area usage in comparison to the other techniques.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121094444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent advances in energy efficient-QoS aware MAC protocols for wireless sensor network
Pub Date : 2018-09-01 DOI: 10.19101/IJACR.2018.837016
Bashir A. Muzakkari, M. A. Mohamed, M. F. A. Kadir, Zarina Mohamad, N. Jamil
A wireless sensor network (WSN) is a distribution of several tiny, low-cost sensor nodes, connected wirelessly for the purpose of monitoring physical or environmental conditions. Due to the vast interest in WSNs, rapid technological breakthroughs have been observed in sensor elements such as the processor, operating system, radio, and battery. From the perspective of the seven-layer approach, the medium access control (MAC) protocol is identified as the most crucial element, being responsible for coordinating communication amongst the sensor nodes. In addition, the functionality of the WSN MAC protocol has a subtle influence on parameters such as battery consumption, packet collision, network lifetime, and latency. In this paper, we survey some of the most recent contention-based, scheduling-based, and hybrid WSN MAC protocols, focusing on their underlying principles, their various advantages and limitations, and their applications. Treating energy saving as the benchmark, we further examine how quality of service (QoS) performance metrics are treated within these particular protocols. The results show that the majority of the protocols lean towards energy conservation, with other parameters either partially supported or traded off. Latency, throughput, bandwidth utilization, and channel utilization are not considered in the design of most of the protocols. Indeed, the energy domain has seen a vital breakthrough with the advent of other modes of energy saving such as energy harvesting techniques. However, the other parameters that come under QoS metrics, such as latency, throughput, packet loss, and network and bandwidth availability, also play a critical role in the future development of MAC protocols for WSNs.
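To see why energy conservation dominates these designs, a back-of-the-envelope duty-cycle model is enough: the radio's current draw in each state, weighted by the time spent in that state, sets the node lifetime. The sketch below uses illustrative current and battery figures, not measurements of any surveyed protocol.

```python
# A back-of-the-envelope model of why duty-cycled MAC protocols dominate:
# average current = time-weighted current across radio states. All numbers
# are illustrative assumptions, not measurements of any protocol.

def node_lifetime_days(duty_cycle: float,
                       battery_mah: float = 2000.0,
                       i_active_ma: float = 20.0,   # radio on (rx/tx)
                       i_sleep_ma: float = 0.02) -> float:
    """Estimated lifetime of a node whose radio is on `duty_cycle` of the
    time and asleep otherwise."""
    avg_current = duty_cycle * i_active_ma + (1 - duty_cycle) * i_sleep_ma
    return battery_mah / avg_current / 24.0

for dc in (1.0, 0.1, 0.01):
    print(f"duty cycle {dc:5.0%}: ~{node_lifetime_days(dc):7.1f} days")
```

The same model also exposes the QoS trade-off the survey highlights: a lower duty cycle stretches lifetime by orders of magnitude, but every sleep interval a packet must wait through adds latency.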
{"title":"Recent advances in energy efficient-QoS aware MAC protocols for wireless sensor network","authors":"Bashir A. Muzakkari, M. A. Mohamed, M. F. A. Kadir, Zarina Mohamad, N. Jamil","doi":"10.19101/IJACR.2018.837016","DOIUrl":"https://doi.org/10.19101/IJACR.2018.837016","url":null,"abstract":"Wireless sensor networks (WSNs) is a distribution of several tiny, low-cost sensor nodes, wirelessly connected altogether for the purpose of monitoring physical or environmental conditions. Due to the vast interest for WSN, a rapid technological breakthrough has been observed in sensor elements such as processor, operating system, radio, and battery. From the perspective of seven layer approach, the medium access control (MAC) protocols are identified as the most crucial element, being responsible for coordinating communication amongst the sensor nodes. In addition, the functionality of the WSN MAC protocol has a subtle influence on parameters such as battery consumption, packet collision, network lifetime and latency. In this paper, we survey some of the most recent WSN contention-based, scheduling-based, and hybrid MAC protocols by focusing on their underlying principle, various advantages and limitations and their applications. Treating energy saving as the benchmark, further examining the directed towards the treatment of quality of service (QoS) performance metrics within these particular protocols. The result shows that the majority of the protocols leaned towards energy conservation with other parameters are either supported partially or traded off. Latency, throughput, bandwidth utilization, channel utilization is not considered in the design of most of the protocols. Indeed, the energy domain has gotten a vital breakthrough with the advent of other modes of energy saving such as energy harvesting techniques. However, other parameters such as latency, throughput, packet loss, network and bandwidth availability that comes under QoS metrics also play a critical role in future development of MAC protocols for WSNs.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128683460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient business intelligence (BI) model based on green IT and balanced scorecard (BSC)
Pub Date : 2018-07-31 DOI: 10.19101/IJACR.2018.837004
Abderrazak Bakkas, A. E. Manouar
Many companies are aware of the need to integrate environmental and social practices into their overall strategy. This paper presents a new business intelligence (BI) model based on green IT and the balanced scorecard (BSC). It enables decision-makers to integrate societal and environmental concerns into the decision-making process, as well as to monitor the company's environmental performance and its interaction with customers, suppliers, and employees. The model gives a new vision for business intelligence that does not focus on economic aspects alone, but also takes into account social, moral, and environmental considerations. The aim of this paper is to design an efficient BI model that introduces and standardizes key performance indicators of corporate social responsibility using the four perspectives of the BSC, thereby contributing effectively to green IT. The proposed model constitutes a new generation of BI model: the "Green BSC BI".
{"title":"An efficient business intelligence (BI) model based on green IT and balanced scorecard (BSC)","authors":"Abderrazak Bakkas, A. E. Manouar","doi":"10.19101/IJACR.2018.837004","DOIUrl":"https://doi.org/10.19101/IJACR.2018.837004","url":null,"abstract":"Many companies are aware of the need to integrate environmental and social practices into their overall strategy. This paper presents a new business intelligence (BI) model based on green IT and balanced scorecard (BSC). This will enable decision-makers to integrate societal and environmental concerns into the decision-making process as well as monitor the company's environmental performance and their interaction with customers, suppliers and employees. The model gives a new vision for business intelligence not just to focus on the economic aspects, but also to take into account the social, moral and environmental considerations. The aim of this paper is to design efficient BI to introduce and standardize key performance indicators of corporate social responsibility using the four perspectives of the BSC to contribute effectively the green IT. The proposed model gives a new generation model of the BI that is “Green BSC BI”.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114951557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient reconfigurable architecture for advanced orthogonal frequency division multiplexing (AOFDM) transmitter
Pub Date : 2018-07-31 DOI: 10.19101/ijacr.2018.837020
J. Shalini, Y. Manjunatha
Orthogonal frequency division multiplexing (OFDM) is most commonly used in communication settings where a large amount of data needs to be transmitted through a wired or wireless channel. The main applications of OFDM lie in wireless networks, the Internet, and digital video/audio broadcasting. The data need to be divided over a number of orthogonal channels to minimize interference between transmission channels, a division commonly performed using analog circuitry; however, that method is less stable and bulky. In this paper, an efficient reconfigurable architecture for advanced OFDM transmission is proposed. The architecture consists of 31 subcarrier channels; the OFDM system uses a 64-point modified coordinate rotation digital computer (CORDIC) based inverse fast Fourier transform (IFFT), and a novel 4-point quadrature amplitude modulation (QAM) is used to modulate each channel. The input data is converted from serial to parallel, encoded using Hermitian symmetry and a cyclic prefix, and converted back from parallel to serial. The comparison results show that the proposed architecture outperforms existing ones in terms of hardware utilization: the proposed OFDM transmitter requires almost 35% fewer hardware resources than existing techniques.
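For readers unfamiliar with this transmit chain, the NumPy sketch below reproduces its generic textbook form: 4-QAM mapping, Hermitian-symmetric subcarrier placement (which makes the 64-point IFFT output real-valued), and a cyclic prefix. The 31-subcarrier/64-point sizes follow the paper, but the Gray mapping and cyclic-prefix length are assumptions, and the authors' modified CORDIC IFFT is not modelled.

```python
# A minimal NumPy sketch of an OFDM transmit chain: 4-QAM mapping,
# Hermitian-symmetric subcarrier placement, 64-point IFFT, cyclic prefix.
import numpy as np

N_FFT, N_SUB, CP_LEN = 64, 31, 16
QAM4 = {0: 1 + 1j, 1: -1 + 1j, 2: -1 - 1j, 3: 1 - 1j}  # Gray-coded 4-QAM

def ofdm_symbol(bits: np.ndarray) -> np.ndarray:
    # Map bit pairs to 4-QAM constellation points.
    symbols = np.array([QAM4[(b1 << 1) | b0]
                        for b1, b0 in bits.reshape(-1, 2)])
    spectrum = np.zeros(N_FFT, dtype=complex)
    spectrum[1:N_SUB + 1] = symbols                 # positive-frequency bins
    spectrum[-N_SUB:] = np.conj(symbols[::-1])      # Hermitian symmetry
    time = np.fft.ifft(spectrum).real               # real baseband signal
    return np.concatenate([time[-CP_LEN:], time])   # prepend cyclic prefix

bits = np.random.randint(0, 2, size=2 * N_SUB)      # 2 bits per subcarrier
tx = ofdm_symbol(bits)
print(len(tx), "samples per OFDM symbol")
```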
{"title":"Efficient reconfigurable architecture for advanced orthogonal frequency division multiplexing (AOFDM) transmitter","authors":"J. Shalini, Y. Manjunatha","doi":"10.19101/ijacr.2018.837020","DOIUrl":"https://doi.org/10.19101/ijacr.2018.837020","url":null,"abstract":"and also describes the working of the general wireless communication system using the Abstract The orthogonal frequency division multiplexing (OFDM) is most commonly used in the area of communication where a large amount of data needs to be transmitted through a wired or wireless channel. The main application OFDM lies in wireless network, internet model and digital video/audio broadcasting. Data need to be divided over a number of orthogonal channels to minimize the interference between each transmission channel and commonly performed using analog circuitry method. However, this method is less stable and bulky. In this paper, an efficient reconfigurable architecture for advanced OFDM transmission has been proposed. The architecture consists of 31 subcarrier channel, OFDM system using 64 point modified coordinate rotation digital computer (CORDIC) based inverse fast Fourier transform (IFFT) and novel 4 point quadrature amplitude modulation (QAM) is used for modulations of each channel. The input data is converted from serial to parallel, encoded using Hermitian symmetry, cyclic prefix, and converted serial to parallel. The comparison result shows that the proposed architecture is better than existing in terms of hardware utilizations. The proposed OFDM transmitter requires almost 35% lesser hardware resources with respect to existing techniques.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129872334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A metaheuristic for solving flowshop problem
Pub Date : 2018-07-31 DOI: 10.19101/ijacr.2018.835001
P. B. Shola, Asaju La'aro Bolaji
Discrete optimization is a class of computationally expensive problems that are of practical interest and have consequently attracted the attention of many researchers over the years. Yet no single method has been found that can solve all instances of the problem. The no free lunch theorem, which confirms that no such general method (one that can solve all instances) can be found, has confined research activity to developing methods for specific classes of problem instances. In this paper an algorithm for solving discrete optimization problems is presented. The algorithm is obtained from a hybrid continuous optimization algorithm using a technique devised by Clerc for particle swarm optimization (PSO), in which the addition, subtraction, and multiplication operators are redefined to support a discrete domain. The effectiveness of the algorithm was investigated on the flowshop problem, using the makespan as the performance measure and the Taillard benchmark problem instances as the dataset. The results of the investigation are presented in this paper and compared with those from some existing algorithms, including the genetic algorithm (GA), ant colony optimization (ACO), simulated annealing (SA), and the firefly and cockroach algorithms. Based on the experimental results, the algorithm is proposed as a competitive and viable alternative for solving flowshop problems, and possibly discrete optimization problems in general.
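As a reference point for the objective being optimized, the sketch below computes the makespan of a permutation flowshop schedule using the standard completion-time recurrence; the processing-time matrix is a toy example, not a Taillard instance, and the code shows only the evaluation function, not the hybrid PSO itself.

```python
# Makespan of a permutation flowshop schedule, via the standard recurrence
# C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job][m], computed in place.
import numpy as np

def makespan(permutation, proc_times: np.ndarray) -> float:
    """proc_times[job, machine] = processing time of `job` on `machine`."""
    n_machines = proc_times.shape[1]
    completion = np.zeros(n_machines)  # completion[m]: last job done on m
    for job in permutation:
        for m in range(n_machines):
            # Job starts when both machine m and the job's previous
            # operation (on machine m-1) are free.
            earliest = max(completion[m], completion[m - 1] if m else 0.0)
            completion[m] = earliest + proc_times[job, m]
    return completion[-1]

p = np.array([[3, 2, 4],   # job 0 on machines 0..2
              [2, 5, 1],
              [4, 1, 3]])
print(makespan([0, 1, 2], p), makespan([2, 0, 1], p))
```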
{"title":"A metaheuristic for solving flowshop problem","authors":"P. B. Shola, Asaju La'aro Bolaji","doi":"10.19101/ijacr.2018.835001","DOIUrl":"https://doi.org/10.19101/ijacr.2018.835001","url":null,"abstract":"Discrete optimization is a class of computational expensive problems that are of practical interest and consequently have attracted the attention of many researchers over the years. Yet no single method has been found that could solve all instances of the problem. The no free launch theorem that confirms that no such general method (that can solve all the instances) could be found, has limited research activities in developing method for a specific class of instances of the problem. In this paper an algorithm for solving discrete optimization is presented. The algorithm is obtained from a hybrid continuous optimization algorithm using a technique devised by Clerc for particle swarm optimization (PSO). In the method, the addition, subtraction and multiplication operators are redefined to support discrete domain. The effectiveness of the algorithm was investigated on the flowshop problem using the makespan as the performance measure and the Taillard benchmark problem instances as the dataset. The result of the investigation is presented in this paper and compared with those from some existing algorithms, including genetic algorithm (GA), ant colony optimization (ACO), simulated annealing (SA), firefly and cockroach algorithms. Based on the experimental results, the algorithm is proposed as a competitive or a viable alternative for solving flowshop problems and possibly discrete optimization problems in general.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116617489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient k-means algorithm for the cluster head selection based on SAW and WPM
Pub Date : 2018-07-31 DOI: 10.19101/IJACR.2018.836022
A. Khandelwal, Y. K. Jain
A wireless sensor network (WSN) offers the aggregation of data for communication and processing in the exterior area or at the base station. The main purpose of this study was to efficiently select the cluster heads (CHs) and carry out the synchronous data sink operation for efficient energy and time utilization. An efficient approach based on the k-means algorithm for cluster head selection is proposed. It also includes simple additive weighting (SAW) and the weighted product method (WPM) to prioritize the data sink operation through decision-performance ranking. In this approach, weights are assigned and pre-processed on the basis of the node operations or the attribute values. These values are used for clustering the nodes, with k-means applied for the clustering. The resulting data are then processed with the decision-performance ranking methods: SAW and WPM are used for the selection of CHs from the clusters. The variations between the SAW and WPM results are minor, and both approaches are efficient in providing a proper CH selection from the obtained clusters. The result of the random selection priority scale also yields an energy-efficient system. The proposed approach results in less delay in packet delivery and offers efficient energy consumption in contrast to the traditional method.
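The two ranking methods named in the title are easy to state in code. In the sketch below, each candidate node gets a SAW score (weighted sum of normalized attributes) and a WPM score (weighted product); the attribute columns and weights are illustrative assumptions rather than values from the paper.

```python
# SAW and WPM rankings over candidate cluster-head nodes. Attribute values
# are assumed already normalized to [0, 1]; columns and weights are
# illustrative assumptions.
import numpy as np

# rows: candidate nodes; columns: residual energy, proximity, link quality
scores = np.array([[0.9, 0.6, 0.8],
                   [0.7, 0.9, 0.6],
                   [0.8, 0.7, 0.9]])
weights = np.array([0.5, 0.2, 0.3])        # must sum to 1

saw = scores @ weights                     # simple additive weighting
wpm = np.prod(scores ** weights, axis=1)   # weighted product method

print("SAW ranking:", np.argsort(-saw))    # best cluster-head candidate first
print("WPM ranking:", np.argsort(-wpm))
```

Because WPM multiplies rather than adds, it punishes a node that is very weak on any single attribute more harshly than SAW, which is one reason the two rankings can differ slightly even on the same data.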
{"title":"An efficient k-means algorithm for the cluster head selection based on SAW and WPM","authors":"A. Khandelwal, Y. K. Jain","doi":"10.19101/IJACR.2018.836022","DOIUrl":"https://doi.org/10.19101/IJACR.2018.836022","url":null,"abstract":"A wireless sensor network (WSN) offers the aggregation of data for the communication and processing in the exterior area or the base station. The main purpose of this study was to efficiently select the cluster heads (CHs) and carry out the synchronous data sink operation for the efficient energy and time utilization. An efficient approach based on the k-means algorithm for the cluster head selection has been proposed. It also includes simple additive weighting (SAW) and weighted product method (WPM) for the data sink operation priority by the decision performance ranking. In this approach, weights are assigned and pre-processed on the basis of the node operations or the attribute values. These values are used for clustering of the nodes. K-means have been applied for the clustering. The resultant data are then processed with the decision performance ranking methods. We have used SAW and WPM for the selection of CHs from the clusters. The variations in SAW and WPM results are minor and these approaches are efficient in providing the proper CHs selection from the obtained clusters. The result of the random selection priority scale also offers an energy efficient system. The proposed approach results in less delay in packet delivery and offers efficient energy consumption in contrast to the traditional method.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123131200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A learner model based on multi-entity Bayesian networks and artificial intelligence in adaptive hypermedia educational systems
Pub Date : 2018-07-31 DOI: 10.19101/IJACR.2018.836020
M. A. Tadlaoui, Rommel N. Carvalho, Mohamed Khaldi
The aim of this paper is to present a probabilistic and dynamic learner model for adaptive hypermedia educational systems based on multi-entity Bayesian networks (MEBN) and artificial intelligence. There are several methods and models for modelling the learner in adaptive hypermedia educational systems, but they are based on the initial profile of the learner created on entry into the learning situation. They do not handle the uncertainty in the dynamic modelling of the learner based on the learner's actions. The main hypothesis of this paper is the management of the learner model based on MEBN and artificial intelligence, taking into account the different actions that the learner could take during his or her whole learning path. In this paper, the use of the notion of fragments and of MEBN theory (MTheory) to arrive at a multi-entity Bayesian network is proposed. This Bayesian method can handle the whole course of a learner, as well as all of the learner's actions, in an adaptive educational hypermedia system. The approach followed in this paper begins by modelling the learner model at three levels: we start with the conceptual modelling level using the unified modelling language, followed by the model based on Bayesian networks, so as to achieve probabilistic modelling in the three phases of learner modelling.
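MEBN reasoning proper requires a dedicated engine (e.g., UnBBayes), so as a minimal stand-in the sketch below shows the kind of per-action Bayesian update such a dynamic learner model performs: the probability that a concept is mastered is revised after every observed answer. The slip and guess probabilities, and the single-node simplification, are assumptions for illustration only.

```python
# A single-node Bayesian update illustrating dynamic learner modelling:
# P(mastered) is revised after each observed answer. Slip/guess values
# are illustrative assumptions, not parameters from the paper.

def update_mastery(p_mastered: float, correct: bool,
                   p_slip: float = 0.1, p_guess: float = 0.2) -> float:
    """Posterior probability that the learner has mastered the concept."""
    if correct:
        lik_mastered, lik_not = 1 - p_slip, p_guess
    else:
        lik_mastered, lik_not = p_slip, 1 - p_guess
    evidence = lik_mastered * p_mastered + lik_not * (1 - p_mastered)
    return lik_mastered * p_mastered / evidence

p = 0.3                                    # prior from the initial profile
for answer in (True, True, False, True):   # observed learner actions
    p = update_mastery(p, answer)
    print(f"after {'correct' if answer else 'wrong'} answer: P(mastered)={p:.2f}")
```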
{"title":"A learner model based on multi-entity Bayesian networks and artificial intelligence in adaptive hypermedia educational systems","authors":"M. A. Tadlaoui, Rommel N. Carvalho, Mohamed Khaldi","doi":"10.19101/IJACR.2018.836020","DOIUrl":"https://doi.org/10.19101/IJACR.2018.836020","url":null,"abstract":"The aim of this paper is to present a probabilistic and dynamic learner model in adaptive hypermedia educational systems based on multi-entity Bayesian networks (MEBN) and artificial intelligence. There are several methods and models for modelling the learner in adaptive hypermedia educational systems, but they’re based on the initial profile of the learner created in his entry into the learning situation. They do not handle the uncertainty in the dynamic modelling of the learner based on the actions of the learner. The main hypothesis of this paper is the management of the learner model based on MEBN and artificial intelligence, taking into accounts the different action that the learner could take during his/her whole learning path. In this paper, the use of the notion of fragments and MEBN theory (MTheory) to lead to a Bayesian multi-entity network has been proposed. The use of this Bayesian method can handle the whole course of a learner as well as all of its shares in an adaptive educational hypermedia. The approach that we followed during this paper is marked initially by modelling the learner model in three levels: we started with the conceptual level of modelling with the unified modelling language, followed by the model based on Bayesian networks to be able to achieve probabilistic modelling in the three phases of learner modelling.","PeriodicalId":273530,"journal":{"name":"International Journal of Advanced Computer Research","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124159909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}