Intrusion detection is of paramount importance in network security. Its effectiveness depends on energy dissipation, while trust remains a challenging factor. In this paper, a trust-aware scheme is proposed to detect intrusions in Mobile Ad Hoc Networks (MANETs). The proposed method uses Piecewise Fuzzy C-Means Clustering (pifCM) and fuzzy Naive Bayes (fuzzy NB) for intrusion detection in the network. The pifCM determines the cluster heads from the clusters. After the cluster heads are selected, intrusions in the network are detected using fuzzy Naive Bayes with the help of a node trust table. The node trust table holds the updated trust factors of all nodes, and intruder nodes are identified from it. Once detected, the intruder nodes are eliminated, which reduces the transmission delay. The effectiveness of the proposed method is analyzed using metrics such as throughput, detection rate, delay, and energy: the proposed method achieves a delay of 0.003, energy dissipation of 0.657, a detection rate of 9.85, and a throughput of 0.659.
{"title":"Intrusion Detection Based on Piecewise Fuzzy C-Means Clustering and Fuzzy Naïve Bayes Rule","authors":"N. Veeraiah","doi":"10.46253/j.mr.v1i1.a4","DOIUrl":"https://doi.org/10.46253/j.mr.v1i1.a4","url":null,"abstract":"Intrusion detection has paramount importance in network security. Intrusion detection depends on energy dissipation, whereas trust remains a hectic factor. In this paper, a trust-aware scheme is proposed to detect intrusion in Mobile Ad Hoc Networking (MANET). The proposed method uses Piecewise Fuzzy C-Means Clustering (pifCM) and fuzzy Naive Bayes (fuzzy NB) for the intrusion detection in the network. The pifCM helps to determine the cluster heads from the clusters. After the selection of cluster heads, the intrusion in the network is determined using fuzzy Naive Bayes with the help of node trust table. The node trust table contains the updated trust factors of all the nodes and the presence of intruded nodes are found with the help of the trust table. After the intrusion is detected, they are eliminated and this reduces the delay in transmission. The effectiveness of the proposed method is analyzed based on the metrics, such as throughput, detection rate, delay, and energy. The proposed method has the delay at the rate of 0.003, energy dissipation of 0.657, the detection rate of 9.85, and throughput of 0.659.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115503277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
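The abstract does not detail the pifCM variant itself, but the cluster-head step it describes can be illustrated with standard fuzzy C-means: cluster node features, then nominate, per cluster, the node with the highest membership degree as cluster head. This is a minimal sketch under those assumptions, not the paper's exact pifCM; all function names and the head-selection rule are illustrative.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: returns membership matrix U and cluster centers."""
    rng = np.random.default_rng(seed)
    n = len(points)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ points) / Um.sum(axis=0)[:, None]
        # distance of every point to every center (small epsilon avoids /0)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

def cluster_heads(points, U):
    """Illustrative head selection: per cluster, the member with highest membership."""
    labels = U.argmax(axis=1)
    heads = {}
    for c in range(U.shape[1]):
        members = np.where(labels == c)[0]
        if members.size:
            heads[c] = int(members[U[members, c].argmax()])
    return heads
```

In a MANET setting the `points` would be per-node features (e.g. residual energy, connectivity); which features pifCM actually clusters is not stated in the abstract.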
Information is growing rapidly in online documents and social media in all languages, and retrieving it is a high-level task. Information Retrieval (IR) has therefore become increasingly important in research and commercial development, yet only a few retrieval tools are currently available on the market. Each language has its own pronunciation and structure, and Arabic in particular has a complex morphology, which has hindered progress in this field; a practical IR model must recognize similar words during the matching process. In this paper, we present a comparative study of recent approaches to Arabic Information Retrieval. We implemented and compared the existing Arabic IR approaches on Arabic datasets. We also introduce a dictionary-based Arabic lemmatizer, built from Arabic words collected from several Arabic books and websites, and compare the performance of different lemmatization techniques. We then conduct a series of experiments comparing different approaches to Arabic IR, and examine the performance of Arabic BERT against the existing approaches. The experimental results show that BM25 and multilingual BERT rank highest across tasks, and an accuracy of 89% is achieved on the Large Arabic Dataset.
{"title":"BERT Representation for Arabic Information Retrieval: A Comparative Study","authors":"Moulay Abdellah, Kassimi, Abdessalam Essayad","doi":"10.46253/j.mr.v6i3.a1","DOIUrl":"https://doi.org/10.46253/j.mr.v6i3.a1","url":null,"abstract":": Information is rapidly growing in online documents and social media in all languages. Retrieval of information from a language is a high-level task. However, Information Retrieval has become more important in research and commercial development. Presently only a few tools were available in the market for retrieval. Each language has its unique way of pronunciation and language structure. Arabic has a complex morphology. This made it difficult in the advancement of this field. A typical IR model is required to understand similar words in the matching process. In this paper, we presented a comparative study on recent approaches in Arabic Information Retrieval. We implemented and compared all existing approaches for Arabic IR with Arabic datasets. The information retrieval used an Arabic dataset. We also introduced a dictionary, an Arabic Lemmatizer.It contains Arabic words collected from several Arabic books and websites. We compare the performance of different lemmatization techniques. Then we conduct a series of experiments to compare different approaches to Arabic IR. Furthermore, Arabic BERT examined the superior performance with the existing approach's performance. The experimental result showed BM25 and multilingual BERT ranked most for tasks. 
The Large Arabic Dataset scored an accuracy of 89% in information retrieval.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131062315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
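As background for the BM25 baseline that the study ranks highly, here is a minimal self-contained BM25 scorer over pre-tokenized documents. The parameter defaults `k1=1.5`, `b=0.75` are common choices from the IR literature, not values reported by the paper.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query_terms` with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

For Arabic, the tokens passed in would first go through the lemmatization step the paper compares; this sketch assumes tokenization has already been done.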
This paper aims to introduce an improved model for Diabetic Retinopathy (DR) recognition. The proposed model is executed in two stages: blood vessel segmentation followed by DR recognition. In vessel segmentation, two thresholded binary images are obtained using top-hat by reconstruction of the red portions in the green-plane image. The regions common to the two binary images are extracted as the main vessels. The residual pixels of both binary images are then combined to form a vessel sub-image, which is fed to a Gaussian Mixture Model (GMM) classifier. The pixels of the sub-image classified as vessels are merged with the main vessels to obtain the segmented vasculature. From the segmented blood vessels, GLRM and Gray-Level Co-occurrence Matrix (GLCM) features are extracted and subsequently classified using a Neural Network. To improve accuracy, training is performed with the Enhanced Crow Search with Levy Flight (ECS-LF) algorithm, minimizing the error between the actual and predicted outputs.
{"title":"Diabetic Retinopathy Recognition using Enhanced Crow Search with Levy Flight Algorithm","authors":"A. T. Nair","doi":"10.46253/j.mr.v2i4.a5","DOIUrl":"https://doi.org/10.46253/j.mr.v2i4.a5","url":null,"abstract":"This paper aims to introduce an improved model for Diabetic Recognition (DR) recognition. Accordingly, the proposed model is executed under two stages, the initial one is the blood vessel segmentation and next step is the DR recognition. Using tophat by reconstruction of red portions in the green plane image, the two thresholds binary images are obtained in vessel segmentation. The areas that are found similar to two binary images are extracted as the main vessels. Additionally, the residual pixels in both the binary images are integrated in order to form a vessel sub-image i.e. facilitated to a classification of Gaussian Mixture Model (GMM). As a result, the complete pixels in the sub-image that are classified as vessels are amalgamated with the main vessels to obtain the segmented vasculature. Moreover, from the segmented blood vessel, the extraction of GLRM and Gray-Level Co-Occurrence Matrix (GLCM) features is performed that are subsequently classified by exploiting Neural Network. To enhance the accurateness, training is performed using Enhanced Crow Search with Levy Flight (ECS-LF) algorithm, so the error among actual output and predicted must be least.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116011676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
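The GLCM features mentioned above can be illustrated with a short sketch: build a normalized co-occurrence matrix for one pixel offset, then derive the standard contrast, energy, and homogeneity statistics. The offset and level count here are illustrative assumptions; the paper's exact feature set and offsets are not specified in the abstract.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for pixel offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    """Three classic Haralick-style statistics computed from a normalized GLCM."""
    i, j = np.indices(g.shape)
    return {
        "contrast": float(((i - j) ** 2 * g).sum()),
        "energy": float((g ** 2).sum()),
        "homogeneity": float((g / (1.0 + np.abs(i - j))).sum()),
    }
```

In practice several offsets/angles are averaged; a constant image yields zero contrast and maximal energy and homogeneity, which is a quick sanity check for an implementation.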
{"title":"Mapping Potential Fishing Zones Using Remote Sensing Data and GIS: A Case Study of Moroccan Waters","authors":"Younes Oulad Sayad","doi":"10.46253/j.mr.v6i2.a1","DOIUrl":"https://doi.org/10.46253/j.mr.v6i2.a1","url":null,"abstract":"","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122097705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Binary Artificial Fish Swarm Algorithm for Diagnosis of Thyroid Disease","authors":"","doi":"10.46253/j.mr.v5i1.a5","DOIUrl":"https://doi.org/10.46253/j.mr.v5i1.a5","url":null,"abstract":"","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128602765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soil fertility plays a vital role in agricultural production for smallholder farmers in Shebedno District. Fertile soil provides nutritious produce, healthy pest-free plants, high yield and profit, and improves farmers' livelihoods. In low-income countries, less fertile soil results in low production and food scarcity, which leads to poverty; to reverse this situation, inorganic fertilizers have been introduced to improve land productivity. The impact of this intervention depends on various socio-economic factors. This study examines the factors that influence fertilizer adoption in crop production in Shebedno District, Sidama Region. Data were collected from primary sources during the cropping season. A purposive sampling procedure was used to select seven kebeles, with a total of 121 respondents drawn from them. Descriptive and inferential statistics describe the socio-economic and institutional characteristics of the respondents, expressed as percentages for both fertilizer-adopting and non-adopting farmers. A logit model was employed to identify the factors influencing fertilizer adoption in crop production. The regression results revealed that three explanatory variables, namely extension services, distance from the credit office, and family size, are statistically significant in affecting crop productivity. Therefore, contact with extension agents, distance from the credit office, and family size are important areas for future intervention strategies aimed at sustainable development in the agricultural sector and increased agricultural production.
{"title":"Determinants of Fertilizer Adoption in Crop Production; a Case Study in Shebedneo District, Sidama Region, Ethiopia","authors":"Birhan Densisa","doi":"10.46253/j.mr.v6i3.a5","DOIUrl":"https://doi.org/10.46253/j.mr.v6i3.a5","url":null,"abstract":": Soil fertility plays a vital role in the agricultural production for small holder farmers in Shebedno District. The fertile soil can provide nutritious production, healthy plants without pest, high yield and profit, and improves farmers livelihood. To inverse this situation, inorganic fertilizers are introduced to improve the land productivity. However, the low-income countries with less fertile soil results in low-production and food scarcity that leads to poverty. This impacts towards improvement depends on various socio-economic factors. This study examines the factor that influences fertilizer adoption on crop production in the case of Shebedno District Sidama Region. Here the data was collected from the primary sources in the cropping season. A Purposive sampling procedure is adapted to select seven kebeles and a total of 121 respondents from seven kebeles. Descriptive and inferential statistics are used to describe the socioeconomic and institutional characteristics of the respondent through percentages in both fertilizer-adopted and non-fertilizer-adopted farmers. Logit models were employed to identify factors influencing fertilizer adoption on crop production Regression results revealed that three explanatory variables such as extension services distance from credit office and family size are statically significant in affecting crop productivity. 
Therefore, contact with the extension agent, distance from the credit office, and family size are some of the important areas for the successful future intervention strategies aimed for the sustainable development in the agricultural sector and to increase the agricultural production .","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127015288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
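The logit model used in the study can be sketched as plain logistic regression, fitted here by gradient descent on a binary adoption indicator. The toy fitting routine and variable layout are illustrative assumptions; the study presumably estimated the model with a statistical package and reports significance tests this sketch does not compute.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, n_iter=2000):
    """Fit logistic regression (adoption = 1, non-adoption = 0) by gradient descent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted adoption probability
        w -= lr * Xb.T @ (p - y) / len(y)         # gradient of the log-loss
    return w

def predict_proba(X, w):
    """Predicted probability of adoption for new observations."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

Columns of `X` would correspond to the study's explanatory variables (extension contact, distance from the credit office, family size, etc.); each fitted coefficient's sign then indicates whether the factor raises or lowers the odds of adoption.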
The traveling salesman problem (TSP) is a well-known combinatorial optimization problem in graph theory. From a computational point of view it is classified as a hard problem, belonging to the classic NP-complete class that has been studied for decades. TSP can be viewed as finding the shortest route for someone who departs from his hometown, visits each city exactly once, and returns to the hometown of departure. In ant-based approaches to the TSP, the colony coordinates through very simple interactions, and through these interactions it is known to solve very difficult problems. In this work, the Ant System (AS) algorithm is modified into the Ant Colony System (ACS) algorithm to improve its performance on larger TSP instances. The main principle of the AS algorithm is retained in ACS, namely positive feedback through pheromones: pheromone deposited along a route makes ants more likely to take that route, so the best solution ends up with a high pheromone concentration. To avoid getting trapped in local optima, negative feedback is applied in the form of pheromone evaporation. The main differences between AS and ACS are a different state-transition rule, a different global pheromone-update rule, and the addition of a local pheromone-update rule. With these modifications, the optimization results on the TSP improve, yielding the shortest route in the minimum possible time. Based on the system trials carried out, both Ant Colony System and Ant System can be applied to the traveling salesman problem; the Ant Colony System algorithm has a considerably faster search time than the Ant System algorithm.
{"title":"Optimization of the Ant Colony System Algorithm to Search for Distance and Shortest Routes on Travel Salesman Problems","authors":"Paryati","doi":"10.46253/j.mr.v6i3.a3","DOIUrl":"https://doi.org/10.46253/j.mr.v6i3.a3","url":null,"abstract":": The travel salesman problem is a combinatorial optimization problem that is very well-known in graph theory. The travel salesman problem is categorized as a difficult problem when viewed from a computational point of view. Also includes the classic \"NP-Complete\" problem because it has been studied for decades. TSP can be viewed as a matter of finding the shortest route that must be taken by someone who departs from his hometown to visit each city exactly once and then returns to his hometown of departure. In the travel salesman problem, the colony can coordinate through a very simple interaction, through this interaction, the colony is known to be able to solve very difficult problems. So, the method used to solve this TSP problem, using the Ant System algorithm is modified to the Ant Colony System Algorithm, to improve its performance on larger TSP problems. The main principle used in the AS algorithm is still used in the Ant Colony System algorithm, namely the use of positive feedback through the use of pheromones. A pheromone placed along the route is intended, so that the ants are more interested in taking that route. So that the best solution later, has a high concentration of pheromones. In order not to get trapped in the local optimal, negative feedback is used in the form of pheromone evaporation. While the main differences between the Ant System and Ant Colony System algorithms are different state transition rules, different global pheromone renewal rules, and the addition of local pheromone renewal rules. With this modification, the optimization results on the TSP obtained will be better, and get the shortest route in the minimum possible time. 
Based on the results of the system trials that have been carried out, it shows that the ant algorithm, both Ant Colony System and Ant System can be applied to the Travel Salesmen Problem. The Ant Colony System algorithm still has a faster search time than the Ant System algorithm and the difference is quite large.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130378040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
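The three AS-versus-ACS differences listed above (the pseudo-random proportional state-transition rule plus the local and global pheromone updates) can be sketched as follows. Parameter values such as q0, beta, rho, and alpha are common defaults from the ACS literature, not values taken from this paper, and the data layout (nested dicts for pheromone tau and heuristic eta) is an illustrative choice.

```python
import random

def acs_next_city(current, unvisited, tau, eta, q0=0.9, beta=2.0):
    """ACS pseudo-random proportional rule: exploit the best edge with
    probability q0, otherwise explore via a roulette wheel (as in AS)."""
    attract = {j: tau[current][j] * eta[current][j] ** beta for j in unvisited}
    if random.random() < q0:
        return max(attract, key=attract.get)      # greedy exploitation
    total = sum(attract.values())
    r, acc = random.random() * total, 0.0
    for j, a in attract.items():                  # biased exploration
        acc += a
        if acc >= r:
            return j
    return j

def local_update(tau, i, j, rho=0.1, tau0=0.01):
    """ACS local update: each ant crossing edge (i, j) decays its pheromone
    toward tau0, discouraging the colony from repeating the same route."""
    tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0

def global_update(tau, best_tour, best_len, alpha=0.1):
    """ACS global update: only the best-so-far tour's edges are reinforced,
    in proportion to the inverse of its length."""
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i][j] = (1 - alpha) * tau[i][j] + alpha / best_len
```

The local update is the negative-feedback mechanism mentioned in the abstract, while the elitist global update concentrates the positive feedback on the shortest route found so far.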
Generally, watermarking is the process of hiding a concealed message in multimedia sources such as images, video, and audio, and video watermarking concentrates on the robustness of the system more than other steganography techniques do. In this paper, a multiobjective cost function is proposed for video watermarking. First, the cover image (a video frame) is subjected to cost-function computation; the proposed cost function is modeled from various constraints, namely energy, intensity, coverage, edge, and brightness. The Haar Wavelet Transform is then applied to the original frame to obtain its wavelet coefficients. Concurrently, the concealed message is partitioned into binary images using the bit-plane technique. In the embedding phase, the message bits are embedded into the wavelet coefficients according to the cost value, and the concealed message is recovered in the extraction phase. Finally, the simulation results are examined and performance is evaluated using metrics such as Peak Signal-to-Noise Ratio (PSNR) and correlation coefficients.
{"title":"Haar Wavelet Transform and Multiobjective Cost Function for Video Watermarking","authors":"A. U. Wagdarikar","doi":"10.46253/j.mr.v2i4.a4","DOIUrl":"https://doi.org/10.46253/j.mr.v2i4.a4","url":null,"abstract":"Generally, Watermarking is the process of hiding the concealed message into multimedia sources, like image, video and audio. Video watermarking is mostly concentrated in the robustness of the system rather than other steganography. In this paper, the multiobjective cost function is proposed for video watermarking. At first, the cover image (video frame) is subjected into cost function computation. Subsequently, the cost function is recently proposed and modeled by various constraints, like energy, intensity, coverage, edge, as well as brightness. Then, the Haar Wavelet Transform is applied to the original frame, which attains a wavelet coefficient on the basis of the video frame. Concurrently, by exploiting the bit plane technique the concealed message is partitioned into binary images. In the embedding phase, the message bit is embedded into the wavelet coefficients according to the cost value. The concealed message is retrieved in the extraction phase. At last, the simulation results are examined, and performance is evaluated by exploiting metrics like Peak Signal Noise Ratio (PSNR) and correlation coefficients.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125254705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
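The one-level Haar decomposition applied to each frame can be sketched as below; embedding would then adjust selected sub-band coefficients according to the cost value. This sketch shows only the transform and its exact inverse, not the paper's embedding rule, and uses averaging (rather than orthonormal) normalization as an illustrative choice.

```python
import numpy as np

def haar2d(block):
    """One-level 2D Haar transform: returns LL, LH, HL, HH sub-bands."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # row averages
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d: perfect reconstruction of the frame block."""
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out
```

Because the inverse is exact, any bit embedded by perturbing a sub-band coefficient survives the reconstruction of the watermarked frame, which is what makes the extraction phase possible.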