This paper presents a new focused crawler that efficiently supports the Turkish language. The developed architecture is divided into multiple units: a control unit, crawler unit, link extractor unit, link sorter unit, and natural language processing unit. The crawler's units can work in parallel to process the massive number of published websites. In addition, the proposed Convolutional Neural Network (CNN) based natural language processing unit can accurately classify Turkish text and web pages. Extensive experiments using three datasets were performed to illustrate the performance of the developed approach. The first dataset contains 50,000 Turkish web pages downloaded by the developed crawler, while the other two are publicly available and consist of 28,567 and 22,431 Turkish web pages, respectively. In addition, the Vector Space Model (VSM) in general, and state-of-the-art word embedding techniques in particular, were investigated to find the most suitable representation for the Turkish language. Overall, the results indicate that the developed approach achieves good performance, robustness, and stability when processing Turkish. Bidirectional Encoder Representations from Transformers (BERT) was found to be the most appropriate embedding for building an efficient Turkish text classification system. Finally, our experiments show that the developed natural language processing unit outperforms seven state-of-the-art CNN classification systems, with an accuracy improvement of 10% over the second-best system and 47% over the lowest-performing one.
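As a rough illustration of the CNN-based classification unit described above, the sketch below applies one-dimensional convolutions over pre-computed BERT token embeddings; the layer sizes, kernel widths, and class count are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of a CNN text classifier over pre-computed BERT token
# embeddings; sizes and names are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, embed_dim=768, num_classes=2, kernel_sizes=(3, 4, 5), channels=100):
        super().__init__()
        # One 1-D convolution per kernel size, applied along the token axis.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, k) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(channels * len(kernel_sizes), num_classes)

    def forward(self, x):                     # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # -> (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)   # concatenate max-pooled feature maps
        return self.fc(self.dropout(features))

# Example: classify a batch of 8 documents of 128 BERT tokens each.
model = TextCNN()
logits = model(torch.randn(8, 128, 768))
print(logits.shape)                           # torch.Size([8, 2])
```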
{"title":"A topic-specific web crawler using deep convolutional networks","authors":"Saed Alqaraleh, Hatice Meltem Nergiz Sirin","doi":"10.34028/iajit/20/3/3","DOIUrl":"https://doi.org/10.34028/iajit/20/3/3","url":null,"abstract":"This paper presented a new focused crawler that efficiently supports the Turkish language. The developed architecture was divided into multiple units: a control unit, crawler unit, link extractor unit, link sorter unit, and natural language processing unit. The crawler's units can work in parallel to process the massive amount of published websites. Also, the proposed Convolutional Neural Network (CNN) based natural language processing unit can professionally classifying Turkish text and web pages. Extensive experiments using three datasets have been performed to illustrate the performance of the developed approach. The first dataset contains 50,000 Turkish web pages downloaded by the developed crawler, while the other two are publicly available and consist of “28,567” and “22,431” Turkish web pages, respectively. In addition, the Vector Space Model (VSM) in general and word embedding state-of-the-art techniques, in particular, were investigated to find the most suitable one for the Turkish language. Overall, results indicated that the developed approach had achieved good performance, robustness, and stability when processing the Turkish language. Also, Bidirectional Encoder Representations from Transformer (BERT) was found to be the most appropriate embedding for building an efficient Turkish language classification system. Finally, our experiments showed superior performance of the developed natural language processing unit against seven state-of-the-art CNN classification systems. Where accuracy improvement compared to the second-best is 10% and 47% compared to the lowest performance.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"8 1","pages":"310-318"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82471511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online public opinion usually spreads rapidly and widely, so a small incident can evolve into a large social crisis in a very short time and cause heavy reputational or economic losses. We propose a method to rate the crisis level of online public opinion based on a multi-level index system, so that the impact of events can be evaluated objectively. Firstly, the dissemination mechanism of online public opinion is explained from the perspective of information ecology. Based on this mechanism, evaluation indexes are selected through correlation analysis and principal component analysis. Then, a text emotion classification model is trained with deep learning to accurately quantify the emotional indexes in the index system. Finally, based on the multi-level evaluation index system and grey correlation analysis, we propose a method to rate the crisis of online public opinion. Experiments on a real-time incident show that this method can objectively evaluate the emotional tendency of Internet users and rate the crisis at different dissemination stages of online public opinion. It helps realize early warning of online public opinion crises and block their further spread in time.
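As a worked illustration of the grey correlation (grey relational) analysis step used for the final rating, the sketch below computes grey relational grades for a few dissemination stages against an ideal reference series; the index values, weights, and resolution coefficient are invented for illustration.

```python
# Minimal numpy sketch of grey relational analysis; the index matrix and
# weights are invented, not taken from the paper's index system.
import numpy as np

def grey_relational_grades(reference, alternatives, rho=0.5, weights=None):
    """reference: (n_indexes,) ideal series; alternatives: (m, n_indexes)."""
    diff = np.abs(alternatives - reference)                  # absolute differences
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)     # relational coefficients
    if weights is None:
        weights = np.full(reference.shape[0], 1.0 / reference.shape[0])
    return coeff @ weights                                   # weighted grade per alternative

# Normalized index values (e.g., spread speed, emotion intensity, reach) for
# three dissemination stages, compared against an ideal "worst-case" series.
reference = np.array([1.0, 1.0, 1.0])
stages = np.array([[0.4, 0.7, 0.5],
                   [0.8, 0.9, 0.7],
                   [0.6, 0.5, 0.9]])
print(grey_relational_grades(reference, stages))             # higher grade = closer to crisis
```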
{"title":"Rating the Crisis of Online Public Opinion Using a Multi-Level Index System","authors":"Fanqi Meng, Xixi Xiao, Jingdong Wang","doi":"10.34028/iajit/19/4/4","DOIUrl":"https://doi.org/10.34028/iajit/19/4/4","url":null,"abstract":"Online public opinion usually spreads rapidly and widely, thus a small incident probably evolves into a large social crisis in a very short time, and results in a heavy loss in credit or economic aspects. We propose a method to rate the crisis of online public opinion based on a multi-level index system to evaluate the impact of events objectively. Firstly, the dissemination mechanism of online public opinion is explained from the perspective of information ecology. According to the mechanism, some evaluation indexes are selected through correlation analysis and principal component analysis. Then, a classification model of text emotion is created via the training by deep learning to achieve the accurate quantification of the emotional indexes in the index system. Finally, based on the multi-level evaluation index system and grey correlation analysis, we propose a method to rate the crisis of online public opinion. The experiment with the real-time incident show that this method can objectively evaluate the emotional tendency of Internet users and rate the crisis in different dissemination stages of online public opinion. It is helpful to realizing the crisis warning of online public opinion and timely blocking the further spread of the crisis.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"19 1","pages":"597-608"},"PeriodicalIF":0.0,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89586111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast cancer is the most widespread cancer affecting women all over the world. Computer-aided Detection Systems (CADs) can assist radiologists in locating and classifying breast tissues as normal or abnormal; however, the final decisions are still made by the radiologist. In general, a CAD system consists of four stages: pre-processing, segmentation, feature extraction, and classification. This research work focuses on the segmentation step, where the abnormal tissues are separated from the normal tissues. Numerous approaches to mammogram segmentation have been presented in the literature. Their major limitation is that they have to test each and every pixel of the image at least once, which is computationally expensive. This work addresses the detection of microcalcifications in digital mammograms using a novel segmentation approach, Ant System based Contour Clustering (ASCC), which simulates the ants' foraging behavior. The performance of the ASCC-based segmentation algorithm is investigated on mammogram images from the Mammographic Image Analysis Society (MIAS) database.
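For readers unfamiliar with ant-system methods, the sketch below shows the generic ant-system transition rule and pheromone update that such foraging-based algorithms build on; it is not the authors' ASCC contour-clustering algorithm, and all parameters and the heuristic matrix are illustrative.

```python
# Generic ant-system building blocks (transition probability and pheromone
# update); this is the standard foraging model, not the ASCC algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # candidate pixels/nodes
pheromone = np.ones((n, n))
heuristic = rng.random((n, n)) + 0.1    # stand-in for local contrast between pixels
alpha, beta, rho, Q = 1.0, 2.0, 0.1, 1.0

def next_node(current, visited):
    """Choose the next node with the classic ant-system probability rule."""
    weights = (pheromone[current] ** alpha) * (heuristic[current] ** beta)
    weights[list(visited)] = 0.0
    return rng.choice(n, p=weights / weights.sum())

def deposit(tour, length):
    """Evaporate pheromone, then reinforce the edges of a completed tour."""
    global pheromone
    pheromone *= (1.0 - rho)
    for a, b in zip(tour, tour[1:]):
        pheromone[a, b] += Q / length

tour, visited = [0], {0}
while len(tour) < n:
    nxt = next_node(tour[-1], visited)
    tour.append(nxt)
    visited.add(nxt)
deposit(tour, length=len(tour))
print(tour)
```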
{"title":"Segmentation of mammogram abnormalities using ant system based contour clustering algorithm","authors":"S. Subramanian, G. R. Thevar","doi":"10.37896/pd91.4/91426","DOIUrl":"https://doi.org/10.37896/pd91.4/91426","url":null,"abstract":"Breast cancer is the most widespread cancer that affects females all over the world. The Computer-aided Detection Systems (CADs) could assist radiologists’ in locating and classifying the breast tissues into normal and abnormal, however the absolute decisions are still made by the radiologist. In general, CAD system consists of four stages: Pre-processing, segmentation, feature extraction, and classification. This research work focuses on the segmentation step, where the abnormal tissues are segmented from the normal tissues. There are numerous approaches presented in the literature for mammogram segmentation. The major limitation of these methods is that they have to test each and every pixel of the image at least once, which is computationally expensive. This research work focuses on detection of microcalcifications from the digital mammograms using a novel segmentation approach based on novel Ant Clustering approach called Ant System based Contour Clustering (ASCC) that simulates the ants’ foraging behavior. The performance of the ASCC based segmentation algorithm is investigated with the mammogram images received from Mammographic Image Analysis Society (MIAS) database.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"1303 1","pages":"319-330"},"PeriodicalIF":0.0,"publicationDate":"2022-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74145788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iris recognition has become an important approach to identity recognition due to its uniqueness, stability, non-invasiveness, and other advantages. In this paper, an improved iris localization method is presented. To locate the iris inner boundary, a method based on morphological operations with multi-structural elements is proposed: the iris image is first pre-processed, the circular connected region in the pre-processed image is then determined and its parameters are extracted, and finally the center and radius of this circular connected region are obtained, i.e., the iris inner boundary is found. To locate the iris outer boundary, a method based on an annular region and an improved Hough transform is proposed: the iris image is first filtered, the filtered image is then reduced and an annular region is intercepted, and finally the Hough transform is used to search for the circle within the annular region, i.e., the center and radius of the iris outer boundary are obtained. The experimental results show that the localization accuracy of the proposed method is at least 95% and the average running speed is improved by 46.2% or more. Therefore, the proposed method has the advantages of high speed, high accuracy, strong robustness, and practicability.
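A rough OpenCV sketch of the two localization steps described above is given below: morphological extraction of the pupil for the inner boundary, followed by a Hough-transform search in a restricted radius range for the outer boundary. The input path, thresholds, kernel size, and radius limits are placeholders, not the paper's values.

```python
# Rough OpenCV sketch: morphology-based pupil extraction, then HoughCircles
# for the outer (limbus) boundary; parameters are placeholders.
import cv2
import numpy as np

img = cv2.imread("iris.bmp", cv2.IMREAD_GRAYSCALE)      # hypothetical input path

# Inner boundary: threshold the dark pupil, clean it up with morphology,
# then take the largest connected component as the pupil region.
_, pupil = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
pupil = cv2.morphologyEx(pupil, cv2.MORPH_OPEN, kernel)
pupil = cv2.morphologyEx(pupil, cv2.MORPH_CLOSE, kernel)
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(pupil)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # skip background label 0
cx, cy = centroids[largest]
r_in = np.sqrt(stats[largest, cv2.CC_STAT_AREA] / np.pi)

# Outer boundary: smooth the image, then let the Hough transform search for a
# circle whose radius lies in an annulus around the pupil radius.
blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=img.shape[0],
                           param1=100, param2=30,
                           minRadius=int(r_in * 1.5), maxRadius=int(r_in * 4))
if circles is not None:
    ox, oy, r_out = circles[0, 0]
    print(f"inner: ({cx:.1f},{cy:.1f}) r={r_in:.1f}  outer: ({ox:.1f},{oy:.1f}) r={r_out:.1f}")
```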
{"title":"An improved iris localization method","authors":"Meisen Pan, Qi Xiong","doi":"10.34028/iajit/19/2/4","DOIUrl":"https://doi.org/10.34028/iajit/19/2/4","url":null,"abstract":"Iris research has become an inevitable trend in the application of identity recognition due to its uniqueness, stability, non-aggression and other advantages. In this paper, an improved iris localization method is presented. When the iris inner boundary is located, a method for extracting the iris inner boundary based on morphology operations with multi-structural elements is proposed. Firstly, the iris image is pre-processed, and then the circular connected region in the pre-processed image is determined, the parameters of the circular connected region is extracted, finally the center and the radius of the circular connected region is obtained, .i.e., the iris inner boundary is excavated. When the iris outer boundary is located, a method for locating iris outer boundary based on annular region and improved Hough transform is proposed. The iris image is first filtered, and then the filtered image is reduced and an annular region is intercepted, finally Hough transform is used to search the circle within the annular region, i.e., the center and the radius of the iris outer boundary is obtained. The experimental results show that the location accuracy rate of this proposed method is at least 95% and the average running time is increased by 46.2% even higher. Therefore, this proposed method has the advantages of high speed, high accuracy, strong robustness and practicability.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"51 1","pages":"173-185"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73565008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The secure transfer of video files for military applications makes video steganography on cloud computing an essential topic. Video steganography is the process of hiding secret data within a video, and it is based on either reversible or irreversible schemes. A reversible scheme can embed the secret data into a video and then recover the video without any loss of information once the secret data is extracted. Irreversible video steganography methods often deal with sensitive information, making the embedded payload an important concern in the design of these data-hiding systems. In this work, irreversible contrast mapping is used for embedding and extracting the secret data, which enables high-quality data hiding in video steganography. The analysis of the proposed Video Steganography Cloud Security (VSCS) method shows that the structure supports secure communication and augments confidentiality and security in the cloud. The results show that the proposed method achieves a better security level.
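To illustrate the general idea of hiding a payload in video frames, the sketch below embeds and recovers a few bits with simple least-significant-bit substitution; the paper itself uses irreversible contrast mapping, so this is only a generic stand-in, not the VSCS scheme.

```python
# Illustrative frame-level data hiding with LSB substitution; this is a generic
# stand-in for the idea, not the irreversible contrast mapping used in VSCS.
import numpy as np

def embed(frame, payload_bits):
    """Overwrite the least-significant bit of each pixel with payload bits."""
    flat = frame.flatten()
    bits = np.zeros(flat.size, dtype=np.uint8)
    bits[:len(payload_bits)] = payload_bits
    return ((flat & 0xFE) | bits).reshape(frame.shape)

def extract(frame, n_bits):
    """Read back the first n_bits least-significant bits."""
    return frame.flatten()[:n_bits] & 1

frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)     # stand-in video frame
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)   # stand-in payload
stego = embed(frame, secret)
assert np.array_equal(extract(stego, len(secret)), secret)
print("payload recovered, max pixel change:",
      np.abs(stego.astype(int) - frame.astype(int)).max())
```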
{"title":"Analysis of video steganography in military applications on cloud","authors":"Umadevi Ramamoorthy, Aruna Loganathan","doi":"10.34028/iajit/19/6/7","DOIUrl":"https://doi.org/10.34028/iajit/19/6/7","url":null,"abstract":"The analysis of secure video file transfer with military application for video steganography on cloud computing is essential role. Video steganography is the process of hiding the secret data which is presented in the video and it is based on the reversible and irreversible schemes. The reversible scheme has the capability to insert the secret data into a video and then recover the video without any failure of information when the secret data is extracted. Irreversible methods on video steganography often deal with sensitive information, making embedded payload an important concern in the design of these data hiding systems. In video steganography, irreversible contrast mapping is considered for extracting the secret data during the process of hiding the data. During this extraction process, high quality data hiding is carried in video steganography. The analysis consequences of the proposed method Video Steganography Cloud Security (VSCS) shows that the structure for secure communication and augments the confidentiality and security in cloud. This result of the proposed method shows the better security level.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"98 1","pages":"897-903"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79482046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target coverage algorithms have received considerable attention for monitoring target points by dividing sensor nodes into cover groups, with each sensor cover group containing the target points. When the number of sensors is restricted, optimal sensor node placement becomes a key task. By placing sensors in ideal positions, the quality of maximum target coverage and node connectivity can be increased. In this paper, a novel genetic algorithm based on the 2-D discrete Daubechies 4 (db4) lifting wavelet transform is proposed for determining optimal sensor positions. Initially, the genetic algorithm identifies population-based sensor locations, and the 2-D discrete db4 lifting adjusts each sensor location to an optimal position where it can cover the maximum number of targets while remaining connected to another sensor. To demonstrate that the suggested model outperforms existing methods, a series of experiments is carried out under various scenarios to achieve maximum target-point coverage, node interconnectivity, and network lifetime with a limited number of sensor nodes.
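As an illustration of the genetic-algorithm part of this pipeline, the sketch below evolves candidate sensor coordinates and scores them by the number of covered targets; the grid size, sensing radius, and GA parameters are invented, and the db4 lifting adjustment step is not reproduced.

```python
# Small GA sketch for sensor placement: a chromosome is a flat list of sensor
# coordinates, and fitness counts covered targets; all parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
targets = rng.uniform(0, 100, size=(20, 2))      # target points to cover
n_sensors, radius, pop_size, generations = 5, 25.0, 30, 50

def fitness(chromosome):
    sensors = chromosome.reshape(n_sensors, 2)
    dists = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
    return np.sum(dists.min(axis=1) <= radius)   # number of targets covered

population = rng.uniform(0, 100, size=(pop_size, n_sensors * 2))
for _ in range(generations):
    scores = np.array([fitness(c) for c in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, a.size)                               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 2.0, size=child.shape)               # Gaussian mutation
        children.append(np.clip(child, 0, 100))
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("targets covered:", fitness(best), "of", len(targets))
```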
{"title":"A Novel Genetic Algorithm with db4 Lifting for Optimal Sensor Node Placements","authors":"Ganesan Thangavel, P. Rajarajeswari","doi":"10.34028/iajit/19/5/12","DOIUrl":"https://doi.org/10.34028/iajit/19/5/12","url":null,"abstract":"Target coverage algorithms have considerable attention for monitoring the target point by dividing sensor nodes into cover groups, with each sensor cover group containing the target points. When the number of sensors is restricted, optimal sensor node placement becomes a key task. By placing sensors in the ideal position, the quality of maximum target coverage and node connectivity can be increased. In this paper, a novel genetic algorithm based on the 2-D discrete Daubechies 4 (db4) lifting wavelet transform is proposed for determining the optimal sensor position. Initially, the genetic algorithm identifies the population-based sensor location and 2-D discrete db4 lifting adjusts the sensor location into an optimal position where each sensor can cover a maximum number of targets that are connected to another sensor. To demonstrate that the suggested model outperforms the existing method, A series of experiments are carried out using various situations to achieve maximum target point coverage, node interconnectivity, and network lifetime with a limited number of sensor nodes.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"14 1","pages":"802-811"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76259702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fen He, Kimia Rezaei Kalantrai, A. Ebrahimnejad, H. Motameni
Software rejuvenation is an effective technique to counteract software aging in continuously running applications such as web-service-based systems. In client-server applications, where the server is intended to run perpetually, rejuvenating the server process periodically during its idle times increases the availability of that service. In these systems, web services are allocated based on the user's requirements and the server's facilities. Since selecting a service among candidates while maintaining optimal quality of service is a Non-Deterministic Polynomial (NP)-hard problem, meta-heuristics seem suitable. In this paper, we propose dynamic software rejuvenation as a proactive fault-tolerance technique based on a combination of the Cuckoo Search (CS) and Particle Swarm Optimization (PSO) algorithms, called Computer Program Deviation Request (CPDR). Simulation results on the Web Site Dream (WS-DREAM) dataset reveal that our strategy can decrease the failure rate of web services by 38.6 percent on average in comparison with the Genetic Algorithm (GA), Decision-Tree (DT), and Whale Optimization Algorithm (WOA) strategies.
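The sketch below shows the generic particle swarm optimization core that such a CS+PSO hybrid builds on, minimizing a stand-in objective that plays the role of a predicted failure rate; the cuckoo-search component and the WS-DREAM data are not modeled here.

```python
# Generic PSO core; the objective is a stand-in for an estimated failure rate,
# and the cuckoo-search hybridization from the paper is not reproduced.
import numpy as np

rng = np.random.default_rng(2)
dim, n_particles, iters = 4, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights

def objective(x):                                # stand-in for estimated failure rate
    return np.sum((x - 0.3) ** 2)

pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best estimated failure rate:", objective(gbest))
```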
{"title":"An effective fault-tolerance technique in web services: an approach based on hybrid optimization algorithm of PSO and cuckoo search","authors":"Fen He, Kimia Rezaei Kalantrai, A. Ebrahimnejad, H. Motameni","doi":"10.34028/iajit/19/2/10","DOIUrl":"https://doi.org/10.34028/iajit/19/2/10","url":null,"abstract":"Software rejuvenation is an effective technique to counteract software aging in continuously-running application such as web service based systems. In client-server applications, where the server is intended to run perpetually, rejuvenation of the server process periodically during the server idle times increases the availability of that service. In these systems, web services are allocated based on the user’s requirements and server’s facilities. Since the selection of a service among candidates while maintaining the optimal quality of service is an Non-Deterministic Polynomial (NP)-hard problem, Meta-heuristics seems to be suitable. In this paper, we proposed dynamic software rejuvenation as a proactive fault-tolerance technique based on a combination of Cuckoo Search (CS) and Particle Swarm Optimization (PSO) algorithms called Computer Program Deviation Request (CPDR). Simulation results on Web Site Dream (WS-DREAM) dataset revealed that our strategy can decrease the failure rate of web services on average 38.6 percent in comparison with Genetic Algorithm (GA), Decision-Tree (DT) and Whale Optimization Algorithm (WOA) strategies.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"6 1","pages":"230-236"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79979723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast cancer is the most common cancer among women. It is caused by genetic mutations and ageing, and lack of awareness often delays its detection. A tumour may be benign, which is not dangerous, or malignant, which is dangerous. Mammography is widely used for the early detection of breast cancer. A novel deep learning technique that combines a deep convolutional neural network with a random forest classifier is proposed to detect and categorize breast cancer. Feature extraction is carried out by the AlexNet model of the deep convolutional neural network, and classification precision is improved by the random forest classifier. The images are collected from various mammogram images in predefined datasets. The performance results confirm that the proposed scheme outperforms state-of-the-art schemes.
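As a sketch of the two-stage pipeline described above, the code below extracts AlexNet convolutional features and feeds them to a random forest classifier; the image tensors and labels are random stand-ins, and the preprocessing details are not the paper's.

```python
# Sketch of AlexNet feature extraction followed by a random-forest classifier;
# data are random stand-ins for mammogram patches and benign/malignant labels.
import torch
from torchvision.models import alexnet
from sklearn.ensemble import RandomForestClassifier

model = alexnet()            # pretrained weights could be loaded instead
model.eval()

def extract_features(images):
    """Return pooled convolutional features for a batch of 3x224x224 images."""
    with torch.no_grad():
        x = model.features(images)
        x = model.avgpool(x)
        return torch.flatten(x, 1).numpy()

# Stand-in data: 40 "mammogram" images, binary benign/malignant labels.
images = torch.randn(40, 3, 224, 224)
labels = torch.randint(0, 2, (40,)).numpy()

features = extract_features(images)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:30], labels[:30])
print("held-out accuracy:", clf.score(features[30:], labels[30:]))
```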
{"title":"A hybrid deep learning based assist system for detection and classification of breast cancer from mammogram images","authors":"K. Narayanan, R. Krishnan, Y. H. Robinson","doi":"10.34028/iajit/19/6/15","DOIUrl":"https://doi.org/10.34028/iajit/19/6/15","url":null,"abstract":"The most common cancer disease among all women is breast cancer. This type of disease is caused due to genetic mutation of ageing and lack of awareness. The tumour that occurred may be a benign type which is a non-dangerous and malignant type that is dangerous. The Mammography technique utilizes the early detection of breast cancer. A Novel Deep Learning technique that combines the deep convolutional neural networks and the random forest classifier is proposed to detect and categorize Breast cancer. The feature extraction is carried over by the AlexNet model of the Deep Convolutional Neural Network and the classifier precision is increased by Random Forest Classifier. The images are collected from the various Mammogram images of predefined datasets. The performance results confirm that the projected scheme has enhanced performance compared with the state-of-art schemes.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"29 1","pages":"965-974"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76752802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting human faces in low-resolution images is more difficult than in high-quality images because people appear smaller and facial features are not as clear as in high-resolution face images. Furthermore, the regions of interest are often impoverished or blurred due to the large distance between the camera and the objects, which can decrease the detection rate and increase false alarms. As a result, face detection performance (detection rate and number of false positives) in low-resolution images directly affects subsequent applications such as face recognition or face tracking. In this paper, a novel method based on a cascade Adaboost detector and the Histogram of Oriented Gradients (HOG) is proposed to improve face detection performance in low-resolution images, whereas most existing research has been conducted and tested on high-quality images. The focus of this work is to improve face detection performance by increasing the detection rate while decreasing the number of false alarms. The concept behind the proposed combination is the a-priori rejection of false positives for more accurate detection. In other words, the first stage (cascade Adaboost) removes the majority of the false alarms while keeping the detection rate high; however, many false alarms still remain in its output. To remove these remaining false alarms, a second stage (HOG+SVM) is added to act as a verification module for more accurate detection. The method has been extensively tested on the Carnegie Mellon University (CMU) database and a low-resolution image database. The results show better performance compared with existing techniques.
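The sketch below illustrates the proposed two-stage arrangement: an OpenCV cascade proposes face candidates, and a HOG+SVM stage verifies them to reject false alarms. The input path is hypothetical, and the verifier here is trained on random stand-in patches purely to keep the example self-contained; in practice it would be trained on face and non-face samples.

```python
# Two-stage sketch: cascade detector for candidates, HOG+SVM for verification.
# Training data for the verifier are random stand-ins.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hog_features(patch):
    patch = cv2.resize(patch, (64, 64))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Stand-in training data for the verification stage (random patches, random labels).
rng = np.random.default_rng(0)
train = np.array([hog_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)) for _ in range(20)])
verifier = LinearSVC().fit(train, rng.integers(0, 2, 20))

image = cv2.imread("group_photo.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
candidates = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=3)
faces = [(x, y, w, h) for (x, y, w, h) in candidates
         if verifier.predict([hog_features(image[y:y + h, x:x + w])])[0] == 1]
print(f"{len(candidates)} candidates, {len(faces)} verified faces")
```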
{"title":"A Fusion Approach Based on HOG and Adaboost Algorithm for Face Detection under Low-Resolution Images","authors":"Farhad Navabifar, M. Emadi","doi":"10.34028/iajit/19/5/4","DOIUrl":"https://doi.org/10.34028/iajit/19/5/4","url":null,"abstract":"Detecting human faces in low-resolution images is more difficult than high quality images because people appear smaller and facial features are not as clear as high resolution face images. Furthermore, the regions of interest are often impoverished or blurred due to the large distance between the camera and the objects which can decrease detection rate and increase false alarms. As a result, the performance of face detection (detection rate and the number of false positives) in low-resolution images can affect directly subsequent applications such as face recognition or face tracking. In this paper, a novel method, based on cascade Adaboost and Histogram of Oriented Gradients (HOG), is proposed to improve face detection performance in low resolution images, while most of researches have been done and tested on high quality images. The focus of this work is to improve the performance of face detection by increasing the detection rate and at the same time decreasing the number of false alarms. The concept behind the proposed combination is based on the a-priori rejection of false positives for a more accurate detection. In other words in order to increase human face detection performance, the first stage (cascade Adaboost) removes the majority of the false alarms while keeping the detection rate high, however many false alarms still exist in the final output. To remove existing false alarms, a stage (HOG+SVM) is added to the first stage to act as a verification module for more accurate detection. The method has been extensively tested on the Carnegie Melon University (CMU) database and the low-resolution images database. The results show better performance compared with existing techniques.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"4 1","pages":"728-735"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81867568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we address the vehicle routing problem for a heterogeneous fleet with a single depot and time windows, in which customer demands must be met under various constraints. A three-stage hierarchical method consisting of transportation, routing, and linear correction steps is proposed as the solution. In the first stage, customer demands are clustered using a simulated annealing algorithm and assigned to vehicles of the appropriate type and equipment so that routes are as short as possible. In the second stage, a genetic algorithm is used to find the optimal solution that satisfies both the requirements of the transported goods and the customer requirements. In the third stage, the optimality is further improved by a linear correction of the solution found in the second stage. The distinctive features of the application are the variety of constraints addressed and the close proximity to real logistics practice.
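As a small illustration of the routing stage, the sketch below computes the length of one vehicle's route from the depot and improves it with a 2-opt pass; the simulated-annealing clustering, the genetic-algorithm stage, and the linear correction are not reproduced, and the coordinates are invented.

```python
# Route length for one vehicle's cluster plus a 2-opt improvement pass;
# coordinates and cluster assignment are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
depot = np.array([50.0, 50.0])
customers = rng.uniform(0, 100, (8, 2))          # customers assigned to this vehicle

def route_length(order):
    points = np.vstack([depot, customers[order], depot])
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def two_opt(order):
    """Repeatedly reverse segments while that shortens the route."""
    best = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for j in range(i + 2, len(best) + 1):
                candidate = best[:i] + best[i:j][::-1] + best[j:]
                if route_length(candidate) < route_length(best):
                    best, improved = candidate, True
    return best

initial = list(range(len(customers)))
optimized = two_opt(initial)
print(route_length(initial), "->", route_length(optimized))
```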
{"title":"Compatibility Themed Solution of the Vehicle Routing Problem on the Heterogeneous Fleet","authors":"Metin Bilgin, N. Bulut","doi":"10.34028/iajit/19/5/9","DOIUrl":"https://doi.org/10.34028/iajit/19/5/9","url":null,"abstract":"In this study, we discuss the solution to the vehicle routing problem for a heterogeneous fleet with a depot and a time window satisfied by meeting customer demands with various constraints. A 3-stage hierarchical method consisting of transportation, routing, and linear correction steps is proposed for the solution. In the first stage, customer demands have the shortest routing. They were clustered using the annealing simulation algorithm and assigned vehicles of appropriate type and equipment. In the second stage, a genetic algorithm was used to find the optimal solution that satisfies both the requirements of the transported goods and the customer requirements. In the third stage, an attempt was made to increase the optimality by linear correction of the optimal solution found in the second stage. The unique feature of the application is the variety of constraints addressed by the problem and the close proximity to real logistics practice.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"53 1","pages":"774-784"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87617412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}