This paper proposes an Intrusion Detection Technique (IDT) using an Artificial Immune System (AIS) based on the Negative Selection Algorithm (NSA) to distinguish self from non-self (intrusion) in computer networks. The novelties of the work are: 1) the use of Stacked Autoencoders (SAEs) and random forest for dimensionality reduction of the data, 2) the use of AIS to exploit its features such as self-learning, distribution, self-adaptation, and self-regulation, together with the ability to distinguish self from non-self, 3) the implementation of two algorithms, NSA based on Cosine distance (NSA_CD) and NSA based on Pearson distance (NSA_PD), to explore their intrusion detection capabilities, and 4) the development of a new ensemble-voting-based Intrusion Detection Technique (IDT-NSAEV) to detect and test anomalies in the system. The proposed IDT-NSAEV technique combines the power of the NSA_CD, NSA_PD, and NSA based on Euclidean distance (NSA_ED) algorithms to enhance the detection rate while reducing the false alarm rate. The performance of the proposed technique is tested on the standard benchmark NSL-KDD dataset and the results are compared with state-of-the-art techniques. The results favour the proposed technique.
"Ensemble voting based intrusion detection technique using negative selection algorithm," Kuldeep Singh, L. Kaur, R. Maini, Int. Arab J. Inf. Technol., pp. 151-158, 2023. doi: 10.34028/iajit/20/2/1
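The ensemble scheme described above can be illustrated in code. The following is a minimal sketch, not the authors' implementation: detectors are random points kept only if they match no self sample under a given distance (negative selection), a sample is flagged as an anomaly if it falls within a detector's radius, and the three distance variants vote by majority. All radii, dimensions, and the toy self region are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine(a, b):
    return 1.0 - float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def pearson(a, b):
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

def generate_detectors(self_set, dist, radius, n_detectors, dim, max_trials=2000):
    """Negative selection: keep random candidates that match no self sample.
    May return fewer than n_detectors if the trial budget runs out."""
    detectors = []
    for _ in range(max_trials):
        if len(detectors) == n_detectors:
            break
        cand = rng.random(dim)
        if all(dist(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomaly(x, detectors, dist, radius):
    return any(dist(x, d) <= radius for d in detectors)

def ensemble_vote(x, models):
    """Majority vote of the three NSA variants (Euclidean, cosine, Pearson)."""
    votes = sum(is_anomaly(x, det, dist, r) for det, dist, r in models)
    return votes >= 2

# toy self region in one corner of the unit hypercube
self_set = rng.random((20, 4)) * 0.3
models = [(generate_detectors(self_set, d, r, 20, 4), d, r)
          for d, r in [(euclidean, 0.3), (cosine, 0.2), (pearson, 0.2)]]
print(ensemble_vote(self_set[0], models))  # a training self sample is never flagged
```

By construction every detector lies strictly farther than `radius` from every self training sample, so self samples can never be matched; this is what keeps the false alarm rate low in the voting scheme.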
Ramzi Zouari, Dalila Othmen, H. Boubaker, M. Kherallah
In this study, we developed a new system for online Arabic handwriting recognition based on temporal residual networks with a multi-head attention model. The main idea behind the application of the attention mechanism was to focus on the most relevant parts of the data through a weighted combination of all input sequences. Moreover, we applied the beta-elliptic approach to represent both the kinematic and geometric aspects of the handwriting motion. This approach represents the neuromuscular impulses involved in the writing act. In the dynamic profile, the curvilinear velocity can be fitted by an algebraic sum of overlapped beta functions, while the original trajectory can be rebuilt from elliptic arcs delimited by successive extremum-velocity instants. The experiments were conducted on the LMCA database, which contains the trajectory coordinates of 23141 Arabic handwritten letters, and showed very promising results, achieving a recognition rate of 97.12%.
"Temporal residual network based multi-head attention model for Arabic handwriting recognition," Int. Arab J. Inf. Technol., pp. 469-476, 2023. doi: 10.34028/iajit/20/3a/4
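The beta-function velocity model can be sketched numerically. Below is an illustrative implementation (not the authors' code): each neuromuscular impulse is a beta profile that is zero outside its support and peaks at the time tc, and the curvilinear velocity is an algebraic sum of overlapped impulses. The specific supports, peaks, and exponents are made-up example values, chosen so that tc = (p·t1 + q·t0)/(p + q) and the peak really falls at tc.

```python
import numpy as np

def beta_profile(t, t0, t1, tc, p, q, K=1.0):
    """One neuromuscular impulse: zero outside (t0, t1), peak value K at tc.
    p and q are chosen so tc = (p*t1 + q*t0)/(p + q), placing the peak at tc."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = (t > t0) & (t < t1)
    ti = t[inside]
    out[inside] = K * ((ti - t0) / (tc - t0)) ** p * ((t1 - ti) / (t1 - tc)) ** q
    return out

# curvilinear velocity as an algebraic sum of overlapped beta impulses
t = np.linspace(0.0, 1.0, 501)
velocity = (beta_profile(t, 0.00, 0.45, 0.18, 2, 3)
            + beta_profile(t, 0.25, 0.75, 0.50, 2, 2)
            + beta_profile(t, 0.55, 1.00, 0.82, 3, 2))
```

The minima between successive impulses are the extremum-velocity instants that, in the full beta-elliptic approach, delimit the elliptic arcs used to rebuild the trajectory.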
Pedestrian detection is one of the important areas in computer vision. This work addresses detecting multi-directional pedestrian movements: left, right, and frontal. Upon recognizing the direction of movement, the system can raise an alert depending on the environmental circumstances. Since multiple pedestrians moving in different directions may be present in a single image, a Convolutional Neural Network (CNN) is not suitable for recognizing the multi-directional movement of pedestrians. Moreover, the Faster R-CNN (FR-CNN) gives faster responses than other detection algorithms. In this work, a modified Faster Recurrent Convolutional Neural Network (MFR-CNN), a cognitive approach, is proposed for detecting the direction of movement of pedestrians, and it can be deployed in real time. Fine-tuning of the convolutional layers is performed to extract more information about the image contained in the feature map. The anchors used in the detection process are modified to focus on pedestrians present within a given range, which is the major concern for such automated systems. The proposed model reduced the execution time and obtained an accuracy of 88%. The experimental evaluation indicates that the proposed model can outperform the other methods by tagging each pedestrian individually with the direction in which they move.
"A cognitive approach to predict the multi-directional trajectory of pedestrians," Jayachitra Virupakshipuram Panneerselvam, Bharanidharan Subramaniam, Mathangi Meenakshisundaram, Int. Arab J. Inf. Technol., pp. 242-252, 2023. doi: 10.34028/iajit/20/2/11
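The anchor modification mentioned above can be illustrated with a small sketch. This is not the paper's implementation; it simply shows the standard Faster R-CNN anchor-generation step restricted to tall, narrow boxes (height/width ratios above 1) of a few scales, the kind of restriction that focuses proposals on upright pedestrians within a size range. The scales and ratios are assumed example values.

```python
import numpy as np

def pedestrian_anchors(feat_h, feat_w, stride, scales=(32, 64, 128),
                       ratios=(2.0, 2.5, 3.0)):
    """Anchor boxes (x1, y1, x2, y2) centred on each feature-map cell.

    ratios are height/width; values > 1 give the tall, narrow boxes
    typical of upright pedestrians, replacing the generic square and
    wide anchors of a stock Faster R-CNN."""
    anchors = []
    for cy in range(feat_h):
        for cx in range(feat_w):
            px, py = (cx + 0.5) * stride, (cy + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s / np.sqrt(r)   # keep the anchor area near s*s
                    h = s * np.sqrt(r)
                    anchors.append([px - w / 2, py - h / 2,
                                    px + w / 2, py + h / 2])
    return np.array(anchors)

a = pedestrian_anchors(4, 4, stride=16)  # 4x4 cells x 9 anchors = 144 boxes
```

Limiting scales and ratios this way shrinks the proposal search space, which is one plausible source of the reduced execution time the abstract reports.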
Research in speaker recognition has grown recently due to its tremendous applications in security, criminal investigations, and other major fields. A speaker's identity is characterised by the way they speak, not by the spoken words. Hence, identifying hearing-impaired speakers from their speech is a challenging task, since their speech is highly distorted. In this paper, a new task is introduced: recognizing Hearing-Impaired (HI) speakers using speech as a biometric in the native language Tamil. Although their speech is very hard to recognize even for their parents and teachers, the proposed system identifies them accurately by applying speech enhancement. Due to the huge variety in their utterances, instead of the spectrogram of the raw speech, Mel-Frequency Cepstral Coefficient features are derived from the speech and applied as a spectrogram to a Convolutional Neural Network (CNN), which is not necessary for ordinary speakers. In the proposed system for recognizing HI speakers, the CNN is used as the modelling technique to assess the performance of the system; this deep learning network provides 80% accuracy and the system is less complex. An Auto-Associative Neural Network (AANN) is also used as a modelling technique and achieves only 9% accuracy, so the CNN performs better than the AANN for recognizing HI speakers. Hence, this system is very useful for biometric systems and other security-related applications for hearing-impaired speakers.
"Robust Hearing-Impaired Speaker Recognition from Speech using Deep Learning Networks in Native Language," Jeyalakshmi Chelliah, KiranBala Benny, Revathi Arunachalam, Viswanathan Balasubramanian, Int. Arab J. Inf. Technol., pp. 102-112, 2023. doi: 10.34028/iajit/20/1/11
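The MFCC pipeline the abstract relies on (frame the speech, take the power spectrum, pool it through a mel filterbank, take logs, then a DCT to get cepstral coefficients) can be sketched with NumPy alone. This is a minimal illustration, not the paper's feature extractor; the frame length, hop, filter count, and coefficient count are conventional example values, and the resulting (frames × coefficients) matrix is what would be fed to the CNN as a spectrogram-like image.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters equally spaced on the mel scale."""
    pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    # frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # log mel-band energies
    logmel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II decorrelates the bands into cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.arange(n_coeffs)[:, None] * (2 * n + 1) / (2 * n_filters))
    return logmel @ dct.T   # shape: (n_frames, n_coeffs)

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s test tone
feats = mfcc(sig)
```

Stacking `feats` over time gives the two-dimensional MFCC "image" that replaces the raw-speech spectrogram for the distorted HI utterances.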
Hand gesture recognition is a preferred way for human-robot interaction. Conventional approaches are generally based on image processing and recognition of hand poses against simple backgrounds. In this paper, we propose deep learning models and humanoid robot integration for offline and online (real-time) recognition and control using hand gestures. One thousand two hundred hand images belonging to four participants were collected to construct the hand gesture database. Images of five classes (forward, backward, right, left, and stop) against six sophisticated backgrounds with different illumination levels were obtained for the four participants, and one participant's images were kept as testing data. A lightweight Convolutional Neural Network (CNN) and transfer learning techniques using VGG16 and MobileNetV2 are applied to this database to evaluate the user-independent performance of the hand gesture system. After offline training, a real-time implementation is designed using a mobile phone (Wi-Fi and camera), a Wi-Fi router, a computer with embedded deep learning algorithms, and the NAO humanoid robot. Video streamed by the mobile phone is processed and recognized using the proposed deep algorithm on the computer, and the command is then transferred to the robot via the TCP/IP protocol. Thus, NAO humanoid robot control using hand gestures in the RGB and HSV color spaces is evaluated against sophisticated backgrounds, and the implementation of the system is presented. In our simulations, accuracy rates of 95% and 100% are obtained for the lightweight CNN and transfer learning, respectively.
"Convolutional neural network based hand gesture recognition in sophisticated background for humanoid robot control," Int. Arab J. Inf. Technol., pp. 368-375, 2023. doi: 10.34028/iajit/20/3/9
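The computer-to-robot link in the pipeline above is plain TCP/IP: the recognized gesture class is serialised to a command string and sent over a socket. The sketch below illustrates that step only, using a loopback socket pair in place of the real computer-to-NAO connection; the command names mirror the five gesture classes, and the newline framing is an assumed convention, not the paper's protocol.

```python
import socket

# the five gesture classes mapped to robot commands (framing is illustrative)
COMMANDS = {0: "forward", 1: "backward", 2: "right", 3: "left", 4: "stop"}

def send_command(sock, class_id):
    """Serialise a recognised gesture class as a newline-terminated command."""
    sock.sendall(COMMANDS[class_id].encode() + b"\n")

def recv_command(sock):
    """Read until the newline delimiter, then decode the command."""
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(64)
    return buf.strip().decode()

# loopback demo standing in for the computer -> NAO TCP link
client, server = socket.socketpair()
send_command(client, 4)
cmd = recv_command(server)
client.close()
server.close()
print(cmd)  # prints "stop"
```

Delimiter-based framing matters here because TCP is a byte stream: a single `recv` is not guaranteed to return exactly one command.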
The dawn of conversational user interfaces, through which humans communicate with computers by voice, has arrived. Therefore, Natural Language Processing (NLP) techniques are required to focus not only on text but also on audio speech. Keyword extraction is a technique for extracting key phrases from a document; these can provide summaries of the document and be used in text classification. Existing keyword extraction techniques have commonly been used only on typed-text datasets. With the advent of text data from speech recognition engines, which are less accurate than typed texts, the suitability of keyword extraction is questionable. This paper evaluates the suitability of conventional keyword extraction methods on a speech-to-text corpus. A new audio dataset for keyword extraction is collected using the World Wide Web (WWW) corpus. The performances of Rapid Automatic Keyword Extraction (RAKE) and TextRank are evaluated with different stoplists on both the originally typed corpus and the corresponding Speech-To-Text (STT) corpus from the audio. The metrics of precision, recall, and F1 score were considered for the evaluation. From the obtained results, TextRank with the FOX stoplist showed the highest performance on both the text and audio corpora, with F1 scores of 16.59% and 14.22%, respectively. Despite lagging behind the text corpus, the F1 score recorded by the TextRank technique on the audio corpus is significant enough for its adoption in audio conversation without much concern. However, the absence of punctuation in the STT output affected the F1 scores of all the techniques.
"Performance Evaluation of Keyword Extraction Techniques and Stop Word Lists on Speech-To-Text Corpus," Blessed Guda, B. Nuhu, J. Agajo, I. Aliyu, Int. Arab J. Inf. Technol., pp. 134-140, 2023. doi: 10.34028/iajit/20/1/14
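RAKE's dependence on a stoplist, which the paper varies, is easy to see in a compact sketch. This is a minimal illustration with a tiny made-up stopword set, not the evaluated implementation: candidate phrases are maximal runs of non-stopwords, each word is scored by (degree + frequency) / frequency over the co-occurrence graph within phrases, and a phrase scores the sum of its word scores.

```python
import re
from collections import defaultdict

# tiny illustrative stoplist; the paper compares much larger ones (e.g., FOX)
STOPWORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
             "in", "is", "it", "of", "on", "or", "the", "to", "which", "with"}

def rake(text, top_k=3):
    """Candidate phrases are maximal runs of non-stopwords; each word scores
    (degree + frequency) / frequency and a phrase sums its word scores."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, cur = [], []
    for w in words:
        if w in STOPWORDS:
            if cur:
                phrases.append(cur)
                cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    freq, degree = defaultdict(int), defaultdict(int)
    for ph in phrases:
        for w in ph:
            freq[w] += 1
            degree[w] += len(ph) - 1   # co-occurrences within the phrase
    score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}
    ranked = sorted(phrases, key=lambda ph: sum(score[w] for w in ph),
                    reverse=True)
    return [" ".join(ph) for ph in ranked[:top_k]]

print(rake("keyword extraction is applied to the speech corpus and the text corpus"))
```

Because phrase boundaries come from stopwords and punctuation, STT output with no punctuation merges candidates into longer, noisier phrases, which is consistent with the F1 drop the paper observes on the audio corpus.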
D. Ammous, Achraf Chabbouh, Awatef Edhib, A. Chaari, F. Kammoun, N. Masmoudi
Although recent Deep Learning (DL) algorithms have achieved high accuracy in many Computer Vision (CV) tasks, detecting humans in video streams is still a challenging problem. Several studies have therefore focused on regularisation techniques to prevent overfitting, one of the most fundamental issues in the Machine Learning (ML) area. Likewise, this paper thoroughly examines these techniques, proposing an improved You Only Look Once (YOLO)v3-tiny based on a modified neural network and an adjusted hyperparameter file configuration. The experimental results, validated on two tests, show that the proposed method is more effective than the YOLOv3-tiny predecessor model. The first test, which includes only the data augmentation techniques, indicates that the proposed approach reaches higher accuracy rates than the original YOLOv3-tiny model. Indeed, the Visual Object Classes (VOC) test dataset accuracy rate increases by 32.54% compared to the initial model. The second test, which combines the three tasks, reveals that the adopted combined method outperforms the existing model. For instance, the labelled crowd_human test dataset accuracy percentage rises by 22.7% compared to the data augmentation model.
"Improved YOLOv3-tiny for silhouette detection using regularisation techniques," Int. Arab J. Inf. Technol., pp. 270-281, 2023. doi: 10.34028/iajit/20/2/14
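Data augmentation of the kind used in the first test can be sketched in a few lines. This is an illustrative stand-in, not the paper's augmentation pipeline: a random horizontal flip plus a brightness-style intensity jitter applied to a uint8 image, with the jitter range chosen arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img):
    """Random horizontal flip plus brightness/exposure jitter, the kind of
    augmentation applied before regularised detector training."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]            # horizontal flip
    out *= rng.uniform(0.7, 1.3)         # brightness/exposure jitter
    return np.clip(out, 0, 255).astype(np.uint8)

img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
batch = np.stack([augment(img) for _ in range(8)])  # 8 augmented views
```

Each call produces a slightly different training view of the same frame, which is exactly how augmentation acts as a regulariser against overfitting.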
Qiang Lin, Zhengxing Man, Yongchun Cao, Haijun Wang
Single Photon Emission Computed Tomography (SPECT) imaging can acquire information about areas of concern in a non-invasive manner. Until now, however, deep learning based classification of SPECT images has not been studied. To examine the ability of convolutional neural networks to classify whole-body SPECT bone scan images, in this work we propose three different two-class classifiers based on the classical Visual Geometry Group (VGG) model. The proposed classifiers can automatically identify whether or not a SPECT image includes lesions by classifying the image into categories. Specifically, a pre-processing method is proposed to convert each SPECT file into an image by balancing the differences in detected uptake between SPECT files, normalizing the elements of each file into an interval, and splitting an image into batches. Second, different strategies were introduced into the classical VGG16 model to develop classifiers while minimizing the number of parameters as much as possible. Lastly, a group of clinical whole-body SPECT bone scan files was utilized to evaluate the developed classifiers. Experimental results show that our classifiers are workable for automated classification of SPECT images, obtaining best values of 0.838, 0.929, 0.966, 0.908 and 0.875 for accuracy, precision, recall, F1 score and AUC, respectively.
"Automated Classification of Whole-Body SPECT Bone Scan Images with VGG-Based Deep Networks," Int. Arab J. Inf. Technol., pp. 1-8, 2023. doi: 10.34028/iajit/20/1/1
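Two of the pre-processing steps named above (normalizing the elements of each file into an interval, and splitting an image into pieces) can be shown concretely. The following is a minimal sketch under assumed shapes and patch sizes, not the paper's pre-processing code; the hypothetical `scan` array stands in for the detected-uptake counts of one SPECT file.

```python
import numpy as np

def normalize_to_interval(arr, lo=0.0, hi=1.0):
    """Min-max map detected-uptake counts into a fixed interval so that
    files with very different global count levels become comparable."""
    a = arr.astype(np.float64)
    amin, amax = a.min(), a.max()
    if amax == amin:
        return np.full_like(a, lo)
    return lo + (a - amin) * (hi - lo) / (amax - amin)

def split_into_patches(img, patch_h, patch_w):
    """Split a 2-D scan into non-overlapping patches (remainders dropped)."""
    h, w = img.shape
    img = img[: h - h % patch_h, : w - w % patch_w]
    return (img.reshape(h // patch_h, patch_h, w // patch_w, patch_w)
               .swapaxes(1, 2)
               .reshape(-1, patch_h, patch_w))

# hypothetical uptake map standing in for one whole-body SPECT file
scan = np.random.default_rng(2).integers(0, 5000, size=(256, 96)).astype(float)
patches = split_into_patches(normalize_to_interval(scan), 64, 48)
```

Each normalized patch can then be fed to the VGG-based classifier as one input image, which keeps the network's input size fixed regardless of scan length.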
The research work presented in this paper aims to review Digital Forensics (DF) techniques and trends. As computer technology advances day by day, the chances of data being misused and tampered with also grow daily. Advances in technology result in various cyber-attacks on computers and mobile devices. DF plays a vital role in the investigation and prevention of cyber-attacks: it can be used to find shreds of evidence and to prevent attacks from happening in the future. Earlier reviews highlighted only specific issues in DF. This paper explores DF issues in depth by highlighting domain-specific issues and areas where DF can help. The article presents the investigation process framework and related approaches for the digital investigation process. The cognitive and human factors that affect the DF process are also presented to strengthen the investigation process. Nowadays, many DF tools that help in DF investigation are available in the industry; a comparative analysis of four DF tools is also presented. Finally, DF performance is discussed. The presented work may help researchers go deeper into DF and apply the best tools and models according to their requirements.
"Digital forensics techniques and trends: a review," Himanshu Dubey, Shobha Bhatt, Lokesh Negi, Int. Arab J. Inf. Technol., pp. 644-654, 2023. doi: 10.34028/iajit/20/4/11
In this paper, we propose a new linear algorithm to tackle a specific class of unrelated machine scheduling problem that arises in important real-life situations, which we call Batch Scheduling on Unrelated Machines (BSUM): a batch of identical and non-preemptive jobs must be scheduled on unrelated parallel machines. The objective is to minimize the makespan (Cmax) of the whole schedule. For this, a mathematical formulation is given and a lower bound is computed from the structural properties of the problem in order to reduce the search space size and thus accelerate the algorithm. Another property is deduced to design the algorithm that solves this problem. BSUM is a particular case of the Rm||Cmax family of problems, which are strongly NP-hard; therefore, a polynomial reduction can bring significant efficiency in treating them. As we will show, BSUM is omnipresent in several kinds of applications, such as manufacturing, transportation, logistics, and routing, and is of major importance in many company activities. The problem complexity and the optimality of the algorithm are reported, proven, and discussed.
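To make the problem concrete, here is a minimal sketch under an assumption not spelled out in the abstract: since all jobs in the batch are identical, each unrelated machine `i` needs a fixed time `p[i]` per job, so a schedule reduces to a count of jobs per machine and the makespan is the maximum of `count * p[i]`. The search below is a generic binary search on the makespan value, not the authors' linear algorithm; the names `bsum_makespan` and `p` are illustrative only.

```python
def bsum_makespan(p, n):
    """Minimum makespan for n identical, non-preemptive jobs on
    unrelated parallel machines, where machine i spends p[i] time
    units per job (illustrative model, not the paper's algorithm)."""
    lo, hi = 0, n * min(p)  # upper bound: all jobs on the fastest machine
    while lo < hi:
        mid = (lo + hi) // 2
        # With deadline mid, machine i can absorb mid // p[i] jobs;
        # mid is feasible iff the total capacity covers all n jobs.
        if sum(mid // pi for pi in p) >= n:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(bsum_makespan([2, 3, 7], 10))  # -> 12
```

For `p = [2, 3, 7]` and `n = 10`, the optimum is 12: six jobs on the first machine (12), three on the second (9), one on the third (7). The feasibility test also illustrates how a lower bound (the smallest C whose total capacity reaches n) prunes the search space, echoing the role the lower bound plays in the paper.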
{"title":"Exact algorithm for batch scheduling on unrelated machine","authors":"Hemmak Allaoua","doi":"10.34028/iajit/20/4/8","DOIUrl":"https://doi.org/10.34028/iajit/20/4/8","url":null,"abstract":"In this paper, we propose a new linear algorithm to tackle a specific class of unrelated machine scheduling problem, considered as an important real-life situation, which we called Batch Scheduling on Unrelated Machine (BSUM), where we have to schedule a batch of identical and non-preemptive jobs on unrelated parallel machines. The objective is to minimize the makespan (Cmax) of the whole schedule. For this, a mathematical formulation is made and a lower bound is computed based on the potential properties of the problem in order to reduce the search space size and thus accelerate the algorithm. Another property is also deducted to design our algorithm that solves this problem. The latter is considered as a particular case of RmCmax family problems known as strongly NP-hard, therefore, a polynomial reduction should realize a significant efficiency to treat them. As we will show, Batch BSUM is omnipresent in several kind of applications as manufacturing, transportation, logistic and routing. It is of major importance in several company activities. The problem complexity and the optimality of the algorithm are reported, proven and discussed.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"16 1","pages":"618-623"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81600193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}