
Latest publications in Int. Arab J. Inf. Technol.

Ensemble voting based intrusion detection technique using negative selection algorithm
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/2/1
Kuldeep Singh, L. Kaur, R. Maini
This paper proposes an Intrusion Detection Technique (IDT) using an Artificial Immune System (AIS) based on the Negative Selection Algorithm (NSA) to distinguish self and non-self (intrusion) in computer networks. The novelties of the work are: 1) use of Stacked Autoencoders (SAEs) and random forest for dimensionality reduction of the data; 2) use of AIS to exploit its features such as self-learning, distribution, self-adaptation, and self-regulation, together with its ability to distinguish self from non-self; 3) implementation of two algorithms, NSA based on Cosine distance (NSA_CD) and NSA based on Pearson distance (NSA_PD), to explore their intrusion detection capabilities; and 4) development of a new ensemble-voting-based Intrusion Detection Technique (IDT-NSAEV) to detect and test anomalies in the system. The proposed IDT-NSAEV technique combines the power of the NSA_CD, NSA_PD and NSA based on Euclidean distance (NSA_ED) algorithms to enhance the detection rate by reducing the false alarm rate. The performance of the proposed technique is tested on the standard benchmark NSL-KDD dataset and the results are compared with state-of-the-art techniques. The results are in favour of the proposed technique.
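The abstract gives no implementation details, so the following is only a minimal NumPy sketch of the negative-selection and majority-voting idea it describes; the feature dimension, detector radii, detector counts, and the toy "self" traffic are illustrative assumptions rather than values from the paper. In practice, the SAE/random-forest-reduced NSL-KDD features would replace the toy data, and each detector bank would use a matching radius tuned for its own distance measure.

```python
import numpy as np

def cosine_dist(a, b):
    # Matching rule for NSA_CD: 1 - cosine similarity.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def pearson_dist(a, b):
    # Matching rule for NSA_PD: 1 - Pearson correlation.
    return 1.0 - np.corrcoef(a, b)[0, 1]

def euclidean_dist(a, b):
    # Matching rule for NSA_ED.
    return float(np.linalg.norm(a - b))

def generate_detectors(self_set, dist, radius, n_detectors, dim, rng, max_tries=20000):
    """Negative selection: keep random candidates that match no 'self' (normal) sample."""
    detectors, tries = [], 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        candidate = rng.random(dim)
        if all(dist(candidate, s) > radius for s in self_set):
            detectors.append(candidate)
    return detectors

def is_non_self(x, detectors, dist, radius):
    """A sample is flagged as non-self (possible intrusion) if any detector matches it."""
    return any(dist(x, d) <= radius for d in detectors)

def ensemble_vote(x, banks):
    """Majority vote across the three distance-specific detector banks."""
    return sum(is_non_self(x, dets, dist, radius) for dets, dist, radius in banks) >= 2

# Toy demo: 'self' traffic lives in a small region of a 4-dimensional feature space.
rng = np.random.default_rng(0)
self_set = 0.4 + 0.2 * rng.random((100, 4))
banks = [(generate_detectors(self_set, d, r, 30, 4, rng), d, r)
         for d, r in ((cosine_dist, 0.02), (pearson_dist, 0.02), (euclidean_dist, 0.25))]
print(ensemble_vote(np.array([0.95, 0.05, 0.90, 0.10]), banks))  # point far from the 'self' region
print(ensemble_vote(np.array([0.52, 0.47, 0.55, 0.44]), banks))  # point inside the 'self' region
```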
Citations: 0
Temporal residual network based multi-head attention model for arabic handwriting recognition
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/3a/4
Ramzi Zouari, Dalila Othmen, H. Boubaker, M. Kherallah
In this study, we developed a new system for online Arabic handwriting recognition based on temporal residual networks with a multi-head attention model. The main idea behind the attention mechanism is to focus on the most relevant parts of the data through a weighted combination of all input sequences. Moreover, we applied the beta-elliptic approach to represent both the kinematic and geometric aspects of the handwriting motion. This approach represents the neuromuscular impulses involved in the writing act. In the dynamic profile, the curvilinear velocity can be fitted by an algebraic sum of overlapped beta functions, while the original trajectory can be rebuilt from elliptic arcs delimited between successive extremum-velocity instants. The experiments were conducted on the LMCA database, which contains the trajectory coordinates of 23141 Arabic handwriting letters, and showed very promising results, achieving a recognition rate of 97.12%.
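As a rough illustration of the beta-elliptic idea (not the authors' code), the sketch below builds a curvilinear-velocity profile as the algebraic sum of overlapped beta impulses; the pulse parameterisation, the fixed shape exponent q, and the sample stroke parameters are assumptions made for this example, and the elliptic-arc reconstruction of the trajectory is not shown.

```python
import numpy as np

def beta_impulse(t, t0, t1, tc, amplitude, q=2.0):
    """One neuromuscular impulse as a beta-shaped pulse supported on [t0, t1];
    p is derived from q so that the pulse peaks exactly at tc with height `amplitude`."""
    p = q * (tc - t0) / (t1 - tc)
    out = np.zeros_like(t, dtype=float)
    m = (t > t0) & (t < t1)
    out[m] = amplitude * ((t[m] - t0) / (tc - t0)) ** p * ((t1 - t[m]) / (t1 - tc)) ** q
    return out

def curvilinear_velocity(t, impulses):
    """Velocity profile as the algebraic sum of overlapped beta impulses."""
    return sum(beta_impulse(t, *params) for params in impulses)

# Three overlapping strokes, each given as (t0, t1, tc, amplitude) -- illustrative values.
t = np.linspace(0.0, 1.0, 401)
impulses = [(0.00, 0.40, 0.15, 1.0),
            (0.25, 0.70, 0.45, 0.8),
            (0.55, 1.00, 0.80, 1.2)]
v = curvilinear_velocity(t, impulses)
print(float(t[np.argmax(v)]), float(v.max()))   # time and value of the peak velocity
```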
Citations: 0
A cognitive approach to predict the multi-directional trajectory of pedestrians
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/2/11
Jayachitra Virupakshipuram Panneerselvam, Bharanidharan Subramaniam, Mathangi Meenakshisundaram
Pedestrian detection is one of the important areas in computer vision. This work is about detecting a pedestrian's multi-directional movements: left, right, and front. Once the direction of movement is recognized, the system can be alerted depending on the environmental circumstances. Since multiple pedestrians moving in different directions may be present in a single image, a Convolutional Neural Network (CNN) is not suitable for recognizing the multi-directional movement of pedestrians. Moreover, the Faster R-CNN (FR-CNN) gives faster response output compared to other detection algorithms. In this work, a modified Faster Recurrent Convolutional Neural Network (MFR-CNN), a cognitive approach, is proposed for detecting the direction of movement of pedestrians, and it can be deployed in real time. A fine-tuning of the convolutional layers is performed to extract more information about the image contained in the feature map. The anchors used in the detection process are modified to focus on pedestrians present within a range, which is the major concern for such automated systems. The proposed model reduced the execution time and obtained an accuracy of 88%. The experimental evaluation indicates that the proposed novel model can outperform the other methods by tagging each pedestrian individually in the direction in which they move.
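The exact anchor settings are not given in the abstract, so the snippet below is only a generic sketch of how detection anchors can be biased towards tall, narrow boxes within a restricted size range, which is the kind of modification described; the base size, scales, and aspect ratios are illustrative assumptions.

```python
import numpy as np

def pedestrian_anchors(base_size=16, scales=(2, 4, 8), ratios=(2.0, 2.5, 3.0)):
    """Generate (width, height) anchor shapes biased towards pedestrian-like boxes.
    ratios are height/width; scales bound the size range of interest."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = base_size * s / np.sqrt(r)   # preserve area (base_size * s)^2
            h = base_size * s * np.sqrt(r)   # tall, narrow shapes for standing pedestrians
            anchors.append((w, h))
    return np.array(anchors)

print(pedestrian_anchors().round(1))
```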
Citations: 0
Robust Hearing-Impaired Speaker Recognition from Speech using Deep Learning Networks in Native Language
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/1/11
Jeyalakshmi Chelliah, KiranBala Benny, Revathi Arunachalam, Viswanathan Balasubramanian
Research in speaker recognition has grown recently due to its tremendous applications in security, criminal investigations, and other major fields. A speaker is identified by the way they speak, not by the spoken words. Hence the identification of hearing-impaired speakers from their speech is a challenging task, since their speech is highly distorted. In this paper, a new task has been introduced for recognizing Hearing-Impaired (HI) speakers using speech as a biometric in their native language, Tamil. Though their speech is very hard to recognize, even for their parents and teachers, our proposed system accurately identifies them by applying enhancement to their speech. Due to the huge variety in their utterances, instead of applying the spectrogram of raw speech, Mel Frequency Cepstral Coefficient features are derived from the speech and applied as a spectrogram to a Convolutional Neural Network (CNN), which is not necessary for ordinary speakers. In the proposed system for recognizing HI speakers, this deep learning network is used as the modelling technique to assess the performance of the system; it provides 80% accuracy and the system is less complex. An Auto Associative Neural Network (AANN) is also used as a modelling technique; its performance is only 9% accurate, and CNN is found to perform better than AANN for recognizing HI speakers. Hence this system is very useful for biometric systems and other security-related applications for hearing-impaired speakers.
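To make the front end concrete, here is a minimal sketch (not the authors' network) that turns an utterance into an MFCC "spectrogram" image and builds a small CNN classifier using librosa and Keras; the number of MFCCs, the fixed frame count, the layer sizes, and the number of speakers are assumptions, and the wav path passed to `mfcc_image` is a placeholder.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_image(wav_path, sr=16000, n_mfcc=40, frames=200):
    """Load an utterance, compute MFCCs and pad/crop them to a fixed-size 'image'."""
    y, _ = librosa.load(wav_path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)          # shape (n_mfcc, time)
    m = librosa.util.fix_length(m, size=frames, axis=1)          # fixed time axis
    return m[..., np.newaxis].astype(np.float32)                 # (n_mfcc, frames, 1)

def build_speaker_cnn(n_speakers, n_mfcc=40, frames=200):
    """Small CNN that treats the MFCC matrix as a single-channel spectrogram image."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_mfcc, frames, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_speakers, activation="softmax"),
    ])

model = build_speaker_cnn(n_speakers=10)       # number of HI speakers is an assumption
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```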
Citations: 0
Convolutional neural network based hand gesture recognition in sophisticated background for humanoid robot control
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/3/9
Ali Yildiz, N. G. Adar, A. Mert
Hand gesture recognition is a preferred way for human-robot interaction. Conventional approaches are generally based on image processing and recognition of hand poses with simple backgrounds. In this paper, we propose deep learning models and humanoid robot integration for offline and online (real-time) recognition and control using hand gestures. One thousand two hundred hand images belonging to four participants are collected to construct the hand gesture database. Images of five classes (forward, backward, right, left, and stop) in six sophisticated backgrounds with different illumination levels are obtained for the four participants, and one participant's images are kept as testing data. A lightweight Convolutional Neural Network (CNN) and transfer learning techniques using VGG16 and MobileNetV2 are applied to this database to evaluate the user-independent performance of the hand gesture system. After offline training, a real-time implementation is designed using a mobile phone (Wi-Fi and camera), a Wi-Fi router, a computer with embedded deep learning algorithms, and a NAO humanoid robot. Video streamed by the mobile phone is processed and recognized with the proposed deep model on the computer, and the command is then transferred to the robot via the TCP/IP protocol. Thus, NAO humanoid robot control using hand gestures in the RGB and HSV color spaces is evaluated in sophisticated backgrounds, and the implementation of the system is presented. In our simulations, accuracy rates of 95% and 100% are obtained for the lightweight CNN and transfer learning, respectively.
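A minimal transfer-learning sketch in the spirit of the VGG16 variant described above (the MobileNetV2 variant, data loading, and the participant-wise split are omitted); the head layers, dropout rate, and training call are assumptions, not the paper's configuration.

```python
import tensorflow as tf

def build_gesture_model(n_classes=5, input_shape=(224, 224, 3)):
    """VGG16 backbone pre-trained on ImageNet, frozen, with a small classification head
    for the five gesture classes (forward, backward, right, left, stop)."""
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=input_shape)
    base.trainable = False                      # transfer learning: reuse ImageNet features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_gesture_model()
model.summary()
# model.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=20)
# (train/test arrays above are placeholders for the collected gesture database)
```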
Citations: 0
Performance Evaluation of Keyword Extraction Techniques and Stop Word Lists on Speech-To-Text Corpus
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/1/14
Blessed Guda, B. Nuhu, J. Agajo, I. Aliyu
The dawn of conversational user interfaces, through which humans communicate with computers by voice, has arrived. Therefore, Natural Language Processing (NLP) techniques are required to focus not only on text but also on audio speech. Keyword extraction is a technique to extract key phrases from a document; these can provide summaries of the document and be used in text classification. Existing keyword extraction techniques have commonly been applied only to text/typed datasets. With the advent of text data from speech recognition engines, which is less accurate than typed text, the suitability of keyword extraction is questionable. This paper evaluates the suitability of conventional keyword extraction methods on a speech-to-text corpus. A new audio dataset for keyword extraction is collected using the World Wide Web (WWW) corpus. The performance of Rapid Automatic Keyword Extraction (RAKE) and TextRank is evaluated with different stoplists on both the originally typed corpus and the corresponding Speech-To-Text (STT) corpus obtained from the audio. The metrics of precision, recall, and F1 score were considered for the evaluation. From the obtained results, TextRank with the FOX stoplist showed the highest performance on both the text and audio corpora, with F1 scores of 16.59% and 14.22%, respectively. Despite lagging behind the text corpus, the F1 score recorded by the TextRank technique on the audio corpus is significant enough for its adoption in audio conversation without much concern. However, the absence of punctuation in the STT output affected the F1 score of all the techniques.
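For concreteness, here is a compact from-scratch sketch of the RAKE idea evaluated in the paper: candidate phrases are delimited by stop words and punctuation and scored by word degree over frequency. The tiny stoplist and the example sentence are placeholders, this is not the implementation or the stoplists used in the study, and TextRank is not shown.

```python
import re
from collections import defaultdict

def rake_keywords(text, stop_words, top_k=5):
    """Minimal RAKE: candidate phrases are runs of non-stop words between stop words or
    punctuation; each phrase scores the sum of its words' degree/frequency ratios."""
    phrases = []
    for fragment in re.split(r"[.,;:!?()\n]", text.lower()):
        current = []
        for word in re.findall(r"[a-z']+", fragment):
            if word in stop_words:
                if current:
                    phrases.append(current)
                current = []
            else:
                current.append(word)
        if current:
            phrases.append(current)

    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)          # co-occurrence degree (includes the word itself)
    word_score = {w: degree[w] / freq[w] for w in freq}
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: -kv[1])[:top_k]

stoplist = {"the", "of", "a", "an", "and", "is", "are", "to", "in", "on", "for",
            "with", "which", "can", "be", "used", "out"}
text = ("Keyword extraction is a technique to extract key phrases out of a document, "
        "which can provide summaries of the document and be used in text classification.")
print(rake_keywords(text, stoplist))
```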
Citations: 2
Improved YOLOv3-tiny for silhouette detection using regularisation techniques
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/2/14
D. Ammous, Achraf Chabbouh, Awatef Edhib, A. Chaari, F. Kammoun, N. Masmoudi
Although recent advances in Deep Learning (DL) algorithms have achieved high accuracy in many Computer Vision (CV) tasks, detecting humans in video streams is still a challenging problem. Several studies have therefore focused on regularisation techniques to prevent overfitting, one of the most fundamental issues in the Machine Learning (ML) area. This paper thoroughly examines these techniques, proposing an improved You Only Look Once (YOLO)v3-tiny based on a modified neural network and an adjusted hyperparameter file configuration. The experimental results, validated in two experimental tests, show that the proposed method is more effective than the YOLOv3-tiny predecessor model. The first test, which includes only the data augmentation techniques, indicates that the proposed approach reaches higher accuracy rates than the original YOLOv3-tiny model: the Visual Object Classes (VOC) test dataset accuracy rate increases by 32.54% compared to the initial model. The second test, which combines the three tasks, reveals that the adopted combined method gains over the existing model. For instance, the accuracy on the labelled crowd_human test dataset rises by 22.7% compared to the data augmentation model.
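The abstract does not reproduce the adjusted hyperparameter file, so the snippet below only illustrates the kind of data augmentation being evaluated: a horizontal flip plus simple photometric jitter applied with NumPy. The jitter ranges are assumptions, and a full detection pipeline would also transform the bounding-box labels.

```python
import numpy as np

def augment(image, rng):
    """Simple detector-style augmentation: horizontal flip plus brightness/saturation jitter.
    image: HxWx3 float array with values in [0, 1]."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                               # horizontal flip (boxes must be mirrored too)
    out *= rng.uniform(0.7, 1.3)                            # exposure / brightness jitter
    mean = out.mean(axis=2, keepdims=True)
    out = mean + rng.uniform(0.7, 1.3) * (out - mean)       # crude saturation-style jitter
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((416, 416, 3))                             # 416x416 is a common YOLO input size
print(augment(img, rng).shape)
```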
Citations: 3
Automated Classification of Whole-Body SPECT Bone Scan Images with VGG-Based Deep Networks
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/1/1
Qiang Lin, Zhengxing Man, Yongchun Cao, Haijun Wang
Single Photon Emission Computed Tomography (SPECT) imaging has the potential to acquire information about areas of concern in a non-invasive manner. Until now, however, deep learning based classification of SPECT images has not been studied. To examine the ability of convolutional neural networks to classify whole-body SPECT bone scan images, in this work we propose three different two-class classifiers based on the classical Visual Geometry Group (VGG) model. The proposed classifiers are able to automatically identify whether or not a SPECT image includes lesions by classifying the image into categories. Specifically, a pre-processing method is proposed to convert each SPECT file into an image by balancing the difference in detected uptake between SPECT files, normalizing the elements of each file into an interval, and splitting each image into batches. Second, different strategies are introduced into the classical VGG16 model to develop classifiers while minimizing the number of parameters as much as possible. Lastly, a group of clinical whole-body SPECT bone scan files is utilized to evaluate the developed classifiers. Experimental results show that our classifiers are workable for automated classification of SPECT images, obtaining best values of 0.838, 0.929, 0.966, 0.908 and 0.875 for accuracy, precision, recall, F1 score and AUC, respectively.
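A small sketch of the pre-processing described above — normalising the elements of one SPECT file into a fixed interval and splitting the resulting image into equally sized batches. The array shape, the [0, 1] interval, the patch height, and the fake uptake counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalize_to_interval(arr, lo=0.0, hi=1.0):
    """Map the detected uptake values of one SPECT file into [lo, hi]."""
    a_min, a_max = float(arr.min()), float(arr.max())
    if a_max == a_min:                               # constant file: avoid division by zero
        return np.full_like(arr, lo, dtype=np.float32)
    return lo + (hi - lo) * (arr - a_min) / (a_max - a_min)

def split_into_batches(image, patch_height):
    """Split a whole-body image into equally sized horizontal batches (patches)."""
    h = image.shape[0] - image.shape[0] % patch_height       # drop the remainder rows
    return image[:h].reshape(-1, patch_height, image.shape[1])

spect = np.random.randint(0, 4000, size=(512, 128)).astype(np.float32)   # fake uptake counts
norm = normalize_to_interval(spect)
batches = split_into_batches(norm, patch_height=128)
print(norm.min(), norm.max(), batches.shape)                 # 0.0 1.0 (4, 128, 128)
```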
Citations: 4
Digital forensics techniques and trends: a review
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/4/11
Himanshu Dubey, Shobha Bhatt, Lokesh Negi
The research work presented in this paper aims to review Digital Forensics (DF) techniques and trends. As computer technology advances day by day, the chances of data being misused and tampered with also grow daily. Advances in technology result in various cyber-attacks on computers and mobile devices. DF plays a vital role in the investigation and prevention of cyber-attacks. DF can be used to find shreds of evidence and prevent attacks from happening in the future. Earlier reviews highlighted only specific issues in DF. This paper explores DF issues in depth by highlighting domain-specific issues and areas where DF can be helpful. It presents the investigation process framework and related approaches for the digital investigation process. The cognitive and human factors that affect the DF process are also presented to strengthen the investigation process. Nowadays, many DF tools that help in DF investigation are available in the industry. A comparative analysis of four DF tools is also presented. Finally, DF performance is discussed. The submitted work may help researchers go deeper into DF and apply the best tools and models according to their requirements.
Citations: 2
Exact algorithm for batch scheduling on unrelated machine
Pub Date: 2023-01-01 | DOI: 10.34028/iajit/20/4/8
Hemmak Allaoua
In this paper, we propose a new linear algorithm to tackle a specific class of unrelated-machine scheduling problems that arises in important real-life situations, which we call Batch Scheduling on Unrelated Machines (BSUM): a batch of identical, non-preemptive jobs has to be scheduled on unrelated parallel machines. The objective is to minimize the makespan (Cmax) of the whole schedule. For this, a mathematical formulation is given and a lower bound is computed based on the properties of the problem in order to reduce the search space size and thus accelerate the algorithm. Another property is deduced to design the algorithm that solves the problem. The latter is a particular case of the Rm||Cmax family of problems, which is known to be strongly NP-hard; therefore, a polynomial reduction should bring significant efficiency in treating them. As we will show, BSUM is omnipresent in several kinds of applications, such as manufacturing, transportation, logistics, and routing. It is of major importance in several company activities. The problem complexity and the optimality of the algorithm are reported, proven and discussed.
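The paper's linear algorithm is not reproduced in the abstract. As an illustration of the problem structure only, the sketch below computes the fractional lower bound T ≥ n / Σ(1/p_i) for a batch of n identical jobs where machine i needs p_i per job, and finds the optimal makespan with a simple binary search over candidate values using the feasibility test Σ_i ⌊T/p_i⌋ ≥ n; this is not the authors' linear-time method, and the per-job times are example values.

```python
def lower_bound(n_jobs, p):
    """Fractional relaxation: if every machine stayed busy until T, then
    T * sum(1 / p_i) >= n, hence T >= n / sum(1 / p_i)."""
    return n_jobs / sum(1.0 / pi for pi in p)

def feasible(T, n_jobs, p):
    """With makespan T, machine i can process floor(T / p_i) jobs of the batch."""
    return sum(int(T // pi) for pi in p) >= n_jobs

def min_makespan(n_jobs, p):
    """Binary search over candidate makespans; the optimum is always some k * p_i."""
    lo = lower_bound(n_jobs, p)
    hi = n_jobs * min(p)                       # put the whole batch on the fastest machine
    candidates = sorted({k * pi for pi in p for k in range(1, n_jobs + 1)
                         if lo <= k * pi <= hi})
    left, right, best = 0, len(candidates) - 1, hi
    while left <= right:
        mid = (left + right) // 2
        if feasible(candidates[mid], n_jobs, p):
            best, right = candidates[mid], mid - 1
        else:
            left = mid + 1
    return best

p = [3.0, 5.0, 8.0]                            # time of one job on each unrelated machine
print(lower_bound(10, p), min_makespan(10, p)) # ~15.19 and 16.0 (5, 3 and 2 jobs per machine)
```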
Citations: 0