Pub Date: 2022-10-01 | DOI: 10.33166/aetic.2022.04.005
Rasel Iqbal Emon, Md. Mehedi Hassan Onik, A. Hussain, Toufiq Ahmed Tanna, Md. Akhtaruzzaman Emon, Muhammad Al Amin Rifat, Mahdi H. Miraz
A distributed ledger technology, embedded with privacy and security by architecture, provides a transparent platform for application development. Additionally, edge computing is rapidly gaining traction, bringing computing and data storage closer to the user end (device) in order to overcome network bottlenecks. This study therefore utilises the transparency, security and efficiency of blockchain technology, along with the computing and storage facilities available at the edge level, to establish privacy-preserving storage and tracking schemes for electronic health records (EHRs). Since an EHR stored in a block is accessible by the peer-to-peer (P2P) nodes, privacy has always been a matter of great concern for any blockchain-based activity. To address this privacy issue, multilevel blockchain, which can enforce and preserve the complete privacy and security of any blockchain-based application or environment, has become one of the recent blockchain research trends. In this article, we propose an EHR sharing architecture consisting of three interrelated multilevel, or hierarchical, chains confined within three different network layers using edge computing. Furthermore, since EHRs are sensitive, a specific data de-identification, or anonymisation, strategy is also applied to further strengthen the privacy and security of the shared data.
Title: Privacy-preserved Secure Medical Data Sharing Using Hierarchical Blockchain in Edge Computing. Journal: Annals of Emerging Technologies in Computing (Journal Article).
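The de-identification step can be illustrated with a minimal sketch: direct identifiers are replaced with salted hashes before a record is shared, so the stored copy carries pseudonyms rather than raw identities. The field names, the salted SHA-256 scheme and the 16-character truncation are assumptions for illustration, not the specific anonymisation strategy of the paper.

```python
import hashlib
import os

def deidentify_record(record, salt, identifiers=("name", "national_id")):
    """Replace direct identifiers in an EHR dict with salted, truncated hashes.

    Illustrative only: the field names and hashing scheme are assumptions.
    """
    out = dict(record)
    for field in identifiers:
        if field in out:
            digest = hashlib.sha256(salt + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # truncated pseudonym
    return out

salt = os.urandom(16)                 # per-deployment secret salt
ehr = {"name": "Jane Doe", "national_id": "A123", "diagnosis": "J45.9"}
anon = deidentify_record(ehr, salt)   # clinical fields survive, identities do not
```

A fresh salt per deployment prevents a dictionary attack from re-linking the pseudonyms to known identities.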
Pub Date: 2022-10-01 | DOI: 10.33166/aetic.2022.04.004
Y. H. Chow, K. Ooi, Mohammad Arif Sobhan Bhuiyan, M. Reaz, C. W. Yuen
The advent of modern computational tools in the field of transportation can help forecast optimised vehicular routes and traffic network topologies, using traffic conditions from real-world data as inputs. In this study, the topologies of one-way and two-way street networks were analysed using microscopic traffic simulations on the SUMO (Simulation of Urban MObility) platform, in order to assess the effect of street conversion in Downtown Brickfields, Kuala Lumpur. It was found that one-way streets perform better at the onset of traffic congestion due to their higher capacity but, on average, their four-fold longer travel times make it harder to clear traffic by getting vehicles to their destinations than two-way streets. As time progresses, congestion on one-way streets may become twice as bad as that on two-way streets. This study may contribute to a more holistic assessment of traffic circulation plans designed for smart and liveable cities.
Title: Computation and Optimization of Traffic Network Topologies Using Eclipse SUMO. Journal: Annals of Emerging Technologies in Computing (Journal Article).
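The capacity-versus-detour trade-off reported above can be sketched with the standard BPR (Bureau of Public Roads) volume-delay function. This toy model, with made-up volumes and capacities, is not the SUMO microsimulation used in the paper; it merely illustrates why a one-way street (higher per-direction capacity but longer routes) can lose at moderate demand yet win as congestion builds.

```python
def bpr_travel_time(free_flow_t, volume, capacity, alpha=0.15, beta=4):
    """Standard BPR volume-delay function: t = t0 * (1 + alpha * (v/c)**beta)."""
    return free_flow_t * (1 + alpha * (volume / capacity) ** beta)

# Illustrative links: the one-way route detours (longer free-flow time)
# but offers higher per-direction capacity. Units: seconds and veh/h.
two_way = dict(free_flow_t=60.0, capacity=1000.0)
one_way = dict(free_flow_t=90.0, capacity=1800.0)

t2_low = bpr_travel_time(volume=900.0, **two_way)    # moderate demand
t1_low = bpr_travel_time(volume=900.0, **one_way)
t2_high = bpr_travel_time(volume=1700.0, **two_way)  # congested demand
t1_high = bpr_travel_time(volume=1700.0, **one_way)
```

At moderate demand the detour dominates (the one-way trip is slower); under congestion the two-way link's delay term explodes while the one-way link stays below capacity.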
Pub Date: 2022-10-01 | DOI: 10.33166/aetic.2022.04.003
Xiaomin Zhao, Qiang Tuo, Ruosi Guo, Tengteng Kong
The isolation of mixed music signals facilitates the extraction and identification of music signal features and enhances music signal quality. This paper briefly introduces the mathematical model for blind source separation of mixed music signals and the traditional Independent Component Analysis (ICA) algorithm. The separation algorithm was optimised using a complex neural network. The traditional and optimised ICA algorithms were simulated in MATLAB. It was found that the time-domain waveform of the signal isolated by the improved ICA-based separation algorithm was closer to the source signal. The similarity coefficient matrix, signal-to-interference ratio, performance index and iteration time of the improved ICA-based algorithm were 62.3, 0.0011 and 0.87 s, respectively, all superior to those of the traditional ICA algorithm. The novelty of this paper lies in setting the initial iterative matrix of the ICA algorithm with the complex neural network.
Title: Research on Music Signal Processing Based on a Blind Source Separation Algorithm. Journal: Annals of Emerging Technologies in Computing (Journal Article).
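The ICA separation stage can be sketched with a standard FastICA run on two synthetic mixed signals. This uses scikit-learn's off-the-shelf FastICA with its default initialisation, not the complex-neural-network initialisation proposed in the paper; the signals and mixing matrix below are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # tonal source
s2 = np.sign(np.sin(3 * t))             # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix
X = S @ A.T                             # two observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # recovered sources (up to order/scale/sign)
```

Each recovered column matches one original source up to permutation, scale and sign, which is the inherent ambiguity of blind source separation.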
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.002
Mei Zhang
Chunks play an important role in applied linguistics, for example in Teaching English as a Second Language (TESL) and Computer-Aided Translation (CAT). Although corpora are already widely used in these areas, the annotation and recognition of chunks are still mainly done manually. Computer- and linguistics-based chunk recognition is significant in natural language processing (NLP). This paper briefly introduces the intelligent recognition of English chunks and applies the Recurrent Neural Network (RNN) to recognise them. To strengthen the RNN, it was improved with Long Short-Term Memory (LSTM) for recognising English chunks. The LSTM-RNN was compared with the support vector machine (SVM) and the plain RNN in simulation experiments. The results suggested that the performance of the LSTM-RNN was always the highest when dealing with English texts, regardless of whether it was trained on a general corpus or a corpus of specialised domain knowledge.
Title: Analysis of Intelligent English Chunk Recognition based on Knowledge Corpus. Journal: Annals of Emerging Technologies in Computing (Journal Article).
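The LSTM improvement mentioned above rests on gated memory. A minimal single-step LSTM forward pass in NumPy shows the gating mechanics; the dimensions and random weights are illustrative, and this is not the paper's trained chunk-recognition model.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate blocks, in order: input, forget, candidate, output."""
    H = h_prev.shape[0]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # candidate cell update
    o = sigmoid(z[3 * H:])        # output gate
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state (fed to the tag classifier)
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4                       # embedding and hidden sizes (illustrative)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, U, b)
```

In a chunk tagger, this step would be applied across the token sequence and `h` fed to a per-token classification layer (e.g. B/I/O chunk labels).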
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.005
Ahmed Saleem Mahdi, S. A. Mahmood
Demand is growing for rapid response systems in forest regions that enable an immediate and appropriate response to wildfires before they spread across vast areas. This paper introduces a multilevel system for early wildfire detection to help public authorities immediately identify and attend to emergency demands. The presented work is designed and implemented within an edge computing infrastructure. At the first level, the wildfire dataset samples, represented by a set of video sequences, are collected and labelled for training purposes. The YOLOv5 deep learning model is then adopted in our framework to build a trained model that distinguishes fire events from non-fire events in binary classification. The proposed system comprises IoT entities equipped with camera sensors and an NVIDIA Jetson Nano Developer Kit as the edge computing environment. At the lowest level, a video camera assembles environment information, which is received by the middle-level micro-controller to handle and detect possible fire events in the area of interest. The last level makes the decision, sending a text message and snapshot images to the cloud server. Meanwhile, a set of commands is sent to IoT nodes to operate the speakers and sprinklers, which are assumed to be strategically placed on the ground to raise an alarm and prevent wildlife loss. The proposed system was tested and evaluated using a wildfire dataset constructed by our own efforts. The experimental results exhibited 98% accurate detection of fire events in the video sequences. Further, a comparison with recent methods is performed to confirm the results obtained.
Title: An Edge Computing Environment for Early Wildfire Detection. Journal: Annals of Emerging Technologies in Computing (Journal Article).
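The decision level described above can be sketched as a small edge-side rule: raise the alarm only when the detector reports fire confidently in several recent frames, then emit the alert and actuator commands. The thresholds, window size and action names are assumptions for illustration, not the paper's exact design.

```python
from collections import deque

class FireAlarm:
    """Edge-side decision sketch: alarm when the detector exceeds a
    confidence threshold in enough recent frames (debounces single-frame
    false positives). Parameters are illustrative assumptions."""

    def __init__(self, threshold=0.5, window=5, min_hits=3):
        self.threshold = threshold
        self.min_hits = min_hits
        self.recent = deque(maxlen=window)   # rolling detection history

    def update(self, fire_confidence):
        self.recent.append(fire_confidence >= self.threshold)
        if sum(self.recent) >= self.min_hits:
            return {"alert": True,
                    "actions": ["send_sms", "upload_snapshot",
                                "speaker_on", "sprinkler_on"]}
        return {"alert": False, "actions": []}

alarm = FireAlarm()
for conf in (0.9, 0.1, 0.8):      # two hits so far: no alarm yet
    alarm.update(conf)
decision = alarm.update(0.95)     # third confident frame triggers the alarm
```

Requiring several hits within a window trades a few frames of latency for robustness against flicker and sensor noise.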
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.001
Jinfeng Li
Among antennas for Industrial, Scientific and Medical (ISM) band applications at 433 MHz, the quarter-wave monopole is a reasonably good trade-off between size, gain and cost. The electrical performance of the monopole depends largely on the quality of the ground plane (its size and conductivity), which imposes a practical limit on the achievable gain, as most industrial user environments can provide only a finite ground plane of finite electrical conductivity. Establishing traceability in understanding the performance degradation caused by these grounding dimension and conductivity limits is becoming mandatory. To this end, this work leverages universal MATLAB, in place of off-the-shelf software (HFSS or CST), to simulate the quarter-wave monopole antenna at 433 MHz, parametrised by the ground plane's dimension with respect to the wavelength (λ). Results indicate that, as the ground plane is enlarged from 0.14 λ to 14 λ, the gain (directivity for PEC) from the 3D radiation pattern rises from 1.79 dBi, starts levelling off at 6.7 dBi (5.78 λ) and saturates at 7.49 dBi (13 λ). The radiation efficiency and gain for various grounding conductivity scenarios (e.g., gold, aluminium, steel) are also quantified to inform antenna designers and engineers for commercial, industrial, defence and space applications.
Title: Performance Limits of 433 MHz Quarter-wave Monopole Antennas due to Grounding Dimension and Conductivity. Journal: Annals of Emerging Technologies in Computing (Journal Article).
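The geometry behind these λ-scaled results is a one-line calculation: at 433 MHz the free-space wavelength is about 0.69 m, so the quarter-wave element is roughly 17.3 cm, and the simulated ground-plane span from 0.14 λ to 14 λ corresponds to roughly 0.1 m up to 9.7 m. A quick sketch:

```python
C = 299_792_458.0            # speed of light in vacuum, m/s
f = 433e6                    # ISM-band carrier frequency, Hz

wavelength = C / f           # free-space wavelength, m (~0.692 m)
monopole_length = wavelength / 4   # quarter-wave element, m (~0.173 m)

ground_small = 0.14 * wavelength   # smallest simulated ground plane, m
ground_large = 14 * wavelength     # largest simulated ground plane, m
```

The 14 λ case (nearly ten metres) makes concrete why the saturated 7.49 dBi figure is out of reach for typical industrial installations.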
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.006
J. Uddin
Wireless multimedia sensor networks (WMSNs) are now used in numerous applications. Many robust, energy-efficient routing protocols have been proposed to handle multimedia traffic-intensive data, such as images and videos, in WMSNs. A common trend in the literature is to equip a WMSN with numerous sinks, allowing cluster heads (CHs) to distribute the collected data to the adjacent sink node to mitigate delivery overhead; however, using multiple sink nodes can be expensive and may incur high routing complexity. Meanwhile, many single-sink cluster-based routing protocols for WMSNs lack optimal path selection among CHs and, as a result, suffer from transmission and queueing delay under high data volume. To address these two conflicting issues, we propose a reinforcement learning (RL) based data aggregation mechanism for CHs (RL-CH) in WMSNs. The proposed method can be integrated into any cluster-based routing protocol for intelligent data transmission to the sink node via cooperative CHs. The proposed RL-CH protocol performs better in terms of energy efficiency, end-to-end delay, packet delivery ratio (PDR) and network lifetime: it achieves a 17.6% decrease in average end-to-end delay, a 7.7% increase in PDR and a 3.2% increase in network lifetime compared to the evolutionary game-based routing protocol used as the baseline.
Title: A Novel Data Aggregation Mechanism using Reinforcement Learning for Cluster Heads in Wireless Multimedia Sensor Networks. Journal: Annals of Emerging Technologies in Computing (Journal Article).
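The RL-CH idea of a cluster head learning which cooperative neighbour to forward through can be sketched with plain tabular Q-learning. The states, actions and reward values below are made up for illustration; the paper's actual state and reward design (energy, delay, PDR) is not specified in the abstract.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update rule."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# A cluster head CH1 chooses between two neighbour CHs; "done" is terminal.
Q = {"CH1": {"CH2": 0.0, "CH3": 0.0}, "done": {}}
for _ in range(300):
    q_update(Q, "CH1", "CH2", reward=1.0, next_state="done")  # low delay, fresh battery
    q_update(Q, "CH1", "CH3", reward=0.2, next_state="done")  # congested, low energy

best_hop = max(Q["CH1"], key=Q["CH1"].get)  # the learned forwarding choice
```

With repeated feedback the Q-values converge to the (hypothetical) per-hop rewards, so the CH settles on the energy-rich, low-delay neighbour.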
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.003
Johan Note, Maaruf Ali
Attacks against computer networks, “cyber-attacks”, are now commonplace, affecting almost every Internet-connected device on a daily basis. Organisations now use machine learning and deep learning to thwart these attacks because of their effectiveness without the need for human intervention. The biggest advantage of machine learning is its ability to detect, curtail, prevent, recover from and even deal with untrained types of attacks without being explicitly programmed. This research presents the many different types of algorithms employed to fight the different types of cyber-attacks, which are also explained. The classification algorithms, their implementation, accuracy and testing time are presented. The algorithms employed for this experiment were the Gaussian Naïve Bayes, Logistic Regression, Support Vector Machine (SVM), Stochastic Gradient Descent, Decision Tree, Random Forest, Gradient Boosting and K-Nearest Neighbour algorithms, the Artificial Neural Network (ANN) (here we also employed the Multilayer Perceptron), the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). The study concluded that, amongst the various machine learning algorithms, the Logistic Regression and Decision Tree classifiers took a very short time to implement while giving an accuracy of over 90% for malware detection on various test datasets. The Gaussian Naïve Bayes classifier, though fast to implement, only gave an accuracy between 51% and 88%. The Multilayer Perceptron, non-linear SVM and Gradient Boosting algorithms all took a very long time to implement. The algorithm that performed with the greatest accuracy was the Random Forest classification algorithm.
Title: Comparative Analysis of Intrusion Detection System Using Machine Learning and Deep Learning Algorithms. Journal: Annals of Emerging Technologies in Computing (Journal Article).
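The accuracy-versus-training-time comparison can be reproduced in miniature with scikit-learn. This loop mirrors the methodology (fit each classifier, record accuracy and time) on a synthetic dataset, so the numbers will not match the paper's results on real intrusion datasets.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an intrusion dataset (benign vs malicious).
X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("naive_bayes", GaussianNB()),
                  ("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("decision_tree", DecisionTreeClassifier(random_state=0)),
                  ("random_forest", RandomForestClassifier(random_state=0))]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)                     # timed training step
    results[name] = {"accuracy": clf.score(X_te, y_te),
                     "train_seconds": time.perf_counter() - start}
```

The same harness extends to the other classifiers in the study (SVM, SGD, gradient boosting, k-NN) by appending entries to the list.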
Pub Date: 2022-07-01 | DOI: 10.33166/aetic.2022.03.004
P. Chakraborty, A. Sarkar
Building a re-configurable business process (BP) has gained importance in business organisations, as it helps an organisation adapt to agile business goals. A proper context-driven re-configurable BP should be capable of integrating dynamic context information; however, this capability is absent in existing studies. As a result, providing a suitable, expressive and re-configurable BP to the stakeholders of a business organisation has become a challenging issue. Prevailing research lacks proper consideration and suitable incorporation of context-driven services to make a BP re-configurable, so that it can quickly respond and change its behaviour to adapt to a rapidly and unpredictably changing business environment. In addition, those methods hardly offer any appropriate technique for using the set of specified goals, determined by the stakeholders of a business organisation, to extract context-driven services. This paper proposes a new method of re-configuring a context-driven BP from a defined goal to address these vital challenges. Present context data is incorporated into an existing BP to achieve a modified goal, which immensely benefits end-users; the approach is thus intrinsically user-centric, reusable, fast and inexpensive. To achieve this, an algorithm called the Context-driven Re-configurable Business Process Achievement Algorithm (CDRBPA) is introduced and implemented. Based on the Primary Context (PC), three software metrics, namely the degree of re-usability (DRUPC), degree of re-appropriation (DRAPC) and degree of re-configurability (DRPC), are proposed to measure the modifications made to the existing BP. Finally, various case studies of different complexities were performed to show the strength of the proposed algorithm.
Title: Dynamic Context Driven Re-configurable Business Process. Journal: Annals of Emerging Technologies in Computing (Journal Article).
Pub Date : 2022-04-01DOI: 10.33166/aetic.2022.02.005
Afra Binth Osman, Faria Tabassum, M. Patwary, Ahmed Imteaj, Touhidul Alam, Mohammad Arif Sobhan Bhuiyan, Mahdi H. Miraz
Mental soundness is a condition of well-being wherein a person understands his or her potential, participates in his or her community and is able to deal effectively with the challenges and obstacles of everyday life. It circumscribes how an individual thinks, feels and responds to any circumstance. Mental strain is generally recognised as a social concern, potentially leading to functional impairment at work. Chronic stress may also be linked with several physiological illnesses. The purpose of this research is to examine existing analyses of mental health outcomes to which diverse Deep Learning (DL) and Machine Learning (ML) algorithms have been applied. Applying our exclusion and inclusion criteria, 52 articles were finally selected from the search results obtained from various research databases and repositories. This literature on ML and mental health outcomes offers insight into the avant-garde techniques developed and employed in this domain. The review also compares and contrasts various deep learning techniques for predicting a person's state of mind based on different types of data, such as social media data, clinical data, etc. Finally, the open issues and future challenges of utilising deep learning algorithms to better understand as well as diagnose the mental state of any individual are discussed. From the literature survey, it is evident that the use of ML and DL in mental health has yielded significant achievements, mostly in the areas of diagnosis, therapy, support, research and clinical governance.
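The review surveys ML/DL models that predict a person's state of mind from data such as social media posts. As a minimal sketch of the basic shape of such a text classifier, the self-contained naive Bayes model below (standard library only) learns from a handful of labelled posts; the example posts, labels and function names are invented for illustration and bear no relation to any clinically validated model from the surveyed papers.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and label frequencies."""
    counts = defaultdict(Counter)
    label_totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        label_totals[label] += 1
    return counts, label_totals

def predict(counts, label_totals, text):
    """Pick the label maximising log prior + Laplace-smoothed log likelihoods."""
    vocab = {w for c in counts.values() for w in c}
    words = text.lower().split()
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        total = sum(word_counts.values())
        score = math.log(label_totals[label] / sum(label_totals.values()))
        for w in words:
            score += math.log((word_counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy corpus of labelled posts:
docs = [
    ("feel hopeless cannot sleep at night", "distressed"),
    ("everything is pointless so tired lately", "distressed"),
    ("great walk with friends today", "healthy"),
    ("excited about the new project at work", "healthy"),
]
counts, totals = train(docs)
print(predict(counts, totals, "cannot sleep everything feels pointless"))
```

Real systems in the surveyed literature typically replace this word-count model with learned embeddings and deep architectures (CNNs, LSTMs, transformers), but the pipeline shape (labelled text in, predicted mental-state category out) is the same.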
{"title":"Examining Mental Disorder/Psychological Chaos through Various ML and DL Techniques: A Critical Review","authors":"Afra Binth Osman, Faria Tabassum, M. Patwary, Ahmed Imteaj, Touhidul Alam, Mohammad Arif Sobhan Bhuiyan, Mahdi H. Miraz","doi":"10.33166/aetic.2022.02.005","DOIUrl":"https://doi.org/10.33166/aetic.2022.02.005","url":null,"abstract":"Mental soundness is a condition of well-being wherein a person understands his or her potential, participates in his or her community and is able to deal effectively with the challenges and obstacles of everyday life. It circumscribes how an individual thinks, feels and responds to any circumstance. Mental strain is generally recognised as a social concern, potentially leading to functional impairment at work. Chronic stress may also be linked with several physiological illnesses. The purpose of this research is to examine existing analyses of mental health outcomes to which diverse Deep Learning (DL) and Machine Learning (ML) algorithms have been applied. Applying our exclusion and inclusion criteria, 52 articles were finally selected from the search results obtained from various research databases and repositories. This literature on ML and mental health outcomes offers insight into the avant-garde techniques developed and employed in this domain. The review also compares and contrasts various deep learning techniques for predicting a person's state of mind based on different types of data, such as social media data, clinical data, etc. Finally, the open issues and future challenges of utilising deep learning algorithms to better understand as well as diagnose the mental state of any individual are discussed. From the literature survey, it is evident that the use of ML and DL in mental health has yielded significant achievements, mostly in the areas of diagnosis, therapy, support, research and clinical governance.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43128741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}