Pub Date: 2023-06-05 | DOI: 10.1142/s1469026823500062
Nessrine Elloumi, Aicha Ben Makhlouf, Ayman Afli, B. Louhichi, M. Jaidane, J. M. R. Tavares
Over the last decades, amid rapid technological progress, interest in digital imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), which emerged in the 1970s, has continued to grow. Such medical data can be used in numerous visual recognition applications. In this context, these data may be segmented to generate a precise 3D representation of an organ that can be visualized and manipulated to aid surgeons during surgical interventions. Traditionally, the segmentation process is performed manually using image processing software. Multiple approaches have been elaborated within this framework, but they proved inefficient and required human intervention to select the segmentation area appropriately. Over the last few years, automatic methods based on deep learning have outperformed state-of-the-art segmentation approaches owing to their reliance on Convolutional Neural Networks. In this paper, preoperative patient CT scans are segmented with a deep learning architecture to determine the target organ's shape. The segmented 2D CT images are then used to generate a patient-specific biomechanical 3D model. To assess the efficiency and reliability of the proposed approach, the 3DIRCADb dataset was used. The segmentation results were obtained with a U-Net architecture and show good accuracy.
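The paper reports segmentation accuracy but includes no code. As a minimal illustration of how a predicted organ mask is commonly scored against a ground-truth mask, a Dice coefficient over binary 2D masks might look like this (the function and the sample masks are hypothetical, not from the study):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (nested lists of 0/1)."""
    inter = sum(p & t for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    total = sum(sum(row) for row in pred) + sum(sum(row) for row in truth)
    # Empty masks on both sides count as a perfect match.
    return 1.0 if total == 0 else 2.0 * inter / total

pred  = [[0, 1, 1],
         [0, 1, 0]]
truth = [[0, 1, 0],
         [0, 1, 0]]
print(dice_coefficient(pred, truth))  # 2*2 / (3+2) = 0.8
```

In practice the same score is computed slice by slice over the segmented CT volume before the 3D model is built.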
Title: "CT Images Segmentation Using a Deep Learning-Based Approach for Preoperative Projection of Human Organ Model Using Augmented Reality Technology" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-05-04 | DOI: 10.1142/s1469026823500104
Kai Wang, Congwei Guo, Zhuang Zhao, Yongzhen Ke, Shuai Yang
Group photo images are ubiquitous and vary greatly with the shooting scene. Compared with common images, Image Aesthetic Quality Assessment (IAQI) of group photos pays more attention to the characteristics of the main group of subjects, yet existing methods make no special study of group photos. Therefore, we propose a new concept of group photo styling based on an analysis of group photos and photographic theory. By comparing and analyzing many group photos, we classify them into five categories. In this paper, the main factors of head and pose are considered simultaneously, and the Group Photo Styling Classification (GPSC) method classifies different group photos automatically. To verify the effectiveness of our method, we collected a Group Photo Styling Dataset (GPSD) containing 998 group photo images, each labeled with its styling category. The experimental results on GPSD show that the fusion of head and pose features classifies group photos well: the accuracy of GPSC reaches 93.9%, much higher than previous classification models.
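The abstract describes fusing head features and pose features before classification. One common form of such fusion is early fusion: concatenate the two feature vectors and classify the result. A minimal sketch, with hypothetical feature values and made-up category names (the paper's five categories are not listed in the abstract):

```python
import math

def fuse(head_feat, pose_feat):
    """Early fusion: concatenate head and pose feature vectors."""
    return list(head_feat) + list(pose_feat)

def nearest_centroid(sample, centroids):
    """Return the label of the closest class centroid (a stand-in classifier)."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Hypothetical per-class centroids in the fused feature space.
centroids = {
    "single-row": fuse([0.9, 0.1], [0.2, 0.8]),
    "multi-row":  fuse([0.2, 0.7], [0.9, 0.3]),
}
query = fuse([0.85, 0.15], [0.25, 0.75])
print(nearest_centroid(query, centroids))  # single-row
```

The actual GPSC model learns this decision boundary with a neural network rather than fixed centroids; the sketch only shows the fusion step.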
Title: "Styling Classification of Group Photos Fusing Head and Pose Features" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-04-25 | DOI: 10.1142/s146902682341002x
S. Mercy, M. Jaiganesh, R. Nagaraja, G. Sudha
Cloud computing signifies a novel computing paradigm that supports reactive delivery of resources and services. A typical cloud service deploys a data center over many computing nodes that request services from it. The organization of resources and the trustworthiness of clients are hot research topics in cloud computing, and one of the major threats is unauthorized access to hardware and its resources. To overcome this issue, this work proposes Optimal Resource Trust line prediction using a Genetic Algorithm (GAORTL). The main aim is to find the optimal resource utilization allocated to clients through an evolutionary algorithm. An implementation is evaluated to demonstrate the benefit of the algorithm, and a comprehensive investigation shows that the proposed GAORTL delivers better prediction of trustworthiness across a variety of client sizes for large-scale batches of occurrences.
Title: "Genetic Algorithm-Based Optimal Resource Trust Line Prediction in Cloud Computing" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-04-06 | DOI: 10.1142/s1469026823410067
A. Mergin, Godwin Premi Maria Sebastin
Multi-modality medical image fusion (MMIF) methods are widely used in a variety of clinical settings. For specialists, MMIF can provide an image containing both anatomical and physiological information that helps develop diagnostic procedures. Different MMIF models have been proposed previously; however, the functionality of prior methodologies needs enhancement. In the proposed model, a unique fusion scheme based on optimal thresholding and deep learning is presented. An enhanced monarch butterfly optimization (EMBO) determines an optimal threshold for the fusion rules in the shearlet transform domain. The efficiency of the fusion process depends mainly on the fusion rule, so optimizing the fusion rule can improve fusion quality. The feature extraction element of the deep learning approach is then utilized to fuse high- and low-frequency sub-bands, with the fusion carried out by a convolutional neural network (CNN). The studies were carried out on MRI and CT images. The fusion results show that the proposed model offers effective performance, with reduced error values and improved correlation.
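The abstract mentions fusing high- and low-frequency sub-bands. A classic pair of transform-domain fusion rules, averaging the low-frequency (approximation) coefficients and keeping the larger-magnitude high-frequency (detail) coefficient, can be sketched as below. This is the conventional baseline the paper's CNN- and EMBO-optimized rules improve upon, not the paper's own method:

```python
def fuse_low(a, b):
    """Low-frequency sub-bands: averaging preserves overall intensity/anatomy."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_high(a, b):
    """High-frequency sub-bands: keep the coefficient with larger magnitude,
    i.e. the stronger edge or detail from either modality."""
    return [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Tiny hypothetical 2x2 detail coefficients from CT and MRI sub-bands.
ct_high  = [[0.9, -0.1], [0.0,  0.4]]
mri_high = [[0.2, -0.8], [0.3, -0.1]]
print(fuse_high(ct_high, mri_high))  # [[0.9, -0.8], [0.3, 0.4]]
```

In the paper, the threshold separating "keep" from "blend" decisions is what EMBO optimizes, and the CNN replaces these fixed rules with learned ones.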
Title: "Shearlet Transform-Based Novel Method for Multimodality Medical Image Fusion Using Deep Learning" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-04-06 | DOI: 10.1142/s1469026823410031
S. S. Priya, M. Mohanraj
Flying Ad-hoc Networks (FANETs), which enable ad-hoc networking among Unmanned Aerial Vehicles (UAVs), have recently gained popularity in a variety of military and civilian applications. Existing work used the Glowworm Swarm Optimization (GSO) algorithm to create a self-organizing, clustering-based technique for FANETs. Owing to the increased mobility of UAVs, the network topology may vary over time, making route discovery and maintenance one of the most difficult tasks; network throughput is further worsened by congestion. To solve these problems, the proposed work designs energy-efficient clustering and fuzzy-based path selection for FANETs. Initially, clustering is performed using the inter-UAV distance. For efficient communication and energy consumption, the Cluster Head (CH) is selected optimally with the Adaptive Mutation with Teaching-Learning-Based Optimization (AMTLBO) algorithm. To improve CH selection, the best fitness values are calculated from a fitness function that depends on link capacity, remaining energy, and neighbor UAV distance. Nodes then begin communicating and transmit their information to their CH. Improved Fuzzy-based Routing (IFR) is introduced to improve route discovery; the goal is to find routes with a high level of flying autonomy, minimal mobility, and a higher Received Signal Strength Indicator (RSSI). As a result, the network's energy usage is decreased and the cluster's lifespan is extended. Finally, an adaptive and reliable congestion detection mechanism transmits packets along congestion-free paths. The experimental results show that the proposed AMTLBO system attains higher performance than the existing system in terms of energy usage, throughput, delay, overhead, and packet delivery ratio.
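The abstract states that CH fitness depends on link capacity, remaining energy, and neighbor UAV distance. A plain weighted-sum version of such a fitness, with illustrative weights and node values (the paper's actual weighting inside AMTLBO is not given in the abstract), might look like this:

```python
def ch_fitness(node, w_link=0.4, w_energy=0.4, w_dist=0.2):
    """Weighted CH fitness: higher link capacity and residual energy help;
    a larger mean distance to neighbors hurts. Weights are illustrative."""
    return (w_link * node["link_capacity"]
            + w_energy * node["residual_energy"]
            - w_dist * node["mean_neighbor_dist"])

def elect_cluster_head(nodes):
    """Pick the node maximizing the fitness as Cluster Head."""
    return max(nodes, key=ch_fitness)

# Hypothetical normalized metrics for three UAVs.
uavs = [
    {"id": 1, "link_capacity": 0.6, "residual_energy": 0.9, "mean_neighbor_dist": 0.4},
    {"id": 2, "link_capacity": 0.8, "residual_energy": 0.5, "mean_neighbor_dist": 0.2},
    {"id": 3, "link_capacity": 0.4, "residual_energy": 0.6, "mean_neighbor_dist": 0.9},
]
print(elect_cluster_head(uavs)["id"])  # 1
```

AMTLBO searches over such fitness values instead of enumerating them, which matters once clusters hold many candidate nodes.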
Title: "An Energy-Efficient Clustering and Fuzzy-Based Path Selection for Flying Ad-Hoc Networks" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-04-01 | DOI: 10.1142/s1469026823500049
Mohammed A. Jasim, T. Atia
A wireless video surveillance system (WVSS) is a kind of cyber-physical security system that transmits the signals of IP cameras through a wireless medium using a radio band. WVSSs are widely deployed in strategic places such as city centers, public transportation, public roads, and airports, and play a significant role in critical infrastructure protection. They are vulnerable to jamming attacks that create an unwanted denial of service, so it is essential to secure them against jamming. In this paper, three models of an IoT-fuzzy inference system (FIS)-based jamming detection system are proposed. They detect and counter jamming by computing two detection metrics, PDR and PLR; based on the result, the system counters the attack by storing the video feed locally in the subsystem nodes. The FIS models are based on Mamdani, Tsukamoto, and Sugeno fuzzy logic, which use the jamming detection metrics to detect the attack. The efficiency of the proposed models in detecting jamming signals is compared. The experimental results show that the proposed Tsukamoto model detects jamming attacks with high accuracy and efficiency. Finally, the proposed IoT-Tsukamoto-based model was compared with existing systems and proved superior in terms of central processing complexity, accuracy, and attack countermeasures.
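To make the inference step concrete, here is a minimal zero-order Sugeno-style sketch that maps the two metrics (taking PDR as packet delivery ratio and PLR as packet loss ratio, both in [0, 1]) to a jamming-risk score. The membership breakpoints and the two rules are illustrative, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def jamming_risk(pdr, plr):
    """Weighted-average inference over two illustrative rules:
    low PDR AND high PLR -> jammed (1.0); high PDR AND low PLR -> normal (0.0)."""
    low_pdr  = tri(pdr, -0.01, 0.0, 0.6)
    high_pdr = tri(pdr, 0.4, 1.0, 1.01)
    low_plr  = tri(plr, -0.01, 0.0, 0.6)
    high_plr = tri(plr, 0.4, 1.0, 1.01)
    rules = [(min(low_pdr, high_plr), 1.0),
             (min(high_pdr, low_plr), 0.0)]
    den = sum(w for w, _ in rules)
    return 0.5 if den == 0 else sum(w * out for w, out in rules) / den

print(jamming_risk(0.15, 0.85))  # high risk: delivery collapsed, loss heavy
print(jamming_risk(0.95, 0.05))  # low risk: healthy link
```

A Tsukamoto model would replace the crisp consequents with monotonic membership functions, and a Mamdani model would aggregate and defuzzify output sets; the rule-firing structure is the same.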
Title: "An IoT-Fuzzy-Based Jamming Detection and Recovery System in Wireless Video Surveillance System" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-04-01 | DOI: 10.1142/s1469026823410092
K. Amrutha, P. Prabu
Sign Language is the natural language used by the hearing-impaired community. Since it is used by a comparatively small part of society, it is necessary to convert it into a commonly understandable form. Automatic Sign Language interpreters can convert signs into text or audio by interpreting hand movements and the corresponding facial expressions; these two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language, emotions are expressed using non-manual movements that include body posture and facial muscle movements. Each such subtle movement should be treated as a feature and extracted using a suitable model. This paper proposes three different models for varying levels of sign language: the first test was carried out using Convex Hull-based Sign Language Recognition (SLR) for fingerspelling, the next using a Convolutional Neural Network-based model (CNN-SLR) for fingerspelling, and finally a pose-based SLR for word-level sign language. The experiments show that the pose-based SLR model, which captures features using landmarks or key points, achieves better SLR accuracy than the Convex Hull and CNN-based SLR models.
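A practical reason pose-based SLR works well is that raw landmark coordinates must first be made invariant to where the signer stands and how far they are from the camera. A standard normalization (center on the keypoint centroid, divide by the spread) is sketched below; the keypoint values are hypothetical and this is a common preprocessing step, not the paper's specific pipeline:

```python
import math

def normalize_keypoints(points):
    """Translate keypoints to their centroid and scale to unit spread,
    making pose features invariant to signer position and distance."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    scale = math.sqrt(sum(x * x + y * y for x, y in centered) / n)
    if scale == 0:
        return centered
    return [(x / scale, y / scale) for x, y in centered]

pose = [(120, 40), (140, 40), (130, 80)]
# Same pose, but the signer moved and stepped closer to the camera.
shifted_and_zoomed = [(2 * x + 50, 2 * y - 10) for x, y in pose]
a = normalize_keypoints(pose)
b = normalize_keypoints(shifted_and_zoomed)
same = all(abs(p - q) < 1e-9 for u, v in zip(a, b) for p, q in zip(u, v))
print(same)  # True
```

After normalization, the landmark vectors can be fed directly to a classifier, which is what gives pose-based models their robustness over pixel-based CNN input.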
Title: "Evaluating the Pertinence of Pose Estimation model for Sign Language Translation" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-03-31 | DOI: 10.1142/s1469026823500025
B. Lavanya, C. Shanthi
In recent years, internet services have become more accessible through the development of various application program interfaces (APIs). An accessed HTTP uniform resource locator (URL) may contain malicious software intended by an attacker to create security breaches through the APIs of various internet services. By default, such an inattentively accessed URL downloads and installs malware in the background without the user's knowledge, and the host does not analyze the API-URL security certificate contract for the features the user accesses. Current Machine Learning (ML) techniques only check malware signatures and certificates rather than analyzing URL behaviour based on the impact of a URL accessed from the internet. To address this problem, we propose a novel malicious-software detection method based on URL-API intensity feature selection (IFS) and deep spectral neural classification (DSNC) for improving host security. Initially, the URL's successive certificate signing (SCS) of the user link accessibility is verified based on API download rate logs; in this way the system identifies malicious software. The Link Redirection Stability Rate (LRSR) is estimated from the redirection URL by accessing both the direct link and the redirect link, and a domain transformation matrix (DTM) is created to form a pattern for accessing successive features. URL-API intensity feature selection selects each estimated feature, and the selected features feed a soft-max logical activation with a recurrent neural network (RNN) optimized through deep learning. The RNN is trained in the spectral domain to improve computation and efficiency, and it predicts the class based on the risk of malicious weight. The proposed IFS-DSNC achieves an accuracy of 95.6%, higher than other algorithms such as KNN, NB, CNN, LCS, GCRNC, and AGSCR. The experimental results show that the proposed method finds malware better than existing approaches, thereby improving security against host breaching.
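The core pipeline idea, score per-URL features by some intensity measure, keep only the strongest, then classify the reduced vectors, can be sketched generically. Here variance stands in for the paper's unspecified IFS score, and the feature columns are hypothetical:

```python
def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def select_top_k(samples, k):
    """Score each feature column by variance (a stand-in 'intensity' score,
    not the paper's IFS formula) and keep the indices of the top-k columns."""
    cols = list(zip(*samples))
    ranked = sorted(((variance(col), i) for i, col in enumerate(cols)),
                    reverse=True)
    keep = sorted(i for _, i in ranked[:k])
    return keep, [[row[i] for i in keep] for row in samples]

# Rows: hypothetical per-URL features
# (download rate, redirect count, cert age in days, constant dummy).
urls = [[0.9, 4, 10, 1.0],
        [0.1, 0, 400, 1.0],
        [0.8, 5, 30, 1.0]]
keep, reduced = select_top_k(urls, 2)
print(keep)  # [1, 2]: the constant column (index 3) carries no signal
```

The reduced vectors would then be the input to the RNN classifier; dropping uninformative columns is what makes the downstream model cheaper to train.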
Title: "Malicious Software Detection based on URL-API Intensity Feature Selection Using Deep Spectral Neural Classification for Improving Host Security" (Int. J. Comput. Intell. Appl.)
Pub Date: 2023-03-31 | DOI: 10.1142/s1469026823410043
R. Lavanya, N. Shanmugapriya
Wireless Sensor Networks (WSNs) are made up of multiple resource-restricted wireless sensor nodes that gather, process, and transmit information. Existing research proposed an energy-competent, trust- and Quality of Service (QoS)-aware multipath routing protocol for improving network lifetime and other QoS parameters, along with selection criteria for multipaths. However, this protocol has limitations such as scalability, data redundancy, bandwidth utilization, and network traffic; the most important challenge lies in managing the voluminous data produced by the network's sensors. This study presents Intelligent Data Fusion Techniques (IDFTs), which can greatly minimize redundant data, decrease the quantity of transmitted data, broaden the network life cycle, and enhance bandwidth utilization, thereby resolving the energy and bandwidth usage bottleneck. The paper proposes Improved Whale Optimization Algorithms (IWOAs) for intelligent data fusion, in which the amount of data collected from sensor sources is reduced while the information offered is enhanced by eliminating duplicate data, which also increases data dependability. IWOAs combine the actual information from the cluster's sensor nodes at the sink node, yielding richer information and the ability to make local judgments about particular events. The sink node regularly transmits local decisions to the base station, which combines them and provides the ultimate judgment, easing the pressure on the base station to evaluate all of the data. As per the results obtained, the proposed intelligent data fusion method significantly increases the network's robustness and accuracy.
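The redundancy-reduction step at the sink can be illustrated with a simple fusion rule: collapse near-duplicate readings into one averaged value before anything is forwarded to the base station. The tolerance and sample temperatures below are hypothetical, and this greedy merge is a baseline sketch rather than the paper's IWOA-driven fusion:

```python
def fuse_readings(readings, tol=0.5):
    """Collapse near-duplicate sensor readings at the sink: a value within
    `tol` of the current cluster's running mean is merged into that cluster."""
    fused = []  # each cluster tracked as [running_sum, count]
    for r in sorted(readings):
        if fused and r - fused[-1][0] / fused[-1][1] <= tol:
            fused[-1][0] += r
            fused[-1][1] += 1
        else:
            fused.append([r, 1])
    return [s / c for s, c in fused]

# Six raw temperature readings from one cluster's nodes (hypothetical).
raw = [21.1, 21.2, 21.0, 29.8, 21.3, 30.1]
print(fuse_readings(raw))  # two fused values forwarded instead of six readings
```

Transmitting two fused values instead of six raw ones is exactly the bandwidth and energy saving the abstract attributes to data fusion; the whale-optimization layer in the paper tunes how such merging decisions are made.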
{"title":"An Intelligent Data Fusion Technique for Improving the Data Transmission Rate in Wireless Sensor Networks","authors":"R. Lavanya, N. Shanmugapriya","doi":"10.1142/s1469026823410043","DOIUrl":"https://doi.org/10.1142/s1469026823410043","url":null,"abstract":"Wireless Sensor Networks (WSNs) are made up of multiple resource-restricted wireless sensor nodes that gather, process, and transmit information. Existing research proposed an energy-competent, trust-aware Quality of Service (QoS) multipath routing protocol, together with multipath selection criteria, for improving network lifetime and other QoS parameters. However, this protocol has limitations in scalability, data redundancy, bandwidth utilization, and network traffic. The most important challenge lies in managing the voluminous data produced by the network’s sensors. In this study, Intelligent Data Fusion Techniques (IDFTs) are presented, which can greatly minimize redundant data, decrease the quantity of transmitted data, extend the network life cycle, and enhance bandwidth utilization, thereby resolving the energy and bandwidth usage bottleneck. This paper proposes Improved Whale Optimization Algorithms (IWOAs) for intelligent data fusion, in which the amount of data collected from sensor sources is reduced and the information offered is enhanced by eliminating duplicate data, which also increases data dependability. IWOAs combine the actual information from the cluster’s sensor nodes at the sink node, yielding richer information and the ability to make local judgments about particular events. The sink node regularly transmits local decisions to the base station, which combines them and provides the ultimate judgment, easing the pressure on the base station to evaluate all of the data. 
As per the results obtained, the proposed intelligent data fusion method significantly increases the network’s robustness and accuracy.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124974976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-31DOI: 10.1142/s1469026823410018
T. Saroja, Y. Kalpana
Chronic Kidney Disease (CKD) is a universal issue for people’s well-being, as it results in morbidity and death with the onset of additional diseases. Because there are no clear early symptoms of CKD, people frequently miss it. Timely identification of CKD allows individuals to acquire proper medication to prevent the disease’s development. Machine learning techniques (MLTs) can strongly assist doctors in achieving this aim due to their rapid and precise determination capabilities. Many MLTs encounter inappropriate features in most databases that might lower the classifier’s performance. Missing values are filled using K-Nearest Neighbors (KNN). The Adaptive Weight Dynamic Butterfly Optimization Algorithm (AWDBOA) is a nature-inspired feature selection (FS) technique with good exploration, exploitation, and convergence that does not get trapped in local optima. Operators used in Local Search Algorithm-Based Mutation (LSAM) and the Butterfly Optimization Algorithm (BOA), which use diversity and the generation of adaptive weights for features to enhance FS, are modified in this work. Simultaneously, an adaptive weight value is added for FS from the database. Following feature identification, six MLTs are used in the classification tasks, namely Logistic Regression (LOG), Random Forest (RF), Support Vector Machine (SVM), KNN, Naive Bayes (NB), and Feed-Forward Neural Network (FFNN). The CKD databases were retrieved from the machine learning repository of UCI (University of California, Irvine). Precision, Recall, F1-Score, Sensitivity, Specificity, and Accuracy are compared to assess this work’s classification framework against existing approaches.
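The KNN missing-value imputation step mentioned above can be sketched in a few lines. This is a hedged, pure-Python stand-in for a library imputer such as scikit-learn’s KNNImputer; the function name, the choice of k, and the toy data are assumptions, not details from the paper.

```python
# Hedged sketch of KNN missing-value imputation: a None entry is replaced
# by the mean of that column over the k nearest complete rows, with
# distance computed only on the columns the two rows both have.
import math

def knn_impute(rows, k=2):
    complete = [r for r in rows if None not in r]
    filled = []
    for row in rows:
        if None not in row:
            filled.append(list(row))
            continue
        def dist(other):
            # Euclidean distance over the row's non-missing columns only.
            return math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(row, other) if a is not None))
        nearest = sorted(complete, key=dist)[:k]
        filled.append([v if v is not None else
                       sum(n[i] for n in nearest) / k
                       for i, v in enumerate(row)])
    return filled

data = [[1.0, 2.0], [1.2, 2.1], [9.0, 9.5], [1.1, None]]
print(knn_impute(data))  # last row's None filled from its two nearest rows
```

Filling gaps this way, before feature selection and classification, keeps rows with sparse measurements usable instead of discarding them.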
{"title":"Adaptive Weight Dynamic Butterfly Optimization Algorithm (ADBOA)-Based Feature Selection and Classifier for Chronic Kidney Disease (CKD) Diagnosis","authors":"T. Saroja, Y. Kalpana","doi":"10.1142/s1469026823410018","DOIUrl":"https://doi.org/10.1142/s1469026823410018","url":null,"abstract":"Chronic Kidney Disease (CKD) is a universal issue for people’s well-being, as it results in morbidity and death with the onset of additional diseases. Because there are no clear early symptoms of CKD, people frequently miss it. Timely identification of CKD allows individuals to acquire proper medication to prevent the disease’s development. Machine learning techniques (MLTs) can strongly assist doctors in achieving this aim due to their rapid and precise determination capabilities. Many MLTs encounter inappropriate features in most databases that might lower the classifier’s performance. Missing values are filled using K-Nearest Neighbors (KNN). The Adaptive Weight Dynamic Butterfly Optimization Algorithm (AWDBOA) is a nature-inspired feature selection (FS) technique with good exploration, exploitation, and convergence that does not get trapped in local optima. Operators used in Local Search Algorithm-Based Mutation (LSAM) and the Butterfly Optimization Algorithm (BOA), which use diversity and the generation of adaptive weights for features to enhance FS, are modified in this work. Simultaneously, an adaptive weight value is added for FS from the database. Following feature identification, six MLTs are used in the classification tasks, namely Logistic Regression (LOG), Random Forest (RF), Support Vector Machine (SVM), KNN, Naive Bayes (NB), and Feed-Forward Neural Network (FFNN). The CKD databases were retrieved from the machine learning repository of UCI (University of California, Irvine). 
Precision, Recall, F1-Score, Sensitivity, Specificity, and accuracy are compared to assess this work’s classification framework with existing approaches.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126402408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}