Pub Date: 2022-02-24 | DOI: 10.1142/s0219265921420184
Shanshan Yin, Liqiong Xu, Weihua Yang
An interconnection network is usually modeled by a connected graph in which vertices represent processors and edges represent links between processors. Connectivity is an important parameter for evaluating the fault tolerance of interconnection networks. A connected graph [Formula: see text] is maximally local-(edge-)connected if each pair of vertices [Formula: see text] of [Formula: see text] is connected by min[Formula: see text] pairwise (edge-)disjoint paths between [Formula: see text] and [Formula: see text] in [Formula: see text]. A graph [Formula: see text] is called [Formula: see text]-fault-tolerant maximally local-(edge-)connected if [Formula: see text] is maximally local-(edge-)connected for any [Formula: see text] ([Formula: see text]) with [Formula: see text]. A graph [Formula: see text] is called [Formula: see text]-fault-tolerant maximally local-(edge-)connected of order [Formula: see text] if [Formula: see text] is maximally local-(edge-)connected for any [Formula: see text] with [Formula: see text], where [Formula: see text] is a conditional faulty vertex (edge) set of order [Formula: see text]. In this paper, we obtain a sufficient condition for connected graphs to be [Formula: see text]-edge-fault-tolerant maximally local-edge-connected. Moreover, we give a sufficient condition for connected graphs to be [Formula: see text]-fault-tolerant maximally local-(edge-)connected of order [Formula: see text]. Some previous results in [Theor. Comput. Sci. 731 (2018) 50–67] and [Theor. Comput. Sci. 847 (2020) 39–48] are extended.
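The local edge connectivity in the definition above can be checked directly: by Menger's theorem, the maximum number of pairwise edge-disjoint u–v paths equals a unit-capacity maximum flow. The sketch below (an illustration, not the paper's method; the graph and function names are made up) verifies that the 4-cycle is maximally local-edge-connected, since every pair of vertices is joined by min(deg u, deg v) = 2 edge-disjoint paths.

```python
from collections import deque

def edge_disjoint_paths(adj, s, t):
    """Count pairwise edge-disjoint s-t paths via unit-capacity max flow
    (Edmonds-Karp with BFS augmenting paths)."""
    # residual capacities: an undirected edge contributes 1 in each direction
    cap = {(u, v): 1 for u, nbrs in adj.items() for v in nbrs}
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # augment by one unit along the path found
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
            v = u
        flow += 1

# 4-cycle C4: every vertex has degree 2, and vertices 0 and 2 are joined
# by the two edge-disjoint paths 0-1-2 and 0-3-2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert edge_disjoint_paths(adj, 0, 2) == min(len(adj[0]), len(adj[2]))  # 2
```

For large graphs one would reach for a library routine instead, but the unit-capacity flow above is exactly what "min[Formula: see text] pairwise edge-disjoint paths" measures.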
Title: The Sufficient Conditions of (δ(G) - 2)-(|F|-)Fault-Tolerant Maximal Local-(Edge-)Connectivity of Connected Graphs
Journal: J. Interconnect. Networks
Pub Date: 2022-02-16 | DOI: 10.1142/s0219265921460130
Xiujun Zhai, A. Rajaram, K. Ramesh
The number of autistic children and young people is rising rapidly across the world. Children with intellectual disabilities need special attention from trained experts, and educating them on improving their lifestyle through the traditional teaching-learning environment is challenging. This study introduces an interactive educational framework that helps children with special needs have an improved and exciting learning process, and explores the need to incorporate physical exercise into their everyday lives. Virtual Reality (VR) attracts strong attention from autistic students. This research presents a Machine Learning-based Virtual Reality Application (ML-VRA) for mentally challenged children and keeps Human Behavior Analysis log files. Machine learning is used to predict ability scores from brain data. Visual short-term memory and visual-spatial memory are further assessed to identify students' interaction with the VR application. A Support Vector Regression prediction algorithm and a Baseline Prediction algorithm are used for score prediction on visual short-term memory and visual-spatial memory. Using audio technology that allows autistic persons to hear various sounds, the cognitive VRA method instructs autistic children. Further, this study proposes a cognitive model for intellectual task processes and problem-solving using a metacognitive architecture. Thus, children can acquire different levels of learning knowledge and skills. The case study performed on this model achieves the highest prediction accuracy of 93.65%.
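Of the two predictors the abstract names, the baseline is the simpler reference point: it ignores the input features and always predicts the mean of the training scores, so any learned model (e.g. an SVR from a library such as scikit-learn) must beat it to be useful. A minimal sketch, with made-up memory scores:

```python
def baseline_predict(train_scores):
    """Baseline predictor: always returns the mean of the training scores."""
    return sum(train_scores) / len(train_scores)

def mae(preds, truth):
    """Mean absolute error between predictions and ground truth."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

# hypothetical visual short-term memory scores from VR sessions
train = [62.0, 70.0, 68.0, 60.0]
test = [66.0, 64.0]
pred = baseline_predict(train)        # mean of train = 65.0
error = mae([pred, pred], test)       # average deviation = 1.0
assert pred == 65.0
assert error == 1.0
```

A regression model's MAE is then reported relative to this baseline error.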
Title: Cognitive Model for Human Behavior Analysis
Pub Date: 2022-02-14 | DOI: 10.1142/s0219265921430441
Jie Ding, G. Zhao, Jinyong Huang
In order to improve the retrieval ability of super-resolution multi-space block images, a texture retrieval method for block images based on deep hashing is proposed. A texture feature analysis model of super-resolution multi-dimensional partitioned images is constructed. It combines a texture spatial-structure mapping method to realize depth information fusion of partitioned images, adopts edge feature detection and texture sparse feature clustering to realize hierarchical texture feature decomposition of the images, and adopts a deep image parameter analysis method to construct a pixel structure recombination model of multi-dimensional partitioned images. Multi-dimensional texture parameter structure analysis and information clustering are performed on the collected partitioned images in multi-dimensional space. According to the information clustering results, texture retrieval and extraction of partitioned images are realized using a deep hash fusion algorithm, improving the information detection and feature recognition capabilities of partitioned images in multi-dimensional space. Simulation results show that this method achieves higher precision and better feature resolution in texture retrieval of partitioned images in multi-dimensional space, improving the texture retrieval and recognition ability of partitioned images.
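The retrieval step of any deep-hashing pipeline reduces to ranking stored binary codes by Hamming distance to the query code. A minimal sketch of that step (the 8-bit codes and image ids below are invented; producing the codes themselves would require the trained hashing network, which is not shown):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary hash codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, index, k=2):
    """Return the k image ids whose hash codes are closest to the query."""
    ranked = sorted(index, key=lambda item: hamming(query_code, item[1]))
    return [img_id for img_id, _ in ranked[:k]]

# hypothetical 8-bit codes produced by a (not shown) deep hashing network
index = [
    ("block_a", "10110010"),
    ("block_b", "10110011"),   # 1 bit away from block_a
    ("block_c", "01001101"),   # far from block_a
]
assert retrieve("10110010", index, k=2) == ["block_a", "block_b"]
```

In practice the codes are packed into integers and compared with XOR + popcount, but the ranking logic is the same.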
Title: Research on Block Image Texture Retrieval Method Based on Depth Hash
Pub Date: 2022-02-14 | DOI: 10.1142/s0219265921410322
A. G. Nagesha, G. Mahesh, Gowrishankar
The fifth-generation (5G) technology is anticipated to permit connectivity to billions of devices, called the Internet of Things (IoT). The primary benefit of 5G is that it has maximum bandwidth and can drastically expand service beyond cell phones to standard internet service for conventionally fixed connectivity to homes, offices, factories, etc. But IoT devices will unavoidably be the primary target of diverse kinds of cyberattacks, notably distributed denial of service (DDoS) attacks. Since conventional DDoS mitigation techniques are ineffective for 5G networks, machine learning (ML) approaches are found helpful for accomplishing better security. With this motivation, this study addresses the network security issues posed by network devices in 5G networks and mitigates the harmful effects of DDoS attacks. This paper presents a new pigeon-inspired optimization-based feature selection with optimal functional link neural network (FLNN), the PIOFS-OFLNN model, for mitigating DDoS attacks in the 5G environment. The proposed PIOFS-OFLNN model aims to detect DDoS attacks through feature selection and classification processes, incorporating pre-processing, feature selection, classification, and parameter tuning. The PIOFS algorithm is employed to choose an optimal subset of features from the pre-processed data. Besides, the OFLNN-based classification model is applied to detect DDoS attacks, where Rat Swarm Optimizer (RSO) parameter tuning takes place to optimally adjust the parameters of the FLNN model. The FLNN is a neural network with low computational complexity and higher cognitive interconnectivity: it has no hidden layers, and its input vector is functionally expanded to produce non-linear decision boundaries. More details can be found in the application of a nature-inspired method to Odia handwritten numeral recognition. To validate the improved DDoS detection performance of the proposed model, a benchmark dataset is used.
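The defining trick of an FLNN is the functional expansion the abstract mentions: each input feature is mapped through a set of basis functions (a trigonometric expansion is a common choice), and a single linear layer operates on the expanded vector, so no hidden layers are needed. A minimal sketch under that assumption, with invented weights and a made-up two-feature flow record:

```python
import math

def functional_expansion(x):
    """Trigonometric functional link: expand each feature xi into
    (xi, sin(pi*xi), cos(pi*xi)) so a linear layer can fit non-linear
    patterns."""
    expanded = []
    for xi in x:
        expanded += [xi, math.sin(math.pi * xi), math.cos(math.pi * xi)]
    return expanded

def flnn_output(x, weights, bias):
    """FLNN forward pass: one linear combination of the expanded
    features -- there are no hidden layers."""
    z = functional_expansion(x)
    return sum(w * zi for w, zi in zip(weights, z)) + bias

# hypothetical 2-feature flow record; 2 features expand to 6
features = [0.5, 0.0]
assert len(functional_expansion(features)) == 6
score = flnn_output(features, weights=[0.1] * 6, bias=0.0)
assert isinstance(score, float)
```

In the paper's setting the weights would be tuned by the Rat Swarm Optimizer rather than fixed by hand.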
Title: Intelligent Feature Subset Selection with Machine Learning Based Detection and Mitigation of DDoS Attacks in 5G Environment
Pub Date: 2022-02-14 | DOI: 10.1142/s0219265921440242
M. V. Rao, Divya Midhunchakkaravarthy, Sujatha Dandu
A worm is a standalone program, a self-replicating form of malware that distributes itself to other computers and networks. An Internet worm can spread across the network and infect millions of computers in very little time, and the damage caused by such attacks is considered extremely high. These worms also affect network packets and network performance, where the packets are analyzed by a signature-based intrusion detection system (IDS) and the network performance is analyzed by a NetFlow-based IDS. Hence, this article proposes joint detection of Internet worms from both signature and NetFlow evidence using a deep learning convolution neural network (DLCNN), covering various attacks; it can also prevent the suspicious actions of attackers (cyber-criminals). Additionally, it provides security for users' data maintenance, offers countermeasures, and controls the spread of Internet worms. The effectiveness of the proposed DLCNN model is evaluated using both packet capture (PCAP) and KDD-CUP-99 datasets. Finally, various quality metrics are employed to demonstrate the superiority of the proposed DLCNN model compared to existing machine learning and back-propagation neural network models.
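The "joint" idea is that the two IDS views give complementary signals: signature matching inspects payload bytes, while flow statistics flag scanning behaviour even when the payload is novel. The sketch below shows only how the two signals combine; it is not the paper's DLCNN (which learns these decisions), and the signatures and thresholds are invented for illustration:

```python
# hypothetical worm payload signatures (byte patterns)
SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]

def signature_match(payload):
    """Signature-based check: does the payload contain a known worm pattern?"""
    return any(sig in payload for sig in SIGNATURES)

def flow_anomalous(pkts_per_sec, distinct_dsts, pkt_thresh=1000, dst_thresh=100):
    """NetFlow-style check: scanning worms contact many distinct hosts fast."""
    return pkts_per_sec > pkt_thresh and distinct_dsts > dst_thresh

def joint_detect(payload, pkts_per_sec, distinct_dsts):
    """Flag traffic if either the payload or the flow behaviour looks wormy."""
    return signature_match(payload) or flow_anomalous(pkts_per_sec, distinct_dsts)

assert joint_detect(b"GET / HTTP/1.1", 10, 3) is False        # benign
assert joint_detect(b"\x90\x90\x90\x90 shell", 10, 3) is True  # payload hit
assert joint_detect(b"benign", 5000, 500) is True              # flow anomaly
```

A learned model like the DLCNN replaces both hand-written rules with features extracted from PCAP and flow records.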
Title: Deep Learning CNN Framework for Detection and Classification of Internet Worms
Pub Date: 2022-02-14 | DOI: 10.1142/s0219265921450122
Yi Jiang
In order to improve the effect of engineering cost accounting, this paper applies Internet of Things (IoT) technology to the engineering cost accounting system and combines a chaotic data processing method with IoT technology. Moreover, this paper uses IoT technology to construct the engineering cost accounting system, collecting various data from the project in real time and building the system according to the actual situation. In addition, this paper incorporates improved intelligent algorithms so that the system can collect, manage, process, transmit, and output data. The research results show that the project cost evaluation system constructed in this paper is rated above "good", higher than existing project cost evaluation methods. The experimental research shows that the project cost accounting system based on IoT technology proposed in this paper has a good engineering data processing effect.
Title: Project Cost Accounting Based on Internet of Things Technology
Pub Date: 2022-02-14 | DOI: 10.1142/s0219265921410334
S. Manjunatha, L. Suresh
Cloud computing is an emerging trend that provides a variety of applications to promote corporate business across the globe over the internet. Cloud computing offers services to deploy infrastructure in a specified environment, and different computing techniques are used to manage cloud services. One of the most eminent techniques is Virtual Machine (VM) Migration, which enables moving compute resources and storage from one host to another without detaching the application or client. Virtual Machine Migration is helpful in minimizing energy dissipation, load balancing, and fault management. It is characterized by migration time and downtime, with live and non-live categorization. Live VM migration in data hubs has the potential to minimize energy consumption. The proposed Optimized Energy Model Virtual Machine Algorithm calculates the energy of each host in the data hub; when the energy consumed by the system each hour increases exponentially, the algorithm reorders the nodes and minimizes the energy after reordering.
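The per-host energy calculation and node reordering described above can be sketched with the common linear power model (power grows linearly from an idle floor to a full-load ceiling with CPU utilisation). The model, host list, and power figures below are illustrative assumptions, not the paper's algorithm:

```python
def host_energy(util, p_idle=100.0, p_max=250.0):
    """Linear power model: idle power plus utilisation-scaled dynamic power
    (watts). util is CPU utilisation in [0, 1]."""
    return p_idle + (p_max - p_idle) * util

def reorder_hosts(hosts):
    """Reorder hosts by current power draw, cheapest first, so migration
    can target the least-loaded hosts."""
    return sorted(hosts, key=lambda h: host_energy(h[1]))

# hypothetical (host_id, cpu utilisation) pairs in one data hub
hosts = [("h1", 0.9), ("h2", 0.2), ("h3", 0.5)]
order = reorder_hosts(hosts)       # energies: h1=235 W, h2=130 W, h3=175 W
assert [h for h, _ in order] == ["h2", "h3", "h1"]
```

A full migration policy would then move VMs toward the front of this ordering and power down emptied hosts.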
Title: Improving Energy Efficiency Using Optimized Energy Model Virtual Machine Algorithm in Cloud Computing
Pub Date: 2022-02-10 | DOI: 10.1142/s0219265921470204
Yuanjing Zhao, R. D. J. Samuel, Adhiyaman Manickam
The professional painting industry has experienced a dramatic breakthrough with the rapid expansion of computer science and technology. In the current digital era, digital painting art is extending to a more significant creative space and adding new content. Digital painting is the modern trend of mainstream painting presented to the public as a new generation of visual art. Creativity may show up, and new techniques of creating art can arise endlessly with the assistance of computer intelligence technology. This article explains how computer image processing is used in the production of art. It offers a painting technique based on Image Rendering (IR) that does not rely on prior human expertise: a color image is automatically turned into a photo with a painting effect. Image-based rendering is a novel approach in which computer graphics and picture processing are combined without the requirement to build geometric models, obtaining information from the input image simply by interpolating views, deforming images, and reconstructing the desired result. This article proposes the indirect use of picture processing and computer technology to produce oil painting, and investigates the application of contemporary digital picture technology not only to maintain traditional tastes but also to keep pace with the times and optimize tradition.
Title: Research on the Application of Computer Image Processing Technology in Painting Creation
Pub Date: 2022-02-10 | DOI: 10.1142/s0219265921410176
Xie Yong, Qingliang Zhang, Ravindra Luhach, Muhammed Alshehri
Bench press training seems to be the most common exercise for increasing upper-body strength and control among athletes, fitness enthusiasts, and wellness buffs. Bench presses are usually done lying down on the bench with the back, shoulders, and buttocks in contact with it. Bench press training surveillance systems with technical advancements are rarely seen in the research domain. Therefore, this paper presents a novel bench press training monitoring method (BPTMM) that evaluates mechanical joint rotational friction using Internet of Things (IoT) sensors. The bench press is a common upper-body strength-building and muscle-building conditioning exercise; together with the squat and deadlift, it is one of the three primary lifts performed in powerlifting competitions. Artificial intelligence aids in risk prediction and suggests possible positions. The experiments show a 96.8% accuracy rate in surveillance and categorization.
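One plausible way to estimate joint rotational friction from IoT sensor streams, under a rigid-body assumption, is to attribute whatever applied torque does not produce angular acceleration to friction: τ_friction = τ_applied − I·α. This sketch is an illustrative assumption, not the paper's BPTMM method, and all readings below are invented:

```python
def friction_torque(applied_torque, inertia, angular_accel):
    """Estimate rotational friction torque at a joint (N*m) from sensor
    readings: applied torque minus the torque consumed by angular
    acceleration (rigid-body assumption, tau_f = tau - I * alpha)."""
    return applied_torque - inertia * angular_accel

# hypothetical readings for one elbow joint during a press repetition:
# 12 N*m applied, moment of inertia 0.8 kg*m^2, acceleration 10 rad/s^2
tau_f = friction_torque(applied_torque=12.0, inertia=0.8, angular_accel=10.0)
assert abs(tau_f - 4.0) < 1e-9   # 12 - 0.8*10 = 4 N*m lost to friction
```

In a deployed monitor, torque and acceleration would come from strain gauges and IMUs sampled over the whole repetition, with the estimate smoothed over time.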
Title: Research on Monitoring Method of Mechanical Joint Rotational Friction in Bench Press Training
Pub Date : 2022-02-10DOI: 10.1142/s0219265921480030
Lei Cheng, Lin Lu, J. Bhola, Ahmed Mateen Butter
In the process of IC testing, test time is long and test efficiency suffers; moreover, with the increasing complexity of integrated circuits, fault diagnosis has become difficult, so test strategies need to be upgraded. To address this, a hierarchical dynamic adjustment method for the IC test flow is proposed. Based on a fault probability model, the order of test types and test vectors is adjusted: high-quality test types and test vectors are loaded first so that faulty circuits are hit earlier, improving test efficiency. A Bayesian probability model is established by counting the failure rate of each test type and each test vector on sample integrated circuits, and the loading sequence of test vectors is adjusted according to the probability of hitting a fault point. As the test progresses, test data are collected continuously, the failure rates of test types and test vectors are dynamically updated, and the loading sequence is adjusted synchronously. Experiments show that the dynamic adjustment method reduces the final circuit test time to 32.172 s, a reduction of 53.9%. Dynamically adjusting the test flow finds faulty circuits earlier, significantly reduces the test time for faulty circuits, and improves test efficiency.
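The core loop described above — count failures per test vector, estimate each vector's probability of hitting a fault, and reorder the loading sequence as new results arrive — can be sketched as follows. This is a minimal illustration assuming a Laplace-smoothed frequency estimate; the class and method names are hypothetical, and the paper's actual Bayesian model may differ.

```python
# Sketch of dynamic test-vector reordering (assumed model, not the paper's
# exact one): track per-vector failure counts, estimate fault-hit
# probability with Laplace smoothing, and load the likeliest hitters first.

class DynamicTestScheduler:
    def __init__(self, vectors):
        self.fails = {v: 0 for v in vectors}  # observed failures per vector
        self.runs = {v: 0 for v in vectors}   # observed runs per vector

    def record(self, vector, failed):
        """Update counts after applying one test vector."""
        self.runs[vector] += 1
        if failed:
            self.fails[vector] += 1

    def hit_probability(self, vector):
        # Laplace smoothing: (fails + 1) / (runs + 2) keeps unseen
        # vectors at 0.5 so they are tried before known-passing ones.
        return (self.fails[vector] + 1) / (self.runs[vector] + 2)

    def order(self):
        """Loading sequence: highest estimated fault-hit probability first."""
        return sorted(self.fails, key=self.hit_probability, reverse=True)
```

Because `order()` is recomputed from the running counts, the loading sequence updates synchronously with incoming test data, which is the mechanism the abstract credits for hitting faulty circuits earlier.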
{"title":"Study on Hierarchical Dynamic Adjustment of Integrated Circuit Flow Based on Nonlinear Detection","authors":"Lei Cheng, Lin Lu, J. Bhola, Ahmed Mateen Butter","doi":"10.1142/s0219265921480030","DOIUrl":"https://doi.org/10.1142/s0219265921480030","url":null,"abstract":"In order to solve the problem that the test time is long and the test efficiency is affected in the process of IC test. With the increase in the complexity of integrated circuits, it is difficult now to diagnose the faults. To overcome this situation, there is a need to upgrade the test strategies. Based on the fault probability model, the order of test types and test vector is being adjusted. To improve the test efficiency, the high-quality test types and test vectors are loaded first, and the fault circuits are hit earlier. A hierarchical dynamic method for IC test flow is proposed. The Bayesian probability model was established by counting the failure rates of each test type and each test vector in the sample integrated circuit, and the loading sequence of each test vector was adjusted according to the probability of hitting the fault point. As the test progresses, the test data are collected constantly, the test failure rates of test type and test vector are dynamically updated, and the loading sequence of test type and test vector is adjusted synchronously. It is proved that the final circuit test time is reduced to 32.172s by the dynamic adjustment method, and the test time is reduced by 53.9%. The use of dynamically adjusted test process can find the fault circuit earlier, significantly reduce the test time of the fault circuit, and improve the test efficiency.","PeriodicalId":153590,"journal":{"name":"J. Interconnect. 
Networks","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127650525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}