Abstract The visual elements of product packaging can further enhance its appeal to customers. This article briefly introduces visual elements and packaging design and presents a case study of the gift packaging designed by Squirrel Design Studio. In the case study, the packaging designs of the studio’s mirror, storage bag, and puzzle were rated through hierarchical analysis and questionnaires, and the designs were analyzed on the basis of the rating results. A convolutional neural network (CNN) was also used to evaluate packages in batches. The results showed that the CNN could accurately evaluate gift packaging designs in batches; the three gift packaging designs were all based on the studio’s logo, which made their ratings similar; and the packaging patterns were composed of different geometric shapes to convey the studio’s innovative design theme, while a squirrel silhouette and text description were used to strengthen customers’ impression of the studio.
{"title":"Application of visual elements in product paper packaging design: An example of the “squirrel” pattern","authors":"Menghan Ding","doi":"10.1515/jisys-2021-0195","DOIUrl":"https://doi.org/10.1515/jisys-2021-0195","url":null,"abstract":"Abstract For product packaging, the visual elements in it can further enhance the appeal of the package to customers. This article briefly introduces visual elements and packaging design and made an example analysis with the gift packaging design of Squirrel Design Studio. In the case study, the packaging design of the studio’s mirror, storage bag, and puzzle was rated by hierarchical analysis and questionnaires, and the packaging design was analyzed based on the rating results. A convolutional neural network (CNN) was also used to evaluate packages in batches. The results showed that the CNN could make a batch evaluation of gift packaging design accurately; the three gift packaging designs were based on the studio’s logo, making the ratings similar; in addition, the packaging design patterns were composed of different geometric shapes to show the studio’s innovative design theme, and the squirrel silhouette and text description were used to strengthen the impression of the studio among customers.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"1054 1","pages":"104 - 112"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77247714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract To highlight the role of music teaching in ideological and political courses, this study investigates the integration of music teaching with ideological and political teaching. It analyzes how college music teaching promotes ideological and political work and why their integration is necessary, constructs a fusion model of the two, and introduces deep learning methods to weaken the influence of errors in the underlying data. The study also optimizes the integration mode and realizes a working model of the combined teaching. The experimental results show that the resource output amplitude controlled by the deep learning method is the most stable, with no large fluctuations during the experiment. The output amplitude and control time of the fused resources are guaranteed, and the fusion path between music teaching and ideological and political education becomes clearer. The maximum control time of the fused resources under this method is 23.55 ms.
{"title":"College music teaching and ideological and political education integration mode based on deep learning","authors":"Xiaoshu Wang, Su-hua Zhao, Jingwen Liu, Liyan Wang","doi":"10.1515/jisys-2022-0031","DOIUrl":"https://doi.org/10.1515/jisys-2022-0031","url":null,"abstract":"Abstract In order to highlight the role of music teaching in the teaching of ideological and political courses, this study puts forward research on the integration of music teaching and ideological and political teaching. This study analyzes the promotion and necessity of college music teaching to ideological and political work, constructs a fusion model of college music teaching and ideological and political work, introduces deep learning methods, and weakens the influence of errors in the data of college music teaching and ideological and political work. This study also optimized the integration mode of college music teaching and ideological and political work and realized the model research of college music teaching and ideological and political work. The experimental results show that the resource output amplitude controlled by the deep learning method has the best stability, and there is no large amplitude fluctuation during the experiment. The output amplitude and control time of the fusion resource are guaranteed and the fusion path of music teaching and ideological and political education is clearer. The maximum control time of the fusion resource of this method is 23.55 ms.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"72 1","pages":"466 - 476"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76218891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract To study static software defect detection, a new static software defect detection system based on big data technology is proposed, building on the traditional system design. The proposed method optimizes the distribution of test resources and improves software product quality by predicting potentially defective program modules, and the software and hardware of the big-data-based detection system are designed accordingly. It is found that the traditional static software defect detection system based on code source data takes a long time, averaging 65 h/day, while the traditional system based on deep learning is faster, averaging 35 h/day. The big-data-based system described in this article is faster than both traditional designs, with an average detection time of 15 h/day. Because the system design adjusts the operating state of the system, it improves the accuracy of data operations. The system inspection research is completed on the premise of data collection, which ensures the operational safety of software data, substantially alleviates the contradiction between system and data, improves operating efficiency, reduces unnecessary operations, further shortens the time required for inspection, and improves system performance, giving the design high research and operational value.
{"title":"Research on computer static software defect detection system based on big data technology","authors":"Zhaoxia Li, Jianxing Zhu, K. Arumugam, J. Bhola, Rahul Neware","doi":"10.1515/jisys-2021-0260","DOIUrl":"https://doi.org/10.1515/jisys-2021-0260","url":null,"abstract":"Abstract To study the static software defect detection system, based on the traditional static software defect detection system design, a new static software defect detection system design based on big data technology is proposed. The proposed method can optimize the distribution of test resources and improve the quality of software products by predicting the potential defect program modules and design the software and hardware of the static software defect detection system of big data technology. It is found that the traditional static software defect detection system design based on code source data takes a long time, averaging 65 h /day. However, the traditional static software defect detection system based on deep learning has a short detection time, averaging 35 h/day. In this article, the detection time of the static software defect detection system based on big data is shorter than that of the other two traditional system designs, with an average of 15 h/day. Because the system design adjusts the operating state of the system, it improves the accuracy of data operation. On the premise of data collection, the system inspection research is completed, which ensures the operational safety of software data, alleviates the contradiction between system and data to a high degree, improves the efficiency of system operation, reduces unnecessary operations, further shortens the time required for inspection, improves the system performance, and has higher research and operation value.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"47 1","pages":"1055 - 1064"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76239291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Internet of Things (IoT) building sensors can capture many types of building operations, performance indicators, and conditions and send them to a central dashboard, where the data are analyzed to support decision-making. Traditionally, laptops and cell phones have made up the majority of Internet-connected devices. IoT tracking allows customers to close the distance between devices and enterprises by collecting and analyzing IoT data from connected devices, customers, and applications across the network. However, requirements for securing and approving IoT edge applications are lacking, there are no best practices for operations focused on IoT incidents, and IoT elements are not covered by audit and logging requirements. In this article, a big data analytics-based customer operation (BDA-CO) system is proposed to analyze these operations. With the exponential rise in data usage, the explosive growth of IoT devices reflects the natural overlap between big data and the IoT, and the continuously evolving analytics network raises nontrivial questions about performance, data distribution, analysis, and the protection of collected data. The IoT is modifying almost every characteristic of the construction industry. Human-centered artificial intelligence describes systems that continually improve through human input while delivering an effective experience between human and machine. The IoT is a key factor in better building performance: it is the first technological evolution in a long time to bring genuine innovation to an industry that has depended heavily on paper and manual processes, and its benefits now clearly outweigh those of current manual processes. As a result, more construction companies are exploring and incorporating IoT strategies to address their productivity challenges and increase efficiency and profit. The simulation analysis shows that the proposed BDA-CO model achieves a trust score of 98.5%, an accuracy detection ratio of 93.4%, a probability ratio of 97.6%, and a security ratio of 98.7%, and reduces the false-negative ratio by 21.3%, response time by 10.5%, delay rate by 19.9%, and packet loss ratio by 15.4% compared with other existing techniques.
{"title":"Construction of an IoT customer operation analysis system based on big data analysis and human-centered artificial intelligence for web 4.0","authors":"Xinxin Liu, Baojing Liu, Chenye Han, Wei Li","doi":"10.1515/jisys-2022-0067","DOIUrl":"https://doi.org/10.1515/jisys-2022-0067","url":null,"abstract":"Abstract Internet of thing (IoT) building sensors can capture several types of building operations, performances, and conditions and send them to a central dashboard to analyze data to support decision-making. Traditionally, laptops and cell phones are the majority of Internet-connected devices. IoT tracking allows customers to close the distance between devices and enterprises by collecting and analyzing various IoT data through connected devices, customers, and applications on the network. There is a lack of requirements for IoT edge applications security and approval. There are no best practices regarding operations focused on IoT incidents. IoT elements are not covered by audit and logging requirements. In this article, a big data analytics-based customer operation (BDA-CO) system analyzes the operation. With the exponential rise in data usage, the explosive development in the IoT devices reflects the ideal overlap of big data growth with IoT. Big data analytics continuously evolving network raises trivial questions about the performance, distribution of data, analysis, and protection of data collection. IoT modifies almost all the construction industry characteristics. Human-centered artificial intelligence is described as systems that always improve because of human input while also delivering an effective experience between the human and the robotic. The IoT is the key factor that ensures greater building performance. It was the first evolution of technology in a long time to turn genuine inventions into an industry that depended heavily on paper and manual processes. The benefits of the IoT in construction are now quite obviously much heavier than those of current manual processes. As a result, more construction companies explore and incorporate IoT strategies to address their productivity challenges, increasing efficiencies and profits. The simulation analysis shows that the proposed BDA-CO model enhances the trust score of 98.5%, accuracy detection ratio of 93.4%, probability ratio of 97.6%, and security ratio of 98.7% and reduces the false negative ratio of 21.3%, response time of 10.5%, delay rate of 19.9%, and packet loss ratio of 15.4% when compared to other existing techniques.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"19 1","pages":"927 - 943"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76489255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This article mitigates the challenges reported in previous literature by reducing operating cost and improving network performance. A genetic algorithm-based tabu search methodology is proposed to solve the link capacity and traffic allocation (CFA) problem in a computer communication network. An efficient modern metaheuristic search method is used, and the influence of a link's fixed cost, delay cost, and variable cost on the total operating cost of the network is discussed. The article analyzes a large number of computer simulation results to verify the effectiveness of the tabu search algorithm for CFA problems; the algorithm also improves solution quality significantly compared with traditional Lagrangian relaxation and subgradient optimization algorithms. The experimental results show that as the weighting coefficient of the variable cost increases, the proportion of variable cost in the total cost rises from 10 to 35%. The growth is relatively slow, and the fixed cost remains the main component. In addition, as the variable cost increases, the tabu search algorithm also chooses links with larger capacity margins to reduce the variable cost, which slightly increases the fixed cost while slightly decreasing the network delay cost and average delay. Compared with the genetic algorithm, the proposed method has greater advantages for large-scale or heavily loaded networks.
{"title":"Research on the application of search algorithm in computer communication network","authors":"Hua Ai, Jianwei Chai, Jilei Zhang, S. Khanna, K. Ghafoor","doi":"10.1515/jisys-2021-0263","DOIUrl":"https://doi.org/10.1515/jisys-2021-0263","url":null,"abstract":"Abstract This article mitigates the challenges of previously reported literature by reducing the operating cost and improving the performance of network. A genetic algorithm-based tabu search methodology is proposed to solve the link capacity and traffic allocation (CFA) problem in a computer communication network. An efficient modern super-heuristic search method is used to influence the fixed cost, delay cost, and variable cost of a link on the total operating cost in the computer communication network are discussed. The article analyses a large number of computer simulation results to verify the effectiveness of the tabu search algorithm for CFA problems and also improves the quality of solutions significantly compared with traditional Lagrange relaxation and subgradient optimization algorithms. The experimental results show that with the increase of the weighted coefficient of variable cost, the proportion of variable cost in the total cost increases from 10 to 35%. The growth is relatively slow, and the fixed cost is still the main component. In addition, due to the increase in the variable cost, the tabu search algorithm will also choose the link with large luxury to reduce the variable cost, which makes the fixed cost slightly increase, while the network delay cost and average delay slightly decrease. The proposed method, when compared with the genetic algorithm, has more advantages for large-scale or heavy-load networks.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"14 1","pages":"1150 - 1159"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79846105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Deep learning techniques, which rely heavily on convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amounts of labeled data required to train them. The medical field in particular suffers from a lack of images, because obtaining labeled medical images in healthcare is difficult and expensive and requires specialized expertise to annotate; the process is also error-prone and time-consuming. Current research has revealed transfer learning as a viable solution to this problem: it allows knowledge gained from a previous task to be transferred to improve and tackle a new problem. This study conducts a comprehensive survey of recent studies that address this problem and of the most important metrics used to evaluate the resulting methods. In addition, it identifies problems in transfer learning techniques and highlights issues with medical datasets and potential problems that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in applications such as skin cancer, breast cancer, and diabetic retinopathy classification. These techniques require further investigation because the models were trained on natural, non-medical images. Many researchers also use data augmentation to expand their datasets and avoid overfitting, yet few studies have quantified performance with and without augmentation. Accuracy, recall, precision, F1 score, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the melanoma and breast cancer datasets and suggested corresponding solutions.
{"title":"An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges","authors":"Abdulrahman Abbas Mukhlif, Belal Al-Khateeb, M. Mohammed","doi":"10.1515/jisys-2022-0198","DOIUrl":"https://doi.org/10.1515/jisys-2022-0198","url":null,"abstract":"Abstract Deep learning techniques, which use a massive technology known as convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for a large amount of labeled data required to train these networks. In particular, the medical field suffers from a lack of images because the procedure for obtaining labeled medical images in the healthcare field is difficult, expensive, and requires specialized expertise to add labels to images. Moreover, the process may be prone to errors and time-consuming. Current research has revealed transfer learning as a viable solution to this problem. Transfer learning allows us to transfer knowledge gained from a previous process to improve and tackle a new problem. This study aims to conduct a comprehensive survey of recent studies that dealt with solving this problem and the most important metrics used to evaluate these methods. In addition, this study identifies problems in transfer learning techniques and highlights the problems of the medical dataset and potential problems that can be addressed in future research. According to our review, many researchers use pre-trained models on the Imagenet dataset (VGG16, ResNet, Inception v3) in many applications such as skin cancer, breast cancer, and diabetic retinopathy classification tasks. These techniques require further investigation of these models, due to training them on natural, non-medical images. In addition, many researchers use data augmentation techniques to expand their dataset and avoid overfitting. However, not enough studies have shown the effect of performance with or without data augmentation. Accuracy, recall, precision, F1 score, receiver operator characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"33 1","pages":"1085 - 1111"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73851506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Existing image enhancement methods suffer from slow data transmission and poor conversion effects, resulting in low image-recognition rates and low recognition efficiency. To solve these problems and improve the recognition accuracy and efficiency of image features, this study proposes an edge detail enhancement algorithm for high-dynamic-range images. The original image is transformed by the Fourier transform, and low-frequency and high-frequency images are obtained through frequency-domain Gaussian filtering and the inverse Fourier transform. The low-frequency image is processed with contrast-limited adaptive histogram equalization, and the high-frequency image is enhanced through unsharp masking and gray-level transformation. The enhanced low-frequency and high-frequency images are then weighted and fused to enhance the edge details of the image. The experimental results show that the proposed algorithm maintains an image recognition rate above 80% in practical application, with recognition time within 1,200 min; it enhances the image, improves the recognition accuracy and efficiency of image features, and fully meets the research requirements.
{"title":"Edge detail enhancement algorithm for high-dynamic range images","authors":"Lanfei Zhao, Qidan Zhu","doi":"10.1515/jisys-2022-0008","DOIUrl":"https://doi.org/10.1515/jisys-2022-0008","url":null,"abstract":"Abstract Existing image enhancement methods have problems of a slow data transmission and poor conversion effect, resulting in a low image-recognition rate and recognition efficiency. To solve these problems and improve the recognition accuracy and recognition efficiency of image features, this study proposes an edge detail enhancement algorithm for a high-dynamic range image. The original image is transformed by Fourier transform, and the low-frequency and high-frequency images are obtained by the frequency-domain Gaussian filtering and inverse Fourier transform. The low-frequency image is processed by the contrast limited adaptive histogram equalization, and the high-frequency image is obtained by the nonsharpening masking and gray transformation. The low-frequency enhanced and the high-frequency enhanced images are weighted and fused to enhance the edge details of the image. Finally, the experimental results show that the proposed high-dynamic range image edge detail enhancement algorithm maintains the image recognition rate of more than 80% during the practical application, and the recognition time is within 1,200 min, which enhances the image effect, improves the recognition accuracy and recognition efficiency of image characteristics, and fully meets the research requirements.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"66 1","pages":"193 - 206"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72588676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract To reduce the workload of paper evaluation and improve the fairness and accuracy of the evaluation process, a writing assistant scoring system for learners of English as a Foreign Language (EFL) is designed based on machine learning principles. According to the characteristics of the data processing workflow and the advantages and disadvantages of the Browser/Server (B/S) structure, the equipment structure of the online evaluation teaching auxiliary system is further optimized. The pandas library is used to read the data, and a cleaning step handles preprocessing; model testing uses cross-validation, with the data partitioned in advance. The workflow of the problem scoring system is further optimized: the automatic scoring technology is built from an English teaching recognition module, a feature extraction module, and a scoring module; the table structure for scoring items is designed; and an auxiliary evaluation program for English writing completes the design of the writing assistant scoring system. Analysis of the experimental results shows that the system's accuracy is close to 90%, the total average difference is 0.56, and the system can reliably serve a variety of test papers. Considering the subjectivity of manual scoring and the impact of key-code settings on scoring, carefully set key codes can effectively improve the system's scoring accuracy. The scoring strategy of the automatic scoring system is effective, its scoring results are good, and it can be used in practical applications.
{"title":"Writing assistant scoring system for English second language learners based on machine learning","authors":"Jianlan Lyu","doi":"10.1515/jisys-2022-0009","DOIUrl":"https://doi.org/10.1515/jisys-2022-0009","url":null,"abstract":"Abstract To reduce the workload of paper evaluation and improve the fairness and accuracy of the evaluation process, a writing assistant scoring system for English as a Foreign Language (EFL) learners is designed based on the principle of machine learning. According to the characteristics of the data processing process and the advantages and disadvantages of the Browser/Server (B/S) structure, the equipment structure design of the project online evaluation teaching auxiliary system is further optimized. The panda method is used to read the data, the clean method is used to realize the data preprocessing, the model test is carried out, the cross validation method is selected, the data is divided in advance, and the process of programming the problem scoring system is further optimized, the automatic scoring technology is constructed by English teaching recognition module, feature extraction module and scoring module, the table structure of programming problems is designed, the auxiliary evaluation program of English writing is designed, and the design of writing auxiliary scoring system is completed. The analysis of the experimental results shows that the accuracy of the system is close to 90%, and the total average difference is 0.56. The system can normally take out a variety of test papers. Considering the subjectivity of manual scoring and the impact of key code setting on scoring, the carefully set key code can effectively improve the scoring accuracy of the system. The scoring strategy of the automatic scoring system is effective and the scoring effect is good, and it can be used in practical application.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"6 6","pages":"271 - 288"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72482610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Daily life worldwide has been affected by the coronavirus disease 2019 pandemic, and with an increasing number of positive cases, India has become a highly affected country. Chronic diseases often go unidentified in time and impose a huge disease burden on society. In this article, an Efficient Recurrent Neural Network with Ensemble Classifier (ERNN-EC) is built using VGG-16 and AlexNet with a weighted model to predict disease and its level. The dataset is partitioned randomly into small subsets using a mean-based splitting method. The classifier models form a homogeneous ensemble through an accuracy-based weighted aging classifier ensemble, a modification of the weighted model. Two state-of-the-art methods, the Graph Sequence Recurrent Neural Network and the Hybrid Rough-Block-Based Neural Network, are used for comparison on parameters such as accuracy, precision, recall, F1-score, and relative absolute error (RAE). As a result, the proposed ERNN-EC method achieves an accuracy of 95.2%, precision of 91%, recall of 85%, F1-score of 83.4%, and RAE of 41.6%.
{"title":"An efficient recurrent neural network with ensemble classifier-based weighted model for disease prediction","authors":"Tamilselvi Kesavan, Ramesh Kumar Krishnamoorthy","doi":"10.1515/jisys-2022-0068","DOIUrl":"https://doi.org/10.1515/jisys-2022-0068","url":null,"abstract":"Abstract Day-to-day lives are affected globally by the epidemic coronavirus 2019. With an increasing number of positive cases, India has now become a highly affected country. Chronic diseases affect individuals with no time identification and impose a huge disease burden on society. In this article, an Efficient Recurrent Neural Network with Ensemble Classifier (ERNN-EC) is built using VGG-16 and Alexnet with weighted model to predict disease and its level. The dataset is partitioned randomly into small subsets by utilizing mean-based splitting method. Various models of classifier create a homogeneous ensemble by utilizing an accuracy-based weighted aging classifier ensemble, which is a weighted model’s modification. Two state of art methods such as Graph Sequence Recurrent Neural Network and Hybrid Rough-Block-Based Neural Network are used for comparison with respect to some parameters such as accuracy, precision, recall, f1-score, and relative absolute error (RAE). As a result, it is found that the proposed ERNN-EC method accomplishes accuracy of 95.2%, precision of 91%, recall of 85%, F1-score of 83.4%, and RAE of 41.6%.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"12 1","pages":"979 - 991"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74383758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In today’s era of rapid scientific and technological development, digital technology places ever higher demands on data processing, and the matrix signals commonly used in engineering applications likewise demand higher processing speed. The eigenvalues of a matrix capture many of its characteristics: mathematically they describe the scaling of the matrix’s eigenvectors, and physically they represent the spectrum of a vibration. The eigenvalue problem is a focus of matrix theory and is widely used in research fields such as physics, chemistry, and biology. A neural network is a neuron model constructed by imitating biological neural networks; since it was proposed, applied research on its typical models, such as recurrent neural networks and cellular neural networks, has become a new hot spot, and with the emergence of deep neural network theory, scholars have continued to combine deep neural networks to compute matrix eigenvalues. This article studies the estimation and application of matrix eigenvalues based on deep neural networks. It introduces related methods of matrix eigenvalue estimation based on deep neural networks and designs experiments comparing the runtime of deep neural network-based methods with traditional algorithms. It was found that under the serial algorithm, the deep neural network-based algorithm reduced computation time by about 7% compared with the traditional algorithm, and under the parallel algorithm, computation time was reduced by about 17%. Experiments were also designed to compute matrix eigenvalues with Oja and recurrent neural network (RNN) models, showing that the Oja algorithm is only suitable for computing the maximum eigenvalues of non-negative matrices, while RNNs are applicable to more general models.
{"title":"Estimation and application of matrix eigenvalues based on deep neural network","authors":"Zhi-quan Hu","doi":"10.1515/jisys-2022-0126","DOIUrl":"https://doi.org/10.1515/jisys-2022-0126","url":null,"abstract":"Abstract In today’s era of rapid development in science and technology, the development of digital technology has increasingly higher requirements for data processing functions. The matrix signal commonly used in engineering applications also puts forward higher requirements for processing speed. The eigenvalues of the matrix represent many characteristics of the matrix. Its mathematical meaning represents the expansion of the inherent vector, and its physical meaning represents the spectrum of vibration. The eigenvalue of a matrix is the focus of matrix theory. The problem of matrix eigenvalues is widely used in many research fields such as physics, chemistry, and biology. A neural network is a neuron model constructed by imitating biological neural networks. Since it was proposed, the application research of its typical models, such as recurrent neural networks and cellular neural networks, has become a new hot spot. With the emergence of deep neural network theory, scholars continue to combine deep neural networks to calculate matrix eigenvalues. This article aims to study the estimation and application of matrix eigenvalues based on deep neural networks. This article introduces the related methods of matrix eigenvalue estimation based on deep neural networks, and also designs experiments to compare the time of matrix eigenvalue estimation methods based on deep neural networks and traditional algorithms. It was found that under the serial algorithm, the algorithm based on the deep neural network reduced the calculation time by about 7% compared with the traditional algorithm, and under the parallel algorithm, the calculation time was reduced by about 17%. Experiments are also designed to calculate matrix eigenvalues with Obj and recurrent neural networks (RNNS) models, which proves that the Oja algorithm is only suitable for calculating the maximum eigenvalues of non-negative matrices, while RNNS is commonly used in general models.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"1 1","pages":"1246 - 1261"},"PeriodicalIF":3.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90729628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}