Optimizing test case prioritization using machine learning algorithms
Sheetal Sharma, Swati V. Chande
DOI: 10.32629/jai.v6i2.661 (published 2023-07-27)

Software testing is essential to ensuring the quality and reliability of software. As software systems grow more complex, the number of test cases increases significantly, making it challenging to execute all of them within a limited time. Test case prioritization techniques tackle this problem by identifying and executing the most important test cases first. In this paper, we propose using machine learning algorithms to prioritize test cases. We explore decision trees, random forests, and neural networks, and compare their performance with traditional prioritization techniques such as code-coverage-based and risk-based prioritization. We evaluate these algorithms on several datasets using metrics such as the number of test cases executed, the fault detection rate, and the execution time. Our experimental results demonstrate that machine learning algorithms can effectively prioritize test cases and outperform traditional techniques, reducing the number of test cases executed while maintaining high fault detection rates. We also discuss the limitations and future research directions of machine-learning-based test case prioritization. These findings contribute to more efficient and effective software testing techniques that improve the quality and reliability of software systems.
Impact of Selective median filter on dental caries classification system using deep learning models
L. Megalan Leo, T. Reddy, A. Simla
DOI: 10.32629/jai.v6i2.560 (published 2023-07-26)

Accurate classification of dental caries is crucial for effective oral healthcare. Filtering increases the exposure of the captured image without degrading its quality, and the selective median filter is chosen here as the preprocessing technique to reduce noise in the captured image. A dental caries classification system detects the presence of a cavity in a given input image; such systems have evolved from conventional techniques to artificial neural networks. Deep learning models can learn features directly from the raw images in a dataset, but noise in those raw images severely affects their accuracy. This paper analyzes the impact of the preprocessing technique on classification accuracy. Initially, raw images are used to train the deep learning models without any preprocessing; the study then investigates the impact of selective median filtering on the same system. The motivation is to enhance the accuracy and reliability of dental caries diagnosis by reducing noise, removing artifacts, and preserving important details in dental radiographs. Experimental results demonstrate that selective median filtering significantly improves the performance of the deep learning model: the hybrid neural network (HNN) classifier achieves an accuracy of 96.15% with selective median filtering, compared with 85.07% without preprocessing. The study highlights the theoretical contribution of selective median filtering to dental caries classification systems and emphasizes the practical implications for dental clinics, offering improved diagnostic capabilities and better patient outcomes.
Fuel automata: Smart fuel dispenser using RFID technology and IoT-based monitoring for automotive applications
S. Chandana, C. J. Dhanyashree, K. L. Ashwini, R. Harini, M. Premkumar, L. Abualigah
DOI: 10.32629/jai.v6i1.682 (published 2023-07-21)

In the modern era, time holds immense value, and individuals strive to avoid delays in their daily responsibilities. Conventional fuel stations are time-consuming and rely on human labour for operation. With each passing day, the number of vehicles and devices in our technologically advanced world continues to grow rapidly. As a result, customers wait in queues at fuel stations, fuelling their desire to transition to an automated fuel dispensing system and eliminate the manual fuel distribution process from their daily routines. This paper introduces a smart fuel dispenser system that leverages RFID technology and IoT-based monitoring to enhance automotive fuelling. By addressing the limitations of conventional fuelling systems, the proposed system offers a more efficient and effective solution, with benefits in accuracy, efficiency, safety, and sustainability, and potential cost savings for fuel station owners and operators. The project automates fuel dispensing stations using RFID as a highly efficient tool, aiming to reduce the traffic congestion typically seen in front of fuel stations by shortening dispensing time compared with manual operation. To enhance control and monitoring, an Android application tracks fuel transactions and transaction history for both customers and fuel station dealers. The system uses a NodeMCU and the Android app as an Internet-of-Things platform for seamless communication between the system, customers, and dealers. The study presents concrete evidence for the viability and potential advantages of the proposed system, emphasizing its capacity to transform the fuelling industry and mitigate carbon emissions. The findings from the implemented system are examined thoroughly, offering an intelligent solution for a sustainable future.
Classification and detection of diabetic retinopathy based on multi-scale shallow neural network
M. Ghet, Omar Ismael Al-Sanjary, A. Khatibi
DOI: 10.32629/jai.v6i2.638 (published 2023-07-20)

The scarcity of high-quality annotated training samples in medical image processing has limited the development of deep neural networks in this field. This paper designs and proposes an integrated method for classifying and detecting diabetic retinopathy based on a multi-scale shallow neural network. The method consists of multiple shallow neural network base learners, which extract pathological features under different receptive fields, and a proposed integrated learning strategy that optimizes their combination to realize the classification and detection of diabetic retinopathy. In addition, to verify the effectiveness of the method on small-sample datasets, multiple sub-datasets are constructed based on the two-dimensional entropy of the images. The results show that, compared with existing methods, the proposed integrated method achieves good detection performance on small-sample datasets.
PCSVD: A hybrid feature extraction technique based on principal component analysis and singular value decomposition
Vineeta Gulati, Neeraj Raheja
DOI: 10.32629/jai.v6i2.586 (published 2023-07-18)

Feature extraction plays an important role in accurate preprocessing and in real-world applications. High-dimensional features have a significant impact on machine learning classification systems, and relevant feature extraction is a fundamental step not only to reduce dimensionality but also to improve classifier performance. In this paper, the authors propose PCSVD, a hybrid dimensionality reduction technique combining principal component analysis (PCA) and singular value decomposition (SVD) in a classification system with a support vector classifier (SVC). To evaluate PCSVD, its results are compared against a pipeline without feature extraction and against existing methods: independent component analysis (ICA), PCA, linear discriminant analysis (LDA), and SVD. PCSVD improves accuracy by 1.54%, sensitivity by 2.70%, specificity by 3.71%, and precision by 3.58%, while reducing dimensionality by 15% and RMSE by 40.60%, outperforming the existing techniques found in the literature.
Intelligent transmission line fault diagnosis using the Apriori associated rule algorithm under cloud computing environment
Ahmed Al-jumaili, R. C. Muniyandi, M. K. Hasan, Mandeep Jit Singh, J. Paw
DOI: 10.32629/jai.v6i1.640 (published 2023-07-06)

Electric power production data is characterized by massive scale, high update frequency, and fast growth, so processing and analysing it is essential for fault diagnosis. High levels of informationalization and intelligence can be achieved in the practical development of a power plant fault diagnosis management system. Based on an analysis of domestic and foreign research, the system adopts cloud computing technology and association rule mining as its core technologies. In this paper, an optimised Apriori association rule algorithm provides the technical support for interlocking fault diagnosis in the intelligent fault diagnosis module. A Hadoop distributed architecture is used to design and implement a private power cloud computing cluster, whose big-data management and analysis functions are realised through the MapReduce computing framework and the HBase database. Leakage fault cases verify the algorithm's applicability and complete the correlation diagnosis of water wall leakage faults. After analysing the system's functional requirements, the intelligent fault diagnosis management system for a cloud computing power plant is designed and developed using a MySQL database and the Enhancer platform, realising modules for system authority management, electronic equipment accounts, technical supervision, an expert database, and a data centre. The results show that the proposed method improves system security: the message-digest algorithm (MD5) encrypts user passwords, and a strict role authorisation scheme controls access and manages the system's security.
Leukocyte classification for acute lymphoblastic leukemia timely diagnosis by interpretable artificial neural network
A. Sbrollini, Selene Tomassini, Ruba Sharaan, M. Morettini, A. Dragoni, L. Burattini
DOI: 10.32629/jai.v6i1.594 (published 2023-07-05)

Leukemia is a blood cancer characterized by leukocyte overproduction. Clinically, the reference for acute lymphoblastic leukemia diagnosis is a blood biopsy, which allows obtaining microscopic images of leukocytes whose early-stage classification into leukemic (LEU) and healthy (HEA) may be a disease predictor. The aim of this study is therefore to propose an interpretable artificial neural network (ANN) for leukocyte classification to timely diagnose acute lymphoblastic leukemia. The "ALL_IDB2" dataset was used. It contains 260 microscopic images of leukocytes acquired from 130 LEU and 130 HEA subjects; each image shows a single leukocyte, characterized by 8 morphological and 4 statistical features. An ANN was developed to distinguish images acquired from LEU and HEA subjects, taking the 12 features as inputs and using the local interpretable model-agnostic explanations (LIME) algorithm as an interpretable post-processing step. The ANN was evaluated by leave-one-out cross-validation. Its performance is promising, with a testing area under the receiver operating characteristic curve of 87%. Being implemented with standard features and LIME as a post-processing algorithm, it is clinically interpretable. Our ANN therefore seems to be a reliable instrument for leukocyte classification for the timely diagnosis of acute lymphoblastic leukemia, guaranteeing a high level of clinical interpretability.
Need of Li-Fi (light fidelity) technology for the world to track COVID-19 patients
S. Dinesh, Bharti Chourasia
DOI: 10.32629/jai.v6i1.602 (published 2023-07-05)

In the modern world, a single day without light or the internet is unimaginable. Wireless fidelity, commonly known as Wi-Fi, is today's most well-known and widely used wireless technology; it employs radio (electromagnetic) waves to carry data across networks. Imagine if a basic LED light in and around a hospital could link us to high-speed wireless internet simply by flickering at a speed too fast for the eye to detect. This technology is known as Li-Fi, or light fidelity, and it is 10,000 times faster than Wi-Fi. Hospitals are among the locations where Wi-Fi is absolutely forbidden. As doctors are the frontline soldiers against COVID-19, the objective of this project is to develop a smart healthcare system that uses green communications to monitor COVID-19 patients: temperature, pressure, and heart rate sensor readings travel from a Li-Fi transmitter to a Li-Fi receiver using a simple LED light as the medium, and a Li-Fi dongle forwards the data to the cloud.
An improved convolutional neural network-based model for detecting brain tumors from augmented MRI images
Gaurav Meena, K. Mohbey, Malika Acharya, K. Lokesh
DOI: 10.32629/jai.v6i1.561 (published 2023-06-30)

Identifying and categorizing a brain tumor is a crucial stage in enhancing knowledge of its underlying mechanisms, and brain tumor detection is one of the most complex challenges in modern medicine. A variety of diagnostic imaging techniques may be used to locate malignancies in the brain; MRI offers unparalleled image quality and hence serves this purpose. Deep learning methods have enabled a new paradigm of automated medical image identification, and reliable, automated categorization techniques are necessary for decreasing the human mortality caused by this significant chronic condition. To solve the binary problem of deciding whether an MRI scan shows a brain tumor, we present an automatic classification method based on a computationally efficient CNN. We use the Br35H benchmark dataset, freely available on the Internet, and augment it before training to enhance accuracy and reduce training time. Experimental evaluation on statistical measures such as accuracy, recall, precision, F1 score, and loss suggests that the proposed model outperforms other state-of-the-art methods.
Prediction method of business process remaining time based on attention bidirectional recurrent neural network
Ali Fakhri Mahdi Al-Jumaily, A. Al-Jumaily, Saba J. Al-Jumaili
DOI: 10.32629/jai.v6i1.639 (published 2023-06-30)

Most existing deep-learning-based methods for predicting the remaining time of a business process build their models with traditional long short-term memory (LSTM) recurrent neural networks. Because traditional LSTMs have limited capacity for modeling sequence data, existing methods leave considerable room for improvement in prediction accuracy. To address these shortcomings, this paper proposes a remaining-time prediction method based on an attention bidirectional recurrent neural network. The method models process instance data with a bidirectional recurrent neural network and introduces an attention mechanism to automatically learn the weights of different events within a process instance. In addition, to further improve learning, an iterative learning strategy based on the idea of transfer learning builds separate remaining-time prediction models for process instances of different lengths, making each model more specific to its instance length. Experimental results show that the proposed method has clear advantages over traditional methods.