To address the challenges of incomplete knowledge representation, independent decision ranges, and insufficient causal decisions in bogie welding, this paper proposes a hybrid decision-making method and develops a corresponding intelligent system. A collaborative approach combining cases, rules, and a knowledge graph is used to support structured-document and domain causality decisions. In addition, we created a knowledge model of bogie welding characteristics and proposed a case-matching method based on empirical weights. Several entity classification and relationship extraction models were trained under supervised conditions while building the knowledge graph; CRF and CR-CNN obtained high combined F1 scores (0.710 for CRF and 0.802 for CR-CNN) on the entity classification and relationship extraction tasks, respectively. We designed and developed an intelligent decision system based on the proposed method for engineering applications and validated it with actual engineering data. The results show that the system obtained a high score on the accuracy test (0.947 for Corrected Accuracy) and can effectively complete structured-document and causality decision-making tasks, demonstrating significant research and engineering value.
{"title":"Hybrid Decision-Making-Method-Based Intelligent System for Integrated Bogie Welding Manufacturing","authors":"Kainan Guan, Yang Sun, Guang Yang, Xinhua Yang","doi":"10.3390/asi6010029","DOIUrl":"https://doi.org/10.3390/asi6010029","url":null,"abstract":"To address the challenges of incomplete knowledge representation, independent decision ranges, and insufficient causal decisions in bogie welding decisions, this paper proposes a hybrid decision-making method and develops a corresponding intelligent system. The collaborative case, rule, and knowledge graph approach is used to support structured documents and domain causality decisions. In addition, we created a knowledge model of bogie welding characteristics and proposed a case-matching method based on empirical weights. Several entity categorizations and relationship extraction models were trained under supervised conditions while building the knowledge graph. CRF and CR-CNN obtained high combined F1 scores (0.710 for CRF and 0.802 for CR-CNN) in the entity classification and relationship extraction tasks, respectively. We designed and developed an intelligent decision system based on the proposed method to implement engineering applications. This system was validated with some actual engineering data. The results show that the system obtained a high score on the accuracy test (0.947 for Corrected Accuracy) and can effectively complete structured document and causality decision-making tasks, having large research significance and engineering value.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42414164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper aimed to implement low-cost smart sensors for collecting bathymetric data in shallow water and to develop a 3D modelling methodology for reconstructing natural and artificial aquatic scenarios. To this end, a system called the GNSS > Sonar > Phone System (G > S > P Sys) was implemented to synchronise sonar sensors (Deeper Smart Sonars CHIRP+ and Pro+ 2) with an external GNSS receiver (SimpleRTK2B) via smartphone. The bathymetric data collection performance of the G > S > P Sys and the Deeper Smart Sonars was studied through specific tests. Finally, a data-driven method based on a machine learning approach to mapping was developed for 3D modelling of the bathymetric data produced by the G > S > P Sys. The developed 3D modelling method proved to be flexible, easily implementable, and capable of producing models of natural surfaces and submerged artificial structures with centimetre-level accuracy and precision.
{"title":"Smart Sensors System Based on Smartphones and Methodology for 3D Modelling in Shallow Water Scenarios","authors":"Gabriele Vozza, D. Costantino, M. Pepe, V. Alfio","doi":"10.3390/asi6010028","DOIUrl":"https://doi.org/10.3390/asi6010028","url":null,"abstract":"The aim of the paper was the implementation of low-cost smart sensors for the collection of bathymetric data in shallow water and the development of a 3D modelling methodology for the reconstruction of natural and artificial aquatic scenarios. To achieve the aim, a system called GNSS > Sonar > Phone System (G > S > P Sys) was implemented to synchronise sonar sensors (Deeper Smart Sonars CHIRP+ and Pro+ 2) with an external GNSS receiver (SimpleRTK2B) via smartphone. The bathymetric data collection performances of the G > S > P Sys and the Deeper Smart Sonars were studied through specific tests. Finally, a data-driven method based on a machine learning approach to mapping was developed for the 3D modelling of the bathymetric data produced by the G > S > P Sys. The developed 3D modelling method proved to be flexible, easily implementable and capable of producing models of natural surfaces and submerged artificial structures with centimetre accuracy and precision.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48523128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agriculture has seen significant advancements since smart farming technology was introduced. The Green Movement played an essential role in the evolution of farming methods. The use of smart farming is accelerating at an unprecedented rate because it benefits both farmers and consumers by enabling more effective crop budgeting. The smart agriculture domain uses the Internet of Things, which helps farmers monitor irrigation management, estimate crop yields, and manage plant diseases. Additionally, farmers can learn about environmental trends and, as a result, decide which crops to cultivate and how to apply fungicides and insecticides. This research article uses primary and subsidiary keywords related to smart agriculture to query the Scopus database. The query returned 146 research articles related to the input keywords, and an analysis of these 146 scientific publications, including journal articles, book chapters, and patents, was conducted. NodeXL, Gephi, and VOSviewer, open-source tools for visualizing and exploring bibliometric networks, reveal new facets of the data and facilitate intuitive exploration. The survey includes a bibliometric analysis as well as a word cloud analysis, focusing on publication types, publication regions, geographical locations, documents by year, subject area, association, and authorship. English is found to be the most frequently used language of publication for articles on IoT in agricultural plant disease detection.
{"title":"A Bibliometric and Word Cloud Analysis on the Role of the Internet of Things in Agricultural Plant Disease Detection","authors":"R. Patil, S Kumar, R. Rani, Poorva Agrawal, S. Pippal","doi":"10.3390/asi6010027","DOIUrl":"https://doi.org/10.3390/asi6010027","url":null,"abstract":"Agriculture has observed significant advancements since smart farming technology has been introduced.The Green Movement played an essential role in the evolution of farming methods. The use of smart farming is accelerating at an unprecedented rate because it benefits both farmers and consumers by enabling more effective crop budgeting. The Smart Agriculture domain uses the Internet of Things, which helps farmers to monitor irrigation management, estimate crop yields, and manage plant diseases. Additionally, farmers can learn about environmental trends and, as a result, which crops to cultivate and how to apply fungicides and insecticides. This research article uses the primary and subsidiary keywords related to smart agriculture to query the Scopus database. The query returned 146 research articles related to the keywords inputted, and an analysis of 146 scientific publications, including journal articles, book chapters, and patents, was conducted. Node XL, Gephi, and VOSviewer are open-source tools for visualizing and exploring bibliometric networks. New facets of the data are revealed, facilitating intuitive exploration. The survey includes a bibliometric analysis as well as a word cloud analysis. This analysis focuses on publication types and publication regions, geographical locations, documents by year, subject area, association, and authorship. The research field of IoT in agricultural plant disease detection articles is found to frequently employ English as the language of publication.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48109413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, artificial intelligence technologies have been developing increasingly rapidly, and much research is aimed at solving the problem of explainable artificial intelligence (XAI). Various XAI methods are being developed to allow users to understand the logic of how machine learning models work, and comparing these methods requires evaluating them. This paper analyzes various approaches to the evaluation of XAI methods, defines the requirements for an evaluation system, and suggests metrics for determining the various technical characteristics of the methods. A study conducted using these metrics showed that the explanation quality of the SHAP and LIME methods degrades as correlation in the input data increases. Recommendations are also given for further research on the practical implementation of the metrics and on expanding their scope of use.
{"title":"Evaluation Metrics Research for Explainable Artificial Intelligence Global Methods Using Synthetic Data","authors":"Alexander D. Oblizanov, Natalya V. Shevskaya, A. Kazak, Marina Rudenko, Anna Dorofeeva","doi":"10.3390/asi6010026","DOIUrl":"https://doi.org/10.3390/asi6010026","url":null,"abstract":"In recent years, artificial intelligence technologies have been developing more and more rapidly, and a lot of research is aimed at solving the problem of explainable artificial intelligence. Various XAI methods are being developed to allow the user to understand the logic of how machine learning models work, and in order to compare the methods, it is necessary to evaluate them. The paper analyzes various approaches to the evaluation of XAI methods, defines the requirements for the evaluation system and suggests metrics to determine the various technical characteristics of the methods. A study was conducted, using these metrics, which determined the degradation in the explanation quality of the SHAP and LIME methods with increasing correlation in the input data. Recommendations are also given for further research in the field of practical implementation of metrics, expanding the scope of their use.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44778396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Product defect inspection on Indonesian industrial production lines is generally performed by human eyes, resulting in low efficiency and a high margin of error due to eye fatigue. Automated quality assessment systems for mass production can use deep learning connected to cameras for more efficient defect detection. However, employing deep learning on multiple high frame rate cameras (HFRC) demands substantial computation and decreases deep learning performance, especially in the real-time inspection of moving objects. This paper proposes optimizing computational resources for real-time product quality assessment of moving cylindrical shell objects using deep learning with multiple HFRC sensors. Two application frameworks embedded with several deep learning models were compared and tested to produce robust and powerful applications for assessing the quality of production results on rotating objects. Based on the experimental results using three HFRC sensors, a web-based application built on the tensorflow.js framework outperformed desktop applications in computation. Moreover, MobileNet v1 delivered the highest performance among the models compared. This result reveals an opportunity for web-based applications as a lightweight framework for quality assessment using multiple HFRC and deep learning.
{"title":"Optimization of Computational Resources for Real-Time Product Quality Assessment Using Deep Learning and Multiple High Frame Rate Camera Sensors","authors":"Adi Wibowo, J. Setiawan, H. Afrisal, Anak Agung Sagung Manik Mahachandra Jayanti Mertha, S. Santosa, Kuncoro Wisnu, Ambar Mardiyoto, Henri Nurrakhman, Boyi Kartiwa, W. Caesarendra","doi":"10.3390/asi6010025","DOIUrl":"https://doi.org/10.3390/asi6010025","url":null,"abstract":"Human eyes generally perform product defect inspection in Indonesian industrial production lines; resulting in low efficiency and a high margin of error due to eye tiredness. Automated quality assessment systems for mass production can utilize deep learning connected to cameras for more efficient defect detection. However, employing deep learning on multiple high frame rate cameras (HFRC) causes the need for much computation and decreases deep learning performance, especially in the real-time inspection of moving objects. This paper proposes optimizing computational resources for real-time product quality assessment on moving cylindrical shell objects using deep learning with multiple HFRC Sensors. Two application frameworks embedded with several deep learning models were compared and tested to produce robust and powerful applications to assess the quality of production results on rotating objects. Based on the experiment results using three HFRC Sensors, a web-based application with tensorflow.js framework outperformed desktop applications in computation. Moreover, MobileNet v1 delivers the highest performance compared to other models. This result reveals an opportunity for a web-based application as a lightweight framework for quality assessment using multiple HFRC and deep learning.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45730654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Considering the ability of data-mining techniques (DMTs) to detect and classify patterns, this paper explores their applicability to the protection of voltage source converter-based high voltage direct current (VSC-HVDC) transmission systems. Regardless of where a fault occurs (external or internal, at the rectifier or inverter substation, and on the positive or negative pole of the DC line), the proposed approach is capable of accurate fault detection, classification, and location. Initially, the local voltage and current measurements at one end of the HVDC system are used to extract the feature vector. Once the feature vector is retrieved, the DMTs are trained and tested to identify the fault type (internal DC faults, external AC faults, and external DC faults) and the fault location in the particular feeder. In the data-mining framework, several state-of-the-art machine learning (ML) models along with one advanced deep learning (DL) model are used for training and testing. The proposed VSC-HVDC relaying system is comprehensively tested on a symmetric-monopolar-multi-terminal VSC-HVDC system and presents encouraging results under diverse operating conditions. The results show that the studied deep belief network (DBN)-based DL model performs better than the other ML models in both fault classification and location. The fault classification accuracy of the DBN is found to be 98.9% in the noiseless condition and 91.8% in the 20 dB noise condition. Similarly, the DBN-based DMT is found to be effective for fault location in the HVDC system, with small errors (MSE: 2.116, RMSE: 1.4531, and MAPE: 2.7047). This approach can be used as an effective low-cost relaying support tool for the VSC-HVDC system, as it does not require a communication channel.
{"title":"Data-Mining Techniques Based Relaying Support for Symmetric-Monopolar-Multi-Terminal VSC-HVDC System","authors":"Abha Pragati, D. A. Gadanayak, Tanmoy Parida, Manohar Mishra","doi":"10.3390/asi6010024","DOIUrl":"https://doi.org/10.3390/asi6010024","url":null,"abstract":"Considering the advantage of the ability of data-mining techniques (DMTs) to detect and classify patterns, this paper explores their applicability for the protection of voltage source converter-based high voltage direct current (VSC-HVDC) transmission systems. In spite of the location of fault occurring points such as external/internal, rectifier-substation/inverter-substation, and positive/negative pole of the DC line, the stated approach is capable of accurate fault detection, classification, and location. Initially, the local voltage and current measurements at one end of the HVDC system are used in this work to extract the feature vector. Once the feature vector is retrieved, the DMTs are trained and tested to identify the fault types (internal DC faults, external AC faults, and external DC faults) and fault location in the particular feeder. In the data-mining framework, several state-of-the-art machine learning (ML) models along with one advanced deep learning (DL) model are used for training and testing. The proposed VSC-HVDC relaying system is comprehensively tested on a symmetric-monopolar-multi-terminal VSC-HVDC system and presents heartening results in diverse operating conditions. The results show that the studied deep belief network (DBN) based DL model performs better compared with other ML models in both fault classification and location. The accuracy of fault classification of the DBN is found to be 98.9% in the noiseless condition and 91.8% in the 20 dB noisy condition. Similarly, the DBN-based DMT is found to be effective in fault locations in the HVDC system with a smaller percentage of errors as MSE: 2.116, RMSE: 1.4531, and MAPE: 2.7047. This approach can be used as an effective low-cost relaying support tool for the VSC-HVDC system, as it does not necessitate a communication channel.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42548723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lean and flexible manufacturing is a matter of necessity for the automotive industries today. Rising consumer expectations, higher raw material and processing costs, and dynamic market conditions are driving the auto sector to become smarter and more agile. This paper presents a machine learning-based soft sensor approach for identifying and predicting the lean manufacturing (LM) level of auto industries based on their performance over multifarious flexibilities such as volume flexibility, routing flexibility, product flexibility, labour flexibility, machine flexibility, and material handling. This study was based on a database of lean manufacturing and associated flexibilities collected from 46 auto component enterprises located in the Pune region of Maharashtra State, India. As many as 29 different machine learning models belonging to seven architectures were explored to develop lean manufacturing soft sensors. These soft sensors were trained to classify the auto firms into high, medium, or low levels of lean manufacturing based on their manufacturing flexibilities. The seven machine learning architectures comprised Decision Trees, Discriminants, Naive Bayes, Support Vector Machines (SVM), K-nearest neighbours (KNN), Ensembles, and Neural Networks (NN). The performance of all models was compared on the basis of their respective training, validation, and testing accuracies, as well as computation times. Primary results indicate that the neural network architectures provided the best lean manufacturing predictions, followed by Trees, SVM, Ensembles, KNN, Naive Bayes, and Discriminants. The trilayered neural network architecture attained the highest testing prediction accuracy of 80%. The fine, medium, and coarse trees attained a testing accuracy of 60%, as did the quadratic and cubic SVMs, the wide and narrow neural networks, and the ensemble RUSBoosted trees. The remaining models obtained inferior testing accuracies. The best-performing model was further analysed using scatter plots of predicted LM classes versus flexibilities, validation and testing confusion matrices, receiver operating characteristic (ROC) curves, and a parallel coordinate plot for identifying manufacturing flexibility trends for the predicted LM levels. Thus, machine learning models can be used to create effective soft sensors that predict the level of lean manufacturing of an enterprise based on the levels of its manufacturing flexibilities.
{"title":"Lean Manufacturing Soft Sensors for Automotive Industries","authors":"R. Aravind Sekhar, Nitin S. Solke, Pritesh Shah","doi":"10.3390/asi6010022","DOIUrl":"https://doi.org/10.3390/asi6010022","url":null,"abstract":"Lean and flexible manufacturing is a matter of necessity for the automotive industries today. Rising consumer expectations, higher raw material and processing costs, and dynamic market conditions are driving the auto sector to become smarter and agile. This paper presents a machine learning-based soft sensor approach for identification and prediction of lean manufacturing (LM) levels of auto industries based on their performances over multifarious flexibilities such as volume flexibility, routing flexibility, product flexibility, labour flexibility, machine flexibility, and material handling. This study was based on a database of lean manufacturing and associated flexibilities collected from 46 auto component enterprises located in the Pune region of Maharashtra State, India. As many as 29 different machine learning models belonging to seven architectures were explored to develop lean manufacturing soft sensors. These soft sensors were trained to classify the auto firms into high, medium or low levels of lean manufacturing based on their manufacturing flexibilities. The seven machine learning architectures included Decision Trees, Discriminants, Naive Bayes, Support Vector Machine (SVM), K-nearest neighbour (KNN), Ensembles, and Neural Networks (NN). The performances of all models were compared on the basis of their respective training, validation, testing accuracies, and computation timespans. Primary results indicate that the neural network architectures provided the best lean manufacturing predictions, followed by Trees, SVM, Ensembles, KNN, Naive Bayes, and Discriminants. The trilayered neural network architecture attained the highest testing prediction accuracy of 80%. The fine, medium, and coarse trees attained the testing accuracy of 60%, as did the quadratic and cubic SVMs, the wide and narrow neural networks, and the ensemble RUSBoosted trees. Remaining models obtained inferior testing accuracies. The best performing model was further analysed by scatter plots of predicted LM classes versus flexibilities, validation and testing confusion matrices, receiver operating characteristics (ROC) curves, and the parallel coordinate plot for identifying manufacturing flexibility trends for the predicted LM levels. Thus, machine learning models can be used to create effective soft sensors that can predict the level of lean manufacturing of an enterprise based on the levels of its manufacturing flexibilities.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46711589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study explored college students’ learning experiences and outcomes as club committee members. Using a linear regression model, it investigated the relevance of personal background variables and club learning experiences to club learning outcomes. This study surveyed student club committee members at 15 universities and colleges in Taiwan. A total of 1850 questionnaires were distributed, and 1761 valid questionnaires were recovered, giving a recovery rate of over 95%. The study findings are as follows. The learning experiences and learning outcomes of the student club committee members were good. According to the linear regression analysis, the personal background of student club committee members and their club learning experiences had significant explanatory power for learning outcomes, with R2 values ranging from 39.6% to 61.1% across dimensions. This indicates that learning from club activities can be an essential pathway to cultivating students’ learning outcomes and a valuable reference for promoting club education in colleges and universities in Taiwan. Higher education practitioners should plan activities or programs for student club leaders with learning outcomes in mind and design learning programs to meet the needs of student club leaders at each school so that students can achieve higher-quality learning outcomes. In addition, this study found that the learning outcome assessment indicators of the CAS in the U.S. can be applied to check the learning outcomes of student clubs in higher education in Taiwan.
{"title":"An Empirical Study on the Learning Experiences and Outcomes of College Student Club Committee Members Using a Linear Hierarchical Regression Model","authors":"Minge Chen, Hsin-Nan Chien, Ruo-Lan Liu","doi":"10.3390/asi6010023","DOIUrl":"https://doi.org/10.3390/asi6010023","url":null,"abstract":"This study explored college students’ learning experiences and outcomes as club committee members. Using a linear regression model, it investigated the relevance of personal background variables and club learning experiences to club learning outcomes. This study selected 15 universities and colleges’ student club committee members in Taiwan. A total of 1850 questionnaires were distributed, and 1761 valid questionnaires were recovered, with a recovery rate of over 95%. The study findings are as follows: Regarding learning experiences and learning outcomes, the student club committee members was good. According to this study’s linear regression analysis: The personal background of student club committee members and their club learning experience had significant explanatory power on the learning outcomes, with R2 values ranging from 39.6% to 61.1% for each dimension. This indicates that learning from club activities can be an essential pathway to cultivating students’ learning outcomes and a valuable reference for promoting club education in colleges and universities in Taiwan. Higher education practitioners should plan activities or programs for student club leaders with learning outcomes in mind, and design learning programs to meet the needs of student club leaders in each school so that students can achieve higher quality learning outcomes. In addition, this study also found that the assessment indicators of learning outcomes of the CAS of the U.S. can be applied to check the learning outcomes of student clubs in higher education in Taiwan.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46782004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The most suitable method for assessing bone age is to check the degree of maturation of the ossification centers in radiographs of the left wrist, so considerable effort has been made to help radiologists by providing reliable automated methods based on these images. This study designs and tests AlexNet and GoogLeNet models as well as a new architecture for assessing bone age. All of these methods are applied fully automatically to the DHA dataset, which includes 1400 wrist images of healthy children aged 0 to 18 years of Asian, Hispanic, Black, and Caucasian ethnicity. For this purpose, the images are first segmented, and four different regions of the images are then separated. Bone age in each region is assessed by a separate network whose new architecture was obtained by trial and error. The final bone age assessment is performed by an averaging ensemble of the four CNN models. In the results and model evaluation section, various tests are performed, including tests with pre-trained networks. The results of all tests confirm that the designed system performs better than the other methods. The proposed method achieves an accuracy of 83.4% and an average error rate of 0.1%.
{"title":"Bone Anomaly Detection by Extracting Regions of Interest and Convolutional Neural Networks","authors":"M. N. Meqdad, Hafiz Tayyab Rauf, Seifedine Kadry","doi":"10.3390/asi6010021","DOIUrl":"https://doi.org/10.3390/asi6010021","url":null,"abstract":"The most suitable method for assessing bone age is to check the degree of maturation of the ossification centers in the radiograph images of the left wrist. So, a lot of effort has been made to help radiologists and provide reliable automated methods using these images. This study designs and tests Alexnet and GoogLeNet methods and a new architecture to assess bone age. All these methods are implemented fully automatically on the DHA dataset including 1400 wrist images of healthy children aged 0 to 18 years from Asian, Hispanic, Black, and Caucasian races. For this purpose, the images are first segmented, and 4 different regions of the images are then separated. Bone age in each region is assessed by a separate network whose architecture is new and obtained by trial and error. The final assessment of bone age is performed by an ensemble based on the Average algorithm between 4 CNN models. In the section on results and model evaluation, various tests are performed, including pre-trained network tests. The better performance of the designed system compared to other methods is confirmed by the results of all tests. The proposed method achieves an accuracy of 83.4% and an average error rate of 0.1%.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47535308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fighting fraudulent insurance claims is a vital task for insurance companies, as such claims cost them billions of dollars each year. Fraudulent insurance claims occur in all areas of insurance, with auto insurance claims being the most widely reported and prominent type of fraud. Traditional methods for identifying fraudulent claims, such as statistical techniques for predictive modeling, can be both costly and inaccurate. In this research, we propose a new way to detect fraudulent insurance claims using a data-driven approach. We clean and augment the data using analysis-based techniques to deal with an imbalanced dataset. Three pre-trained Convolutional Neural Network (CNN) models, AlexNet, InceptionV3, and ResNet101, are selected and minimized by reducing their redundant blocks of layers. These CNN models are stacked in parallel with a proposed 1D CNN model using bagged ensemble learning, in which an SVM classifier extracts the results separately for each CNN model, and these results are later combined using a majority voting technique. The proposed method was tested on a public dataset and produced an accuracy of 98%, with a 2% Brier score loss. The numerical experiments demonstrate that the proposed approach achieves promising results for detecting fake accident claims.
{"title":"A Bagged Ensemble Convolutional Neural Networks Approach to Recognize Insurance Claim Frauds","authors":"Youness Abakarim, M. Lahby, Abdelbaki Attioui","doi":"10.3390/asi6010020","DOIUrl":"https://doi.org/10.3390/asi6010020","url":null,"abstract":"Fighting fraudulent insurance claims is a vital task for insurance companies as it costs them billions of dollars each year. Fraudulent insurance claims happen in all areas of insurance, with auto insurance claims being the most widely reported and prominent type of fraud. Traditional methods for identifying fraudulent claims, such as statistical techniques for predictive modeling, can be both costly and inaccurate. In this research, we propose a new way to detect fraudulent insurance claims using a data-driven approach. We clean and augment the data using analysis-based techniques to deal with an imbalanced dataset. Three pre-trained Convolutional Neural Network (CNN) models, AlexNet, InceptionV3 and Resnet101, are selected and minimized by reducing the redundant blocks of layers. These CNN models are stacked in parallel with a proposed 1D CNN model using Bagged Ensemble Learning, where an SVM classifier is used to extract the results separately for the CNN models, which is later combined using the majority polling technique. The proposed method was tested on a public dataset and produced an accuracy of 98%, with a 2% Brier score loss. The numerical experiments demonstrate that the proposed approach achieves promising results for detecting fake accident claims.","PeriodicalId":36273,"journal":{"name":"Applied System Innovation","volume":" ","pages":""},"PeriodicalIF":3.8,"publicationDate":"2023-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41811802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}