Classification of Non-pharmacological Interventions for Managing Pandemics
DOI: 10.54364/aaiml.2021.1116
P. Goldschmidt
Pandemics have altered the course of human history. The overarching goal of pandemic response management is to contain pandemogen spread as quickly and as completely as possible; containment is not only the first line of defense, it is the only defense. At the start, only non-pharmaceutical interventions (NPI) may be available. There is no classification scheme for NPI. This article 1) proposes both a classification scheme for NPI and a functional way of coding them for descriptive and analytic purposes and 2) by describing the classification scheme, builds an initial inventory of NPI. For classification purposes, NPI can be organized into the following broad categories: 1) community control, 2) moving and mixing, 3) testing and tracing, 4) personal performance, 5) environmental engineering, 6) bodies and burials, and 7) infection interdiction. Classification facilitates describing and analyzing NPI (e.g., comparing how countries used different NPI in response to Covid-19 and evaluating their effectiveness). Next steps may include 1) elaborating the classification scheme and coding structure in operational detail as an international standard and 2) maintaining a corresponding set of standard definitions. In the interim, any entity could apply the scheme to suit its purposes.
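To make the coding idea concrete, the following is a minimal Python sketch of how the seven broad NPI categories could be recorded and composed into codes for descriptive and analytic use. The two-letter prefixes, the numeric sub-codes, and the example intervention are illustrative assumptions, not the coding structure defined in the article.

from dataclasses import dataclass

# Assumed two-letter codes for the seven broad categories listed in the abstract.
NPI_CATEGORIES = {
    "CC": "community control",
    "MM": "moving and mixing",
    "TT": "testing and tracing",
    "PP": "personal performance",
    "EE": "environmental engineering",
    "BB": "bodies and burials",
    "II": "infection interdiction",
}

@dataclass
class NPIRecord:
    category: str       # one of the two-letter category codes above
    sub_code: int       # identifies a specific intervention within the category
    description: str

    def code(self) -> str:
        if self.category not in NPI_CATEGORIES:
            raise ValueError(f"unknown NPI category: {self.category}")
        return f"{self.category}-{self.sub_code:02d}"

# Hypothetical example: a stay-at-home order filed under "community control".
record = NPIRecord("CC", 1, "stay-at-home order")
print(record.code(), "-", NPI_CATEGORIES[record.category])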
{"title":"Classification of Non-pharmacological Interventions for Managing Pandemics","authors":"P. Goldschmidt","doi":"10.54364/aaiml.2021.1116","DOIUrl":"https://doi.org/10.54364/aaiml.2021.1116","url":null,"abstract":"Pandemics have altered the course of human history. The overarching goal of pandemic response management is to contain pandemogen spread as quickly and as completely as possible; it is not only the first line of defense, it is the only defense. At the start, only non-pharmaceutical interventions (NPI) may be available. There is no classification scheme for NPI. This article 1) proposes both a classification scheme for NPI and a functional way of coding them for descriptive and analytic purposes and 2) by describing the classification scheme, builds an initial inventory of NPI. For classification purposes, NPI can be organized according to the following broad categories: 1) community control, 2) moving and mixing, 3) testing and tracing, 4) personal performance, 5) environmental engineering, 6) bodies and burials, and 7) infection interdiction. Classification facilitates describing and analyzing NPI (eg, comparing how countries used different NPI to respond to Covid-19 and to evaluate their effectiveness). Next steps may include 1) elaborating the classification scheme and coding structure in operation detail as an international standard and 2) maintaining a corresponding set of standard definitions. In the interim, any entity could apply the scheme to suit its purposes.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121123718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Accurate Are Fuzzy Control Recommendations: Interval-Valued Case
DOI: 10.54364/aaiml.2021.1102
J. C. García, V. Kreinovich
As a result of applying fuzzy rules, we get a fuzzy set describing possible control values. In automatic control systems, we need to defuzzify this fuzzy set, i.e., to transform it into a single control value. One of the most frequently used defuzzification techniques is centroid defuzzification. From the practical viewpoint, an important question is: how accurate is the resulting control recommendation? The more accurately we need to implement the control, the more expensive the resulting controller. The possibility to gauge the accuracy of the fuzzy control recommendation follows from the fact that, from the mathematical viewpoint, centroid defuzzification is equivalent to transforming the fuzzy set into a probability distribution and computing the mean value of the control. In view of this interpretation, a natural measure of accuracy of a fuzzy control recommendation is the standard deviation of the corresponding random variable. Computing this standard deviation is straightforward for the traditional [0, 1]-based fuzzy logic, in which all experts’ degrees of confidence are represented by numbers from the interval [0, 1]. In practice, however, an expert usually cannot describe his/her degree of confidence by a single number; a more appropriate way is to allow the expert to mark an interval of possible degrees. In this paper, we provide an efficient algorithm for estimating the accuracy of fuzzy control recommendations under such interval-valued fuzzy uncertainty.
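A minimal numerical sketch of the straightforward [0, 1]-valued computation described above: the membership function is normalized into a probability distribution, its mean gives the centroid defuzzification output, and its standard deviation serves as the accuracy measure. The control domain and membership values below are invented for illustration; the paper's efficient interval-valued algorithm is not reproduced here.

import numpy as np

# Discretized control domain and an illustrative membership function mu(x).
x = np.linspace(0.0, 10.0, 101)
mu = np.exp(-0.5 * ((x - 6.0) / 1.5) ** 2)        # assumed fuzzy set, for illustration only

# Centroid defuzzification = mean of the induced probability distribution.
p = mu / mu.sum()                                 # normalize memberships into probabilities
u_mean = np.sum(p * x)                            # recommended control value
u_std = np.sqrt(np.sum(p * (x - u_mean) ** 2))    # accuracy measure: standard deviation

print(f"control recommendation: {u_mean:.3f} +/- {u_std:.3f}")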
{"title":"How Accurate Are Fuzzy Control Recommendations: Interval-Valued Case","authors":"J. C. García, V. Kreinovich","doi":"10.54364/aaiml.2021.1102","DOIUrl":"https://doi.org/10.54364/aaiml.2021.1102","url":null,"abstract":"As a result of applying fuzzy rules, we get a fuzzy set describing possible control values. In automatic control systems, we need to defuzzify this fuzzy set, i.e., to transform it to a single control value. One of the most frequently used defuzzification techniques is centroid defuzzification. From the practical viewpoint, an important question is: how accurate is the resulting control recommendation? The more accurately we need to implement the control, the more expensive the resulting controller. The possibility to gauge the accuracy of the fuzzy control recommendation follows from the fact that, from the mathematical viewpoint, centroid defuzzification is equivalent to transforming the fuzzy set into a probability distribution and computing the mean value of control. In view of this interpretation, a natural measure of accuracy of a fuzzy control recommendation is the standard deviation of the corresponding random variable. Computing this standard deviation is straightforward for the traditional [0, 1]-based fuzzy logic, in which all experts’ degree of confidence are represented by numbers from the interval [0, 1]. In practice, however, an expert usually cannot describe his/her degree of confidence by a single number, a more appropriate way to describe his/her confidence is by allowing to mark an interval of possible degrees. In this paper, we provide an efficient algorithm for estimating the accuracy of fuzzy control recommendations under such interval-valued fuzzy uncertainty.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128770365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generation of Head Mirror Behavior and Facial Expression for Humanoid Robots
DOI: 10.54364/aaiml.2021.1110
Yizhou Chen, Xiaofeng Liu, Jie Li, Tingting Zhang, A. Cangelosi
{"title":"Generation of Head Mirror Behavior and Facial Expression for Humanoid Robots","authors":"Yizhou Chen, Xiaofeng Liu, Jie Li, Tingting Zhang, A. Cangelosi","doi":"10.54364/aaiml.2021.1110","DOIUrl":"https://doi.org/10.54364/aaiml.2021.1110","url":null,"abstract":"","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121419679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Missing Data Recovery in the E-health Context Based on Machine Learning Models
DOI: 10.54364/aaiml.2022.1135
Inès Rahmany, Sami Mahfoudhi, Mushira Freihat, T. Moulahi
Diabetes mellitus is a set of metabolic illnesses characterized by abnormally high blood sugar levels. In 2017, 8.8% of the world’s population had diabetes. By 2045, this percentage is expected to rise to approximately 10%. Missing data, a prevalent problem even in a well-designed and controlled study, can have a major impact on the conclusions that can be derived from the available data. Missing data may decrease a study’s statistical validity and lead to erroneous results due to distorted estimations. In this study, we hypothesize that (a) replacing missing values using machine learning techniques rather than the mean or group mean and (b) using an SVM classifier with an RBF kernel will result in the highest level of accuracy in comparison to traditional techniques such as DT, RF, NB, SVM, AdaBoost, and ANN. The classification results improved significantly when regression was used to replace the missing values rather than the group median or the mean. This is a 10% improvement over previously developed strategies reported in the literature.
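The following is a hedged scikit-learn sketch of the kind of pipeline the abstract describes: impute missing values with a regression-based imputer rather than the column mean, then classify with an RBF-kernel SVM. The feature matrix X (with np.nan for missing entries), the labels y, the imputer settings, and the absence of hyperparameter tuning are assumptions for illustration, not the authors' exact configuration.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_imputation(X, y):
    """Compare mean imputation against regression-based imputation, each followed by an RBF SVM."""
    for name, imputer in [
        ("mean imputation", SimpleImputer(strategy="mean")),
        ("regression imputation", IterativeImputer(random_state=0)),
    ]:
        model = make_pipeline(imputer, StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean 5-fold accuracy = {scores.mean():.3f}")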
{"title":"Missing Data Recovery in the E-health Context Based on Machine Learning Models","authors":"Inès Rahmany, Sami Mahfoudhi, Mushira Freihat, T. Moulahi","doi":"10.54364/aaiml.2022.1135","DOIUrl":"https://doi.org/10.54364/aaiml.2022.1135","url":null,"abstract":"Diabetes mellitus is a set of metabolic illnesses characterized by abnormally high blood sugar levels. In 2017, 8.8% of the world’s population had diabetes. By 2045, it is expected that this percentage will have risen to approximately 10%. Missing data, a prevalent problem even in a well-designed and controlled study, can have a major impact on the conclusions that can be derived from the available data. Missing data may decrease a study’s statistical validity and lead to erroneous results due to distorted estimations. In this study, we hypothesize that (a) replacing missing values using machine learning techniques rather than the mean value and group mean value and (b) using SVM kernel RBF classifier will result in the highest level of accuracy in comparison to traditional techniques such as DT, RF, NB, SVM, AdaBoost, and ANN. The classification results improved significantly when using regression to replace the missing values over the group median or the mean. This is a 10% improvement over previously developed strategies that have been reported in the literature.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126338356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Neural Architectures to Model Complex Dynamical Systems
DOI: 10.54364/aaiml.2022.1124
N. Gabriel, Neil F Johnson
The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data for downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to the graph-structured data of complex systems. However, DL methods still appear largely disconnected from established scientific knowledge, and their contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning and, more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.
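As a small illustration of how graph-structured observations of a complex system can feed a learned model, the following is a single message-passing (graph convolution-style) update written in plain NumPy. It is a generic sketch, not the architecture proposed in this paper; the toy adjacency matrix, node states, and weights are invented.

import numpy as np

def message_passing_step(A, H, W):
    """One graph convolution-style update: mean-aggregate neighbour states, then apply
    a learned linear transform and a ReLU. A is the (n, n) adjacency matrix of the
    interaction graph, H the (n, d) node-state matrix, W a (d, d_out) weight matrix."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops so each node keeps its own state
    deg = A_hat.sum(axis=1, keepdims=True)
    H_agg = (A_hat / deg) @ H               # row-normalized neighbourhood average
    return np.maximum(H_agg @ W, 0.0)       # linear transform + ReLU

# Toy 3-node system with 2-dimensional node states and random weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 2))
W = np.random.default_rng(1).normal(size=(2, 2))
print(message_passing_step(A, H, W))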
{"title":"Using Neural Architectures to Model Complex Dynamical Systems","authors":"N. Gabriel, Neil F Johnson","doi":"10.54364/aaiml.2022.1124","DOIUrl":"https://doi.org/10.54364/aaiml.2022.1124","url":null,"abstract":"The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable successin leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to graph structured data of complex systems. However, DL methods still appear largely disjointed with established scientific knowledge, and the contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning, and more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132661450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Managing the Bottleneck with PCB - Consequences of a Comprehensive Field Study
DOI: 10.54364/aaiml.2023.1154
Bernd Langer, Bernd Gems, Maik Mussler, K. Schmahl, C. Roser
Purpose - Increasing productivity continues to be essential for survival. With the Production Cultural Biorhythm (PCB), we enable the recognition and use of previously hidden potential of up to 80% additional performance.
Design/methodology/approach - This paper describes the results of a quantitative field study conducted in over 100 manufacturing companies. Critical metrics were recorded at short time intervals over a period of months, then averaged to produce a standard day.
Findings - Specific patterns emerged that make corporate cultural behavior visible. The field study also identified six basic patterns across companies. Working with these basic patterns, in combination with a newly developed visualization, especially at bottlenecks, enables a phase-centered and thus simplified leadership style.
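The averaging step described above (metrics sampled at short intervals over months, then collapsed into one "standard day") can be sketched in a few lines of pandas; the column names, the hourly resolution, and the metric shown are assumptions for illustration rather than the study's actual data layout.

import pandas as pd

def standard_day(df: pd.DataFrame, metric: str = "output_per_hour") -> pd.Series:
    """Average a long-running, timestamped production metric by time of day,
    yielding one 'standard day' profile over the whole recording period."""
    df = df.set_index(pd.to_datetime(df["timestamp"]))
    return df[metric].groupby(df.index.hour).mean()   # 24-value hourly profile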
{"title":"Managing the Bottleneck with PCB - Consequences of a Comprehensive Field Study","authors":"Bernd Langer, Bernd Gems, Maik Mussler, K. Schmahl, C. Roser","doi":"10.54364/aaiml.2023.1154","DOIUrl":"https://doi.org/10.54364/aaiml.2023.1154","url":null,"abstract":"Purpose - Increasing productivity continues to be essential for survival. With the Production Cultural Biorhythm (PCB) we enable the recognition and use of previously hidden potentials of up to 80% additional performance. Design/methodology/approach - This paper describes the results of a quantitative field study conducted in over 100 manufacturing companies. Critical metrics were recorded at short time intervals over months, then averaged to produce a standard day. Findings - Specific patterns emerged that make corporate cultural behavior visible. The field study also identified six basic patterns across companies. Working with these basic patterns, in combination with a developed visualization especially at bottlenecks, enables a phase-centered and thus simplified leadership style.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128698738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strengthening Machine Learning Reproducibility for Image Classification
DOI: 10.54364/aaiml.2022.1132
G. Shao, H. Zhang, J. Shao, K. Woeste, Lina Tang
Machine learning (ML) reproducibility needs to be informed by reliable evaluation measures. However, routine image classification is evaluated using metrics that are highly sensitive to class prevalence. Consequently, the reproducibility of ML models remains unclear due to class imbalance-induced noise. We suggest regularly using class imbalance-resistant evaluation metrics, including balanced accuracy, area under the precision-recall curve, and image classification efficacy, for evaluating the reproducibility of ML models. Each of these evaluation metrics is conceptually consistent with and logically complements the others, and their joint use can help explain different aspects of classification performance at both the whole-class and individual-class levels. These metrics can be used for the validation, testing, and/or transfer of ML classifiers. Comprehensive analysis using these metrics as a routine approach strengthens the reproducibility of ML models.
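Two of the metrics named above are available directly in scikit-learn; the short sketch below computes balanced accuracy and the area under the precision-recall curve on made-up, imbalanced labels. The third metric, image classification efficacy, is specific to the authors' proposal and is not reproduced here.

from sklearn.metrics import average_precision_score, balanced_accuracy_score

y_true   = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]                       # imbalanced ground truth
y_pred   = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]                       # hard predictions
y_scores = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.6, 0.9, 0.4]   # positive-class scores

print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("area under the precision-recall curve:", average_precision_score(y_true, y_scores))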
{"title":"Strengthening Machine Learning Reproducibility for Image Classification","authors":"G. Shao, H. Zhang, J. Shao, K. Woeste, Lina Tang","doi":"10.54364/aaiml.2022.1132","DOIUrl":"https://doi.org/10.54364/aaiml.2022.1132","url":null,"abstract":"Machine learning (ML) reproducibility needs to be informed with reliable evaluation measures. However, routine image classification is evaluated using metrics that are highly sensitive to class prevalence. Consequently, the reproducibility of ML models remains unclear due to class imbalance-induced noise. We suggest regularly using class imbalance-resistant evaluation metrics, including balanced accuracy, area under precision-recall curve, and image classification efficacy, for the evaluation of the reproducibility of ML models. Each of these evaluation metrics is conceptually consistent with and logically complements the others, and their joint use can help explain different aspects of classification performance at the whole-class level and individual class level. These metrics can be used for the validation, testing, and/or transfer of ML classifiers. Comprehensive analysis using these metrics as a routine approach strengthens the reproducibility of ML models.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114159168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Pedestrian Detection Performance in the Absence of Rich Training Datasets: A UK Case Study
DOI: 10.54364/aaiml.2022.1121
Juliana Negrini de Araujo, V. Palade, Tabassom Sedighi, A. Daneshkhah
The World Health Organization estimates that well over one million lives are lost each year due to road traffic accidents. Since the human factor is the preeminent cause of traffic accidents, the development of reliable Advanced Driver Assistance Systems (ADASs) and Autonomous Vehicles (AVs) is seen by many as a possible way to improve road safety. ADASs rely on the car's perception system, which consists of camera(s), LIDAR and/or radar, to detect pedestrians and other objects on the road. Hardware improvements, together with advances in applying Deep Learning techniques to object detection, popularized Convolutional Neural Networks in autonomous driving research and applications. However, the availability of large, high-quality datasets remains one of the most important contributors to a Deep Learning model's performance. With this in mind, this work analyses how a YOLO-based object detection architecture responds to training data that is limited in quantity and contains low-quality images. The work focuses on pedestrian detection, since the safety of vulnerable road users is a major concern within the AV and ADAS research communities. The proposed model was trained and tested on data gathered from the city streets of Coventry, United Kingdom. The results show that the original YOLOv3 implementation reaches 42.18% average precision (AP), with the main challenge being the detection of small objects. Network modifications were made, and our final model, based on the original YOLOv3 implementation, achieved 51.6% AP. It is also demonstrated that the employed data augmentation approach is responsible for doubling the average precision of the final model.
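As a hedged illustration of the kind of simple augmentation that can stretch a small pedestrian-detection dataset, the sketch below horizontally flips an image while mirroring its bounding boxes and applies a random brightness jitter. It is not the authors' augmentation pipeline, whose details are not given in the abstract.

import numpy as np

def hflip_with_boxes(image: np.ndarray, boxes: np.ndarray):
    """Flip an (H, W, C) image horizontally and mirror its [x_min, y_min, x_max, y_max] boxes."""
    w = image.shape[1]
    flipped = image[:, ::-1, :].copy()
    new_boxes = boxes.astype(float).copy()
    new_boxes[:, [0, 2]] = w - boxes[:, [2, 0]]      # mirror and swap the x coordinates
    return flipped, new_boxes

def brightness_jitter(image: np.ndarray, rng: np.random.Generator, max_delta: float = 0.2):
    """Scale pixel intensities by a random factor to simulate varying lighting conditions."""
    factor = 1.0 + rng.uniform(-max_delta, max_delta)
    return np.clip(image.astype(float) * factor, 0, 255).astype(image.dtype)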
{"title":"Improving the Pedestrian Detection Performance in the Absence of Rich Training Datasets: A UK Case Study","authors":"Juliana Negrini de Araujo, V. Palade, Tabassom Sedighi, A. Daneshkhah","doi":"10.54364/aaiml.2022.1121","DOIUrl":"https://doi.org/10.54364/aaiml.2022.1121","url":null,"abstract":"The World Health Organization estimates that well in excess of one million of lives are lost each year due to road traffic accidents. Since the human factor is the preeminent cause behind the traffic accidents, the development of reliable Advanced Driver Assistance Systems (ADASs) and Autonomous Vehicles (AVs) is seen by many as a possible solution to improve road safety. ADASs rely on the car perception system input that consists of camera(s), LIDAR and/or radar to detect pedestrians and other objects on the road. Hardware improvements as well as advances done in employing Deep Learning techniques for object detection popularized the Convolutional Neural Networks in the area of autonomous driving research and applications. However, the availability of quality and large datasets continues to be a most important contributor to the Deep Learning based model’s performance. With this in mind, this work analyses how a YOLO-based object detection architecture responded to limited data available for training and containing low-quality images. The work focused on pedestrian detection, since vulnerable road user’s safety is a major concern within AV and ADAS research communities. The proposed model was trained and tested on data gathered from Coventry, United Kingdom, city streets. The results show that the original YOLOv3 implementation reaches a 42.18% average precision (AP) and the main challenge was in detecting small objects. Network modifications were made and our final model, based on the original YOLOv3 implementation, achieved 51.6% AP. It is also demonstrated that the employed data augmentation approach is responsible for doubling the average precision of the final model.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131860895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Pre-Diagnosis Tool Based on Machine Learning Algorithms on the BHK Test to Improve the Diagnosis of Dysgraphia
DOI: 10.54364/aaiml.2021.1108
Louis Deschamps, Louis Devillaine, C. Gaffet, R. Lambert, Saifeddine Aloui, J. Boutet, Vincent Brault, E. Labyt, C. Jolly
Dysgraphia is a writing disorder that affects a significant part of the population, especially school-aged children and particularly boys. Nowadays, dysgraphia is insufficiently diagnosed, partly because of the cumbersomeness of the existing tests. This study aims to develop an automated pre-diagnosis tool for dysgraphia that allows wide screening among children. Indeed, wider screening of the population would allow better care for children with handwriting deficits. This study is based on the world’s largest known database of handwriting samples and uses supervised learning algorithms (Support Vector Machine). Four graphic tablets and two acquisition software solutions were used in order to ensure that the tool is not tablet-dependent and can be used widely. A total of 580 children from 2nd to 5th grade, among which 122 with dysgraphia, were asked to perform the French version of the BHK test on a graphic tablet. Almost a hundred features were derived from these written tracks. The hyperparameters of the SVM and the features that most discriminate between children with and without dysgraphia were selected on the training dataset, which comprised 80% of the database (461 children). With these hyperparameters and features, the performance on the test dataset (119 children) was a sensitivity of 91% and a specificity of 81% for the detection of children with dysgraphia. Thus, our tool has an accuracy level similar to that of a human examiner. Moreover, it is widely usable because of its independence from the tablet, the acquisition software, and the age of the children, thanks to a careful calibration and the use of a moving z-score calculation.
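The following is a compact sketch of the kind of SVM workflow the abstract describes: an 80/20 split, feature selection and hyperparameter choice on the training portion, and sensitivity/specificity computed on the held-out children. The feature selector, the hyperparameter grid, and the feature matrix X / labels y are assumptions rather than the study's exact protocol.

from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_and_evaluate(X, y):
    # 80/20 split, mirroring the train/test proportions mentioned above.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif), SVC(kernel="rbf"))
    grid = GridSearchCV(pipe, {"selectkbest__k": [10, 20, 40],
                               "svc__C": [1, 10, 100],
                               "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)
    grid.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, grid.predict(X_te)).ravel()
    print(f"sensitivity: {tp / (tp + fn):.2f}, specificity: {tn / (tn + fp):.2f}")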
{"title":"Development of a Pre-Diagnosis Tool Based on Machine Learning Algorithms on the BHK Test to Improve the Diagnosis of Dysgraphia","authors":"Louis Deschamps, Louis Devillaine, C. Gaffet, R. Lambert, Saifeddine Aloui, J. Boutet, Vincent Brault, E. Labyt, C. Jolly","doi":"10.54364/aaiml.2021.1108","DOIUrl":"https://doi.org/10.54364/aaiml.2021.1108","url":null,"abstract":"Dysgraphia is a writing disorder that affects a significant part of the population, especially school aged children and particularly boys. Nowadays, dysgraphia is insufficiently diagnosed, partly because of the cumbersomeness of the existing tests. This study aims at developing an automated pre-diagnosis tool for dysgraphia allowing a wide screening among children. Indeed, a wider screening of the population would allow a better care for children with handwriting deficits. This study is based on the world’s largest known database of handwriting samples and uses supervised learning algorithms (Support Vector Machine). Four graphic tablets and two acquisition software solutions were used, in order to ensure that the tool is not tablet dependent and can be used widely. A total of 580 children from 2nd to 5th grade, among which 122 with dysgraphia, were asked to perform the French version of the BHK test on a graphic tablet. Almost a hundred features were developed from these written tracks. The hyperparameters of the SVM and the most discriminating features between children with and without dysgraphia were selected on the training dataset comprised of 80% of the database (461 children). With these hyperparameters and features, the performances on the test dataset (119 children) were a sensitivity of 91% and a specificity of 81% for the detection of children with dysgraphia. Thus, our tool has an accuracy level similar to a human examiner. Moreover, it is widely usable, because of its independence to the tablet, to the acquisition software and to the age of the children thanks to a careful calibration and the use of a moving z-score calculation.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130278830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discriminating Beef Producing Countries by Multi-Element Analysis and Machine Learning
DOI: 10.54364/aaiml.2021.1101
E. A. N. Fernandes, Yuniel T. Mazola, G. A. Sarriés, M. Bacchi, P. Bode, Cláudio L. Gonzaga, Silvana R. V. Sarriés
{"title":"Discriminating Beef Producing Countries by Multi-Element Analysis and Machine Learning","authors":"E. A. N. Fernandes, Yuniel T. Mazola, G. A. Sarriés, M. Bacchi, P. Bode, Cláudio L. Gonzaga, Silvana R. V. Sarriés","doi":"10.54364/aaiml.2021.1101","DOIUrl":"https://doi.org/10.54364/aaiml.2021.1101","url":null,"abstract":"","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133588977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}