Previsão da duração de carregamentos de embarcações PLSV
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227313
Rachel Martins Ventriglia, L. Bastos, Karla Figueiredo, Marley Vallasco
Pipe-Laying Support Vessels (PLSVs) perform subsea interconnection tasks that require a variety of material resources. These resources are loaded onto the vessels, and loading planning is currently done heuristically, with high error rates of around 84%. To support this operational planning, this work investigated and selected several machine learning models to predict loading duration. The best-performing models on the test set were Gradient Boosting, Linear Regression, and the Stacking Regressor, with a mean absolute percentage error of at most 36% on the test data.
{"title":"Previsão da duração de carregamentos de embarcações PLSV","authors":"Rachel Martins Ventriglia, L. Bastos, Karla Figueiredo, Marley Vallasco","doi":"10.5753/eniac.2022.227313","DOIUrl":"https://doi.org/10.5753/eniac.2022.227313","url":null,"abstract":"As embarcações Pipe-laying Support Vessel (PLSV) realizam tarefas de interligação submarinas, que necessitam de diversos recursos materiais. Estes recursos são carregados nos navios, e atualmente o planejamento dos carregamentos é resolvido de forma heurística, com taxas de erros altas, em torno de 84%. Com o objetivo de auxiliar neste planejamento operacional, este trabalho propôs a investigação e seleção de diversos modelos de aprendizado de máquina para prever a duração dos carregamentos. Os modelos que apresentaram melhor desempenho na base de teste foram o Gradient Boosting, Regressão Linear e o Stacking Regressor, com um erro percentual médio absoluto de no máximo 36% nos dados de teste.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129028942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Deep Learning Models for Aircraft Maintenance
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227575
Humberto Hayashi Sano, Lilian Berton
Neural networks provide useful approaches for solving complex nonlinear problems. These models offer a feasible way to support aircraft maintenance, especially health monitoring and fault detection. The technical complexity of aircraft systems poses many challenges for maintenance lines that need to optimize time, efficiency, and consistency. In this work, we first employ Convolutional Neural Networks (CNN) and Multi-Layer Perceptrons (MLP) to classify faults in aircraft Pressure Regulated Shutoff Valves (PRSOV). We classify a wide range of defects, such as Friction, Charge, and Discharge faults, considering both single and multiple failures. As a result of this work, we observed a significant improvement in classification accuracy when applying neural networks such as the MLP (0.9962) and CNN (0.9937) compared to a KNN baseline (0.8788).
{"title":"Application of Deep Learning Models for Aircraft Maintenance","authors":"Humberto Hayashi Sano, Lilian Berton","doi":"10.5753/eniac.2022.227575","DOIUrl":"https://doi.org/10.5753/eniac.2022.227575","url":null,"abstract":"Neural networks provide useful approaches for determining solutions to complex nonlinear problems. The use of these models offers a feasible approach to help aircraft maintenance, especially health monitoring and fault detection. The technical complexity of aircraft systems poses many challenges for maintenance lines that need to optimize time, efficiency, and consistency. In this work, we first employ Convolutional Neural Networks (CNN), and Multi-Layer Perceptron (MLP) for the classification of aircraft Pressure Regulated Shutoff Valves (PRSOV). We classify a wide range of defects such as Friction, Charge and Discharge faults considering single and multi-failures. As a result of this work, we observed a significant improvement in the classification accuracy in the case of applying neural networks such as MLP (0.9962) and CNN (0.9937) when compared to a baseline KNN (0.8788).","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131855425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the evaluation of example-dependent cost-sensitive models for tax debts classification
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227607
H. S. Lima, Damires Fernandes, Thiago J. M. Moura
Example-dependent cost-sensitive classification methods are suitable for many real-world classification problems in which the misclassification costs vary from example to example. Tax administration applications belong to this class of problems, since they deal with widely varying amounts involved in tax payments. To help matters, this work presents an experimental evaluation that aims to verify whether cost-sensitive learning algorithms are, on average, more cost-effective than traditional ones. This task is carried out in a tax administration application domain, which requires a cost matrix based on debt values. The results show that cost-sensitive methods avoid situations such as erroneously granting a request involving a debt of millions of reals. In terms of the savings score, the cost-sensitive classification methods achieved higher results than their traditional counterparts.
{"title":"On the evaluation of example-dependent cost-sensitive models for tax debts classification","authors":"H. S. Lima, Damires Fernandes, Thiago J. M. Moura","doi":"10.5753/eniac.2022.227607","DOIUrl":"https://doi.org/10.5753/eniac.2022.227607","url":null,"abstract":"Example-dependent cost-sensitive classification methods are suitable to many real-world classification problems, where the costs, due to misclassification, vary among every example of a dataset. Tax administration applications are included in this segment of problems, since they deal with different values involved in the tax payments. To help matters, this work presents an experimental evaluation which aims to verify whether cost-sensitive learning algorithms are more cost-effective on average than traditional ones. This task is accomplished in a tax administration application domain, what implies the need of a cost-matrix regarding debt values. The obtained results show that cost-sensitive methods avoid situations like erroneously granting a request with a debt involving millions of reals. Considering the savings score, the cost-sensitive classification methods achieved higher results than their traditional method versions.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127918670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Learned OWA Operators in Pooling and Channel Aggregation Layers in Convolutional Neural Networks
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227310
Leonam R. S. Miranda, F. G. Guimarães
Promising results have been obtained in recent years by using OWA operators, with trained weights, to aggregate data within CNN pooling layers instead of the more usual max and mean operators. OWA operators have also been used to learn channel-wise information from a given layer, with the newly generated information complementing the input of the following layer. The purpose of this article is to analyze and combine these two ideas. In addition to using the channel-wise information generated by trainable OWA operators to complement the input data, replacing the input data is also analyzed. Several tests were carried out to evaluate the performance change when applying OWA operators to image classification with the VGG13 model.
{"title":"Application of Learned OWA Operators in Pooling and Channel Aggregation Layers in Convolutional Neural Networks","authors":"Leonam R. S. Miranda, F. G. Guimarães","doi":"10.5753/eniac.2022.227310","DOIUrl":"https://doi.org/10.5753/eniac.2022.227310","url":null,"abstract":"Promising results have been obtained in recent years when using OWA operators to aggregate data within CNNs pool layers, training their weights, instead of using the more usual operators (max and mean). OWA operators were also used to learn channel wise information from a certain layer, and the newly generated information is used to complement the input data for the following layer. The purpose of this article is to analyze and combine the two mentioned ideas. In addition to using the channel wise information generated by trainable OWA operators to complement the input data, replacement will also be analyzed. Several tests have been done to evaluate the performance change when applying OWA operators to classify images using VGG13 model.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114156941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K-Nearest Neighbors based on the Nk Interaction Graph
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227174
Gustavo F. C. de Castro, R. Tinós
K-Nearest Neighbors (KNN) is a simple and intuitive nonparametric classification algorithm. In KNN, the K nearest neighbors are determined according to the distance to the example to be classified. Generally, the Euclidean distance is used, which favors the formation of hyper-ellipsoidal clusters. In this work, we propose using the Nk interaction graph to return the K nearest neighbors in KNN. The Nk interaction graph, originally used in clustering, is built from the distance between examples and the spatial density in small groups formed by k examples of the training dataset. By combining distance with spatial density, it is possible to form clusters with arbitrary shapes. We propose two variations of KNN based on the Nk interaction graph; they differ in how the vertices associated with the N examples of the training dataset are visited. The two proposed algorithms are compared to the original KNN in experiments with datasets with different properties.
{"title":"K-Nearest Neighbors based on the Nk Interaction Graph","authors":"Gustavo F. C. de Castro, R. Tinós","doi":"10.5753/eniac.2022.227174","DOIUrl":"https://doi.org/10.5753/eniac.2022.227174","url":null,"abstract":"The K-Nearest Neighbors (KNN) is a simple and intuitive nonparametric classification algorithm. In KNN, the K nearest neighbors are determined according to the distance to the example to be classified. Generally, the Euclidean distance is used, which facilitates the formation of hyper-ellipsoid clusters. In this work, we propose using the Nk interaction graph to return the K-nearest neighbors in KNN. The Nk interaction graph, originally used in clustering, is built based on the distance between examples and spatial density in small groups formed by k examples of the training dataset. By using the distance combined with the spatial density, it is possible to form clusters with arbitrary shapes. We propose two variations of the KNN based on the Nk interaction graph. They differ in the way in which the vertices associated with the N examples of the training dataset are visited. The two proposed algorithms are compared to the original KNN in experiments with datasets with different properties.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126420953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Potential of Federated Learning for Maize Leaf Disease Prediction
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227293
Thalita Mendonça Antico, L. F. R. Moreira, Rodrigo Moreira
Machine-learning-based diagnosis of diseases in food crops has shown itself to be satisfactory and suitable for large-scale use. Convolutional Neural Networks (CNNs) predict diseases accurately from images of crop leaves and have been extensively refined in the literature. However, these machine learning techniques fall short on data privacy, as they require sharing the training data with a central server, disregarding competitive or regulatory concerns. Federated Learning (FL) aims to support distributed training to address these recognized gaps of centralized training. As far as we know, this paper is the first to apply and evaluate FL for maize leaf diseases. We evaluated the performance of five CNNs trained under the distributed paradigm and measured their training time against their classification performance. In addition, we assess the suitability of distributed training in terms of the volume of network traffic and the number of parameters of each CNN. Our results indicate that FL potentially enhances data privacy in heterogeneous domains.
{"title":"Evaluating the Potential of Federated Learning for Maize Leaf Disease Prediction","authors":"Thalita Mendonça Antico, L. F. R. Moreira, Rodrigo Moreira","doi":"10.5753/eniac.2022.227293","DOIUrl":"https://doi.org/10.5753/eniac.2022.227293","url":null,"abstract":"The diagnosis of diseases in food crops based on machine learning seemed satisfactory and suitable for use on a large scale. The Convolutional Neural Networks (CNNs) perform accurately in the disease prediction considering the image capture of the crop leaf, being extensively enhanced in the literature. These machine learning techniques fall short in data privacy, as they require sharing the data in the training process with a central server, disregarding competitive or regulatory concerns. Thus, Federated Learning (FL) aims to support distributed training to address recognized gaps in centralized training. As far as we know, this paper inaugurates the use and evaluation of FL applied in maize leaf diseases. We evaluated the performance of five CNNs trained under the distributed paradigm and measured their training time compared to the classification performance. In addition, we consider the suitability of distributed training considering the volume of network traffic and the number of parameters of each CNN. Our results indicate that FL potentially enhances data privacy in heterogeneous domains.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121141537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algoritmo de Ensemble para Classificação em Fluxo de Dados com Classes Desbalanceadas e Mudanças de Conceito
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227356
Douglas Amorim de Oliveira, Karina Valdivia Delgado, M. Lauretto
With the exponential growth in data generation observed in recent decades, performing classification tasks on these data presents several challenges. These datasets are often not balanced with respect to their classes, and the composition of the classes may change over time, a phenomenon known as concept drift. Among the algorithms that aim to address these problems, the Kappa Updated Ensemble (KUE) has shown good performance on data streams with concept drift. Since its original formulation is not designed for imbalanced classes, in this work we modified KUE to make it more robust to class imbalance in the datasets. In experiments on eight datasets with different imbalance rates, the modified KUE outperformed the original version on five datasets and produced statistically equivalent performance on the remaining three. These results are promising and motivate further development of this approach.
{"title":"Algoritmo de Ensemble para Classificação em Fluxo de Dados com Classes Desbalanceadas e Mudanças de Conceito","authors":"Douglas Amorim de Oliveira, Karina Valdivia Delgado, M. Lauretto","doi":"10.5753/eniac.2022.227356","DOIUrl":"https://doi.org/10.5753/eniac.2022.227356","url":null,"abstract":"Com o crescimento exponencial na geração de dados observado nas últimas décadas, a realização de tarefas de classificação sobre esses dados apresenta diversos desafios. Estes conjuntos de dados, por vezes, não são balanceadas quanto às suas classes e podem ocorrer alterações da formação das classes ao longo do tempo, chamadas de mudança de conceito. Dentre os algoritmos que visam solucionar esses problemas, o Kappa Updated Ensemble (KUE) tem apresentado bom desempenho em fluxo de dados com mudança de conceito. Como sua formulação original não é projetada para classes desbalanceadas, neste trabalho foram realizadas modificações no KUE afim de torná-lo mais robusto e aderente ao cenário de desbalanceamento nas bases de dados. Em experimentos realizados sobre oito conjuntos de dados com diferentes taxas de desbalanceamentos, o KUE modificado superou a versão original em cinco conjuntos de dados e produziu desempenho estatisticamente equivalente nos três restantes. Estes resultados são promissores e motivam novos desenvolvimentos para esta abordagem.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123345207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning for noisy multivariate time series classification: a comparison and practical evaluation
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227600
A. P. S. Silva, Lucas R. Abbade, R. D. S. Cunha, T. M. Suller, Eric O. Gomes, E. Gomi, A. H. R. Costa
Multivariate Time Series Classification (MTSC) is a complex problem that has seen great advances in recent years through the application of state-of-the-art machine learning techniques. However, there is still a need for a thorough evaluation of the effect of signal noise on the classification performance of MTSC techniques. To this end, in this paper we evaluate three current and effective MTSC classifiers – DDTW, ROCKET and InceptionTime – and propose their use in a real-world classification problem: the detection of mooring line failure in offshore platforms. We show that all of them achieve state-of-the-art accuracy, with ROCKET presenting very good results and InceptionTime being marginally more accurate and resilient to noise.
{"title":"Machine learning for noisy multivariate time series classification: a comparison and practical evaluation","authors":"A. P. S. Silva, Lucas R. Abbade, R. D. S. Cunha, T. M. Suller, Eric O. Gomes, E. Gomi, A. H. R. Costa","doi":"10.5753/eniac.2022.227600","DOIUrl":"https://doi.org/10.5753/eniac.2022.227600","url":null,"abstract":"Multivariate Time Series Classification (MTSC) is a complex problem that has seen great advances in recent years from the application of state-of-the-art machine learning techniques. However, there is still a need for a thorough evaluation of the effect of signal noise in the classification performance of MTSC techniques. To this end, in this paper, we evaluate three current and effective MTSC classifiers – DDTW, ROCKET and InceptionTime – and propose their use in a real-world classification problem: the detection of mooring line failure in offshore platforms. We show that all of them feature state-of-the-art accuracy, with ROCKET presenting very good results, and InceptionTime being marginally more accurate and resilient to noise.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134165928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Level Stacking
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227346
Fabiana Coutinho Boldrin, Adriano Henrique Cantão, R. Tinós, J. A. Baranauskas
Stacking is one of the algorithms that combine the results of different classifiers generated from the same training set. To explore some aspects of the stacking algorithm, such as the number of learning levels (layers), the number of classifiers per level, and the algorithms used, multi-level stacking was proposed. In this work, experiments were carried out using three different types of inducers on different datasets with two learning levels.
{"title":"Multi-Level Stacking","authors":"Fabiana Coutinho Boldrin, Adriano Henrique Cantão, R. Tinós, J. A. Baranauskas","doi":"10.5753/eniac.2022.227346","DOIUrl":"https://doi.org/10.5753/eniac.2022.227346","url":null,"abstract":"Stacking é um dos algoritmos que combina os resultados de diferentes classificadores que foram gerados utilizando o mesmo conjunto de treinamento. Com objetivo de explorar alguns aspectos com relação ao algoritmo de stacking como o número de levels (camadas) de aprendizado, o número de classificadores por level e os algoritmos de utilizados, foi proposto o multi-level stacking. Para este trabalho foram feitos experimentos utilizando três tipos diferentes de indutores para diferentes datasets com dois levels de aprendizado.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134362619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing the Parameters of a Decentralized Neural Machine Learning Model
Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022) · Pub Date: 2022-11-28 · DOI: 10.5753/eniac.2022.227342
Aline Ioste, M. Finger
Decentralized machine learning models face a bottleneck of high-cost communication. Trade-offs between communication and accuracy in decentralized learning have been addressed by theoretical approaches. Here we propose a new practical model that performs several local training operations before each communication round, choosing among several options. We show how to determine a configuration that dramatically reduces the communication burden between participant hosts, while the reduced communication still yields robust and accurate results for both IID and non-IID data distributions.
{"title":"Establishing the Parameters of a Decentralized Neural Machine Learning Model","authors":"Aline Ioste, M. Finger","doi":"10.5753/eniac.2022.227342","DOIUrl":"https://doi.org/10.5753/eniac.2022.227342","url":null,"abstract":"The decentralized machine learning models face a bottleneck of high-cost communication. Trade-offs between communication and accuracy in decentralized learning have been addressed by theoretical approaches. Here we propose a new practical model that performs several local training operations before a communication round, choosing among several options. We show how to determine a configuration that dramatically reduces the communication burden between participant hosts, with a reduction in communication practice showing robust and accurate results both to IID and NON-IID data distributions.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132555630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}