Derivation of Mean Value Coordinates Using Interior Distance and Their Application on Mesh Deformation
Pub Date: 2018-07-17 | DOI: 10.22456/2175-2745.76189
Lis Custódio, S. Pesco
Deformation methods based on cage controls have become a subject of considerable interest due to their simplicity and intuitive results. In this technique, the model is enclosed within a simpler mesh (the cage) and its points are expressed as functions of the cage elements. Then, by manipulating the cage, the corresponding deformation is obtained on the model in its interior. In this direction, in recent years, extensions of barycentric coordinates, such as Mean Value coordinates, Positive Mean Value coordinates, Harmonic coordinates, and Green coordinates, have been proposed to write the points of the model as a function of the cage elements. Mean Value coordinates, proposed by Floater in two dimensions and later extended to three dimensions by Ju et al. and also by Floater, stand out from the other coordinates because of their simple derivation. However, negative coordinates in regions bounded by a non-convex control cage result in unexpected behavior of the deformation in some regions of the model. In this work, we propose a modification of the derivation of Mean Value coordinates proposed by Floater. Our derivation maintains the simplicity of the construction of the coordinates and eliminates the undesired behavior in the deformation by diminishing the negative influence of a control vertex on regions of the model not related to it. We also compare the deformation generated with our coordinates to the deformations obtained with the original Mean Value coordinates and Harmonic coordinates.
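For reference, Floater's original 2D construction is compact enough to state in a few lines. The sketch below is a minimal illustration of the classical Mean Value coordinates of a point with respect to a polygonal cage; it is not the modified derivation proposed in the paper, and the square cage in the usage example is purely illustrative.

```python
import numpy as np

def mean_value_coordinates_2d(x, poly):
    """Classical 2D Mean Value coordinates (Floater) of point x with respect
    to the closed polygon 'poly' (n x 2 array of cage vertices).
    Assumes x lies strictly inside the cage and away from its edges."""
    d = poly - x                      # vectors from x to each cage vertex
    r = np.linalg.norm(d, axis=1)     # distances ||v_i - x||
    n = len(poly)
    alpha = np.empty(n)               # angle at x spanned by (v_i, v_{i+1})
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
        alpha[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # Floater's weight: w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / r_i
    w = (np.tan(alpha[np.arange(n) - 1] / 2) + np.tan(alpha / 2)) / r
    return w / w.sum()                # normalized barycentric coordinates

# usage: coordinates of a point inside a square cage
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = mean_value_coordinates_2d(np.array([0.25, 0.4]), square)
print(lam, lam @ square)              # lam @ square reproduces the point
```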
{"title":"Derivation of Mean Value Coordinates Using Interior Distance and Their Application on Mesh Deformation","authors":"Lis Custódio, S. Pesco","doi":"10.22456/2175-2745.76189","DOIUrl":"https://doi.org/10.22456/2175-2745.76189","url":null,"abstract":"The deformation methods based on cage controls became a subject of considerable interest due its simplicity and intuitive results. In this technique, the model is enclosed within a simpler mesh (the cage) and its points are expressed as function of the cage elements. Then, by manipulating the cage, the respective deformation is obtained on the model in its interior.In this direction, in the last years, extensions of barycentric coordinates, such as Mean Value coordinates, Positive Mean Value Coordinates, Harmonic coordinates and Green's coordinates, have been proposed to write the points of the model as a function of the cage elements.The Mean Value coordinates, proposed by Floater in two dimensions and extended later to three dimensions by Ju et al. and also by Floater, stands out from the other coordinates because of their simple derivation. However the existence of negative coordinates in regions bounded by non-convex cage control results in a unexpected behavior of the deformation in some regions of the model.In this work, we propose a modification in the derivation of Mean Value Coordinates proposed by Floater. Our derivation maintains the simplicity of the construction of the coordinates and eliminates the undesired behavior in the deformation by diminishing the negative influence of a control vertex on regions ofthe model not related to it. We also compare the deformation generated with our coordinates and the deformations obtained with the original Mean Value coordinates and Harmonic coordinates.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75921964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Genetic Programming Model for Association Studies to Detect Epistasis in Low Heritability Data
Pub Date: 2018-07-17 | DOI: 10.22456/2175-2745.79333
Igor Magalhães Ribeiro, C. Borges, Bruno Zonovelli Silva, W. Arbex
Genome-wide association studies (GWAS) aim to identify the markers most strongly associated with phenotype values. One of the substantial challenges is finding a non-linear mapping between genotype and phenotype, known as epistasis, which usually makes the search for and identification of functional SNPs more complex. Some diseases, such as cervical cancer, leukemia, and type 2 diabetes, have low heritability. The heritability of the sample is directly related to how much of the phenotype is explained by the genotype: the lower the heritability, the greater the influence of environmental factors and the smaller the genotypic explanation. In this work, an algorithm capable of identifying epistatic associations at different levels of heritability is proposed. The model applies genetic programming with a specialized initialization of the initial population based on a random forest strategy. The initialization step ranks the most important SNPs and increases the probability of their insertion in the initial population of the genetic programming model, so that the identification of causal markers is expected to remain robust across heritability levels. The simulated experiments are of the case-control type, with heritability levels of 0.4, 0.3, 0.2, and 0.1, considering scenarios with 100 and 1000 markers. Our approach was compared with the GPAS software and with a genetic programming algorithm without the initialization step. The results show that an efficient population initialization method based on a ranking strategy is very promising compared to the other models.
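A rough sketch of the ranking idea, assuming scikit-learn and synthetic genotype data; the authors' GP representation is more elaborate, and the pair-of-SNPs "individuals" below are only a placeholder for GP trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# hypothetical case-control data: 500 individuals x 100 SNPs coded 0/1/2
X = rng.integers(0, 3, size=(500, 100))
y = rng.integers(0, 2, size=500)            # 0 = control, 1 = case

# 1) rank SNPs by random-forest importance
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_
prob = importance / importance.sum()         # sampling probabilities

# 2) bias the initial GP population toward highly ranked SNPs:
#    each "individual" is simplified here to a pair of SNP indices
#    whose interaction a GP tree would combine
population = [tuple(rng.choice(100, size=2, replace=False, p=prob))
              for _ in range(50)]
print(population[:5])
```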
{"title":"A Genetic Programming Model for Association Studies to Detect Epistasis in Low Heritability Data","authors":"Igor Magalhães Ribeiro, C. Borges, Bruno Zonovelli Silva, W. Arbex","doi":"10.22456/2175-2745.79333","DOIUrl":"https://doi.org/10.22456/2175-2745.79333","url":null,"abstract":"The genome-wide associations studies (GWAS) aims to identify the most influential markers in relation to the phenotype values. One of the substantial challenges is to find a non-linear mapping between genotype and phenotype, also known as epistasis, that usually becomes the process of searching and identifying functional SNPs more complex. Some diseases such as cervical cancer, leukemia and type 2 diabetes have low heritability. The heritability of the sample is directly related to the explanation defined by the genotype, so the lower the heritability the greater the influence of the environmental factors and the less the genotypic explanation. In this work, an algorithm capable of identifying epistatic associations at different levels of heritability is proposed. The developing model is a aplication of genetic programming with a specialized initialization for the initial population consisting of a random forest strategy. The initialization process aims to rank the most important SNPs increasing the probability of their insertion in the initial population of the genetic programming model. The expected behavior of the presented model for the obtainment of the causal markers intends to be robust in relation to the heritability level. The simulated experiments are case-control type with heritability level of 0.4, 0.3, 0.2 and 0.1 considering scenarios with 100 and 1000 markers. Our approach was compared with the GPAS software and a genetic programming algorithm without the initialization step. The results show that the use of an efficient population initialization method based on ranking strategy is very promising compared to other models.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82170099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relative Scalability of NoSQL Databases for Genotype Data Manipulation
Pub Date: 2018-07-17 | DOI: 10.22456/2175-2745.79334
Arthur Lorenzi, V. Schettino, Thiago Jesus Rodrigues Barbosa, P. F. Freitas, Pedro Gabriel Silva Guimarães, W. Arbex
Genotype data manipulation is one of the greatest challenges in bioinformatics and genomics, mainly because of its high dimensionality and unbalanced characteristics. These peculiarities explain why Relational Database Management Systems (RDBMSs), the de facto standard storage solution, have not proven to be the best tools for this kind of data. However, Big Data has been pushing the development of modern database systems that might be able to overcome the deficiencies of RDBMSs. In this context, we extended our previous work on the evaluation of the relative performance of NoSQL engines from different families, adapting the schema design according to its conclusions in order to achieve better performance and thus store more SNP markers for each individual. Using the Yahoo! Cloud Serving Benchmark (YCSB) framework, we assessed each database system over hypothetical SNP sequences. Results indicate that although Tarantool has the best overall throughput, MongoDB is less affected by the increase in SNP markers per individual.
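The paper's schema is not reproduced here, but a common document-oriented layout keeps one document per individual with the SNP markers embedded. A minimal pymongo sketch, assuming a local MongoDB instance and illustrative database, collection, and field names:

```python
from pymongo import MongoClient

# one document per individual, SNP markers stored as an embedded map;
# connection string and names are illustrative only
client = MongoClient("mongodb://localhost:27017")
genotypes = client["genomics"]["genotypes"]

genotypes.insert_one({
    "individual_id": "IND_0001",
    "snps": {"rs0001": 0, "rs0002": 2, "rs0003": 1},  # 0/1/2 allele counts
})

doc = genotypes.find_one({"individual_id": "IND_0001"})
print(doc["snps"]["rs0002"])
```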
{"title":"Relative Scalability of NoSQL Databases for Genotype Data Manipulation","authors":"Arthur Lorenzi, V. Schettino, Thiago Jesus Rodrigues Barbosa, P. F. Freitas, Pedro Gabriel Silva Guimarães, W. Arbex","doi":"10.22456/2175-2745.79334","DOIUrl":"https://doi.org/10.22456/2175-2745.79334","url":null,"abstract":"Genotype data manipulation is one of the greatest challenges in bioinformatics and genomics mainly because of high dimensionality and unbalancing characteristics. These peculiarities explains why Relational Database Management Systems (RDBMSs), the \"de facto\" standard storage solution, have not been presented as the best tools for this kind of data. However, Big Data has been pushing the development of modern database systems that might be able to overcome RDBMSs deficiencies. In this context, we extended our previous works on the evaluation of relative performance among NoSQLs engines from different families, adapting the schema design in order to achieve better performance based on its conclusions, thus being able to store more SNP markers for each individual. Using Yahoo! Cloud Serving Benchmark (YCSB) benchmark framework, we assessed each database system over hypothetical SNP sequences. Results indicate that although Tarantool has the best overall throughput, MongoDB is less impacted by the increase of SNP markers per individual.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83423340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video Conferencing Evaluation Considering Scalable Video Coding and SDN Network
Pub Date: 2018-07-17 | DOI: 10.22456/2175-2745.79310
Francisco Oliveira, E. Tavares, E. Sousa, B. Nogueira
Video conferencing is very common nowadays, and a single session may involve heterogeneous devices (e.g., smartphones, notebooks, game consoles) and networks. Developing video conferencing systems for this myriad of devices with different capabilities requires special attention from the system designer. Scalable video coding (SVC) is a prominent option to mitigate this heterogeneity issue, but traditional Internet Protocol (IP) networks may not fully benefit from such a technology. In contrast, software-defined networking (SDN) may allow better utilization of SVC and improvements in video conferencing components. This paper evaluates the performance of video conferencing systems adopting SVC, SDN, and ordinary IP networks, taking into account throughput, delay, and peak signal-to-noise ratio (PSNR) as the metrics of interest. The experiments are based on the Mininet framework, and distinct network infrastructures are also considered. Results indicate that SDN with SVC may deliver better video quality with reduced delay and increased throughput.
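Of the three metrics, PSNR is the one given by a closed formula. A minimal per-frame implementation, illustrative only and independent of the evaluation pipeline used in the paper:

```python
import numpy as np

def psnr(reference, decoded, max_value=255.0):
    """Peak signal-to-noise ratio between a reference frame and a decoded
    frame, both given as arrays of the same shape."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")           # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

# toy usage with a random 8-bit frame and a noisy copy of it
ref = np.random.randint(0, 256, size=(720, 1280), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```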
{"title":"Video Conferencing Evaluation Considering Scalable Video Coding and SDN Network","authors":"Francisco Oliveira, E. Tavares, E. Sousa, B. Nogueira","doi":"10.22456/2175-2745.79310","DOIUrl":"https://doi.org/10.22456/2175-2745.79310","url":null,"abstract":"Video conferencing is very common nowadays, and it may contemplate heterogenous devices (e.g., smartphones, notebooks, game consoles) and networks in the same session. Developing video conferencing systems for this myriad of devices with different capabilities requires special attention from system designer. Scalable video coding (SVC) is a prominent option to mitigate this heterogeneity issue, but traditional Internet protocol (IP) networks may not fully benefit from such a technology. In contrast, software-defined networking (SDN) may allow better utilization of SVC and improvements on video conferencing components. This paper evaluates the performance of video conferencing systems adopting SVC, SDN and ordinary IP networks, taking into account throughput, delay and peak signal-to-noise ratio (PSNR) as the metrics of interest. The experiments are based on Mininet framework and distinct network infrastructures are also considered. Results indicate SDN with SVC may deliver better video quality with reduced delay and increased throughput.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72874091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of cytomorphometry for classification of subcellular patterns in 3D images
Pub Date: 2018-07-17 | DOI: 10.22456/2175-2745.80598
Eduardo Henrique Silva, Jefferson R. Souza, B. Travençolo
This paper presents a methodology for the classification of subcellular patterns through the extraction of cytomorphometric features from 3D isosurfaces. To validate the proposal, we used a database of 3D images of HeLa cells with nine classes. For each cell, several morphological attributes were extracted from its isosurface. Using the Quadratic Discriminant Analysis (QDA) classifier with a hybrid attribute selector, we achieved an accuracy of 97.59% and an F1-score of 0.9757 when classifying the subcellular patterns.
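A small sketch of the classification stage, assuming scikit-learn and synthetic morphological feature vectors; the univariate selector below is only a stand-in for the paper's hybrid attribute selector.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# hypothetical data: 200 cells x 20 morphological attributes, 9 classes
X = rng.normal(size=(200, 20))
y = rng.integers(0, 9, size=200)

# attribute selection followed by QDA
model = make_pipeline(SelectKBest(f_classif, k=8),
                      QuadraticDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=3)
print(f"mean accuracy: {scores.mean():.3f}")
```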
{"title":"Use of cytomorphometry for classification of subcellular patterns in 3D images","authors":"Eduardo Henrique Silva, Jefferson R. Souza, B. Travençolo","doi":"10.22456/2175-2745.80598","DOIUrl":"https://doi.org/10.22456/2175-2745.80598","url":null,"abstract":"This paper presents a methodology for the classification of subcellular patterns by the extraction of cytomorphometric features in 3D isosurfaces. In order to validate the proposal, we used a database of 3D images of HeLa cells with nine classes. For each cell, several morphological attributes were extracted based on its isosurface. Using the Quadratic Discriminant Analysis (QDA) classifier with the hybrid attribute selector, we achieved 97.59 of accuracy and F1-score of 0.9757 when classifying the subcellular patterns.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76899265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Análise Autônoma de Investimento: Uma Abordagem Multiagente Discreta
Pub Date: 2018-02-18 | DOI: 10.22456/2175-2745.74992
Paulo André Lima de Castro, Ronald Annoni Junior, Jaime Simão Sichman
Since the early days of computer science, researchers have asked where the line lies between tasks that machines can perform and tasks that only humans can carry out. Several tasks were once declared impossible for machines and were later conquered by new advances in Artificial Intelligence. Nowadays, it seems we are not far from the day when driving cars will be included among the tasks machines can do efficiently. Certainly, even more complex activities will be mastered by machines in the future. In this paper, we argue that investment analysis, the process of evaluating and selecting investments in terms of risk and return, may be among the tasks performed efficiently by machines in a perhaps not-so-distant future. In fact, there are significant research efforts to create algorithms and quantitative methods for analyzing investments, and we present a brief review of them. This review shows that there are many challenges and complexities to be faced in the pursuit of autonomous investment analysis (AAI). In this paper, we propose an approach that simplifies the autonomous investment analysis problem while handling the identified complexities (the nature of the assets, multiple analysis algorithms per asset, non-stationarity, and multiple investment horizons). The approach is based on the simultaneous use of several autonomous agents and on the discretization of the AAI problem, modeling it as a classification problem. It breaks the complexity faced by AAI into problems that can be tackled by a group of agents working together to provide intelligent, personalized investment advice for individuals. We present an implementation of this approach and results obtained by applying it to historical data from the Brazilian capital market. We believe such an approach can contribute to the development of AAI. Moreover, it allows the incorporation of well-known algorithms and techniques that can help solve parts of the problem.
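A very small sketch of the discretization idea, assuming synthetic returns and arbitrary decision thresholds; the paper's agents, horizons, and feature sets are richer than this single-asset example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# hypothetical daily returns of a single asset
returns = rng.normal(0.0005, 0.02, size=1000)

# discretize: features are the 5 previous returns, the label is the
# direction of the current return (0 = sell, 1 = hold, 2 = buy)
window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = np.where(returns[window:] > 0.005, 2,
    np.where(returns[window:] < -0.005, 0, 1))

# one "agent": a classifier specialized in this asset; an ensemble of such
# agents, one per asset/horizon, would be combined for the final advice
agent = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(agent.predict(X[-1:]))          # discrete advice for the latest window
```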
{"title":"Análise Autônoma de Investimento: Uma Abordagem Multiagente Discreta","authors":"Paulo André Lima de Castro, Ronald Annoni Junior, Jaime Simão Sichman","doi":"10.22456/2175-2745.74992","DOIUrl":"https://doi.org/10.22456/2175-2745.74992","url":null,"abstract":"Desde os primeiros dias da ciencia da computacao, os pesquisadores se perguntam onde esta a linha que separa as tarefas que maquinas podem fazer, daquelas que apenas seres humanos podem realizar. Varias tarefas foram apontadas como impossiveis para as maquinas e mais tarde conquistadas por novos avancos na Inteligencia Artificial. Hoje em dia, parece que nao estamos longe do dia em que a conducao de carros sera incluida nas tarefas que as maquinas podem fazer de maneira eficiente. Certamente, atividades ainda mais complexas serao dominadas por maquinas no futuro. Neste artigo, argumentamos que a analise de investimentos, o processo de avaliacao e selecao de investimentos em termos de risco e retorno podem estar entre as tarefas executadas de forma eficiente por maquinas em futuro talvez nao distante. Na verdade, ha esforcos de pesquisa significativos para criar algoritmos e metodos quantitativos para analisar investimentos. Apresentamos uma breve revisao sobre eles. Atraves desta revisao, podemos perceber que ha muitos desafios e complexidades a serem enfrentados na busca de analise autonoma de investimentos (AAI). Neste artigo, propomos uma abordagem para simplificar o problema de analise autonoma de investimentos capaz de tratar com as complexidades identificadas (natureza dos ativos, algoritmos de analise multipla por ativo, nao estacionaridade e multiplos horizonte de investimento). Esta abordagem baseia-se no uso simultâneo de diversos agentes autonomos e na discretizacao do problema AAI e sua modelagem como um problema de classificacao. Essa abordagem quebra a complexidade enfrentada pela AAI em problemas que podem ser abordados por um grupo de agentes que trabalham em conjunto para fornecer conselhos de investimento inteligentes e personalizados para individuos. Apresentamos uma implementacao dessa abordagem e resultados obtidos atraves de seu uso com dados historicos do mercado de capitais brasileiro. Acreditamos que tal abordagem pode contribuir para o desenvolvimento de AAI. Alem disso, esta abordagem permite a incorporacao de algoritmos e tecnicas ja conhecidas que podem ajudar a resolver parte do problema.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74282686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-objective Version of the Lin-Kernighan Heuristic for the Traveling Salesman Problem
Pub Date: 2018-02-18 | DOI: 10.22456/2175-2745.76452
Emerson B. de Carvalho, E. Goldbarg, M. Goldbarg
The Lin-Kernighan algorithm for the single-objective Traveling Salesman Problem (TSP) is one of the most efficient heuristics for the symmetric case. Although many algorithms for the TSP have been extended to the multi-objective version of the problem (MTSP), the Lin-Kernighan algorithm has not yet been fully explored in that setting. Works that applied the Lin-Kernighan algorithm to the MTSP were restricted to weighted-sum versions of the problem. We investigate the LK heuristic from a Pareto dominance perspective. The multi-objective LK was implemented within two local search schemes and applied to instances with 2 to 4 objectives. The results show that the proposed algorithmic variants obtained better results than a state-of-the-art algorithm.
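The core of a Pareto-based acceptance test is small. A minimal sketch of dominance checking and archive maintenance, illustrative and independent of the LK move machinery itself:

```python
def dominates(a, b):
    """True if cost vector 'a' Pareto-dominates 'b' (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated tours; entries are cost vectors of
    tours (e.g., tour lengths under each distance matrix)."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                        # candidate is dominated, discard
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)
    return archive

# usage with 2-objective tour costs
front = []
for costs in [(10, 7), (9, 9), (8, 8), (12, 5)]:
    front = update_archive(front, costs)
print(front)   # non-dominated set: [(10, 7), (8, 8), (12, 5)]
```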
{"title":"A Multi-objective Version of the Lin-Kernighan Heuristic for the Traveling Salesman Problem","authors":"Emerson B. de Carvalho, E. Goldbarg, M. Goldbarg","doi":"10.22456/2175-2745.76452","DOIUrl":"https://doi.org/10.22456/2175-2745.76452","url":null,"abstract":"The Lin and Kernighan’s algorithm for the single objective Traveling Salesman Problem (TSP) is one of the most efficient heuristics for the symmetric case. Although many algorithms for the TSP were extended to the multi-objective version of the problem (MTSP), the Lin and Kernighan’s algorithm was still not fully explored. Works that applied the Lin and Kernighan’s algorithm for the MTSP were driven to weighted sum versions of the problem. We investigate the LK from a Pareto dominance perspective. The multi-objective LK was implemented within two local search schemes and applied to 2 to 4-objective instances. The results showed that the proposed algorithmic variants obtained better results than a state-of-the-art algorithm.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73083214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Reality as a Support Tool for the Treatment of People with Intellectual and Multiple Disabilities: A Systematic Literature Review
Pub Date: 2018-02-18 | DOI: 10.22456/2175-2745.77994
Rian Dutra da Cunha, Frâncila Weidt Neiva, Rodrigo L. S. Silva
Since the emergence of virtual reality (VR) technologies, many researchers have argued for the benefits of their use for people with intellectual and multiple disabilities. However, to date no study presents a detailed overview of the state of the art in virtual reality as a support tool for the treatment of people with intellectual and multiple disabilities, including Autism and Down Syndrome. The aim of this study is to provide such an overview, focusing on people with multiple disabilities, which encompass intellectual and physical disabilities. There is still no consensus on the effectiveness of VR-based treatments. Virtual reality can offer rich environments and features, but most of the research focuses only on the experience of being inside a virtual place, without taking advantage of the benefits VR can provide. Furthermore, most of the selected studies used non-immersive VR and AR. Thus, immersive VR is an open field with many opportunities to be explored. We believe VR has great potential to be effective in the treatment of people with intellectual and multiple disabilities.
{"title":"Virtual Reality as a Support Tool for the Treatment of People with Intellectual and Multiple Disabilities: A Systematic Literature Review","authors":"Rian Dutra da Cunha, Frâncila Weidt Neiva, Rodrigo L. S. Silva","doi":"10.22456/2175-2745.77994","DOIUrl":"https://doi.org/10.22456/2175-2745.77994","url":null,"abstract":"Since the emergence of virtual reality (VR) technologies, many researchers have argued on the benefits of their use for people with intellectual and multiple disabilities. However, up to this date there is not a single study that presents a detailed overview of the state of the art in virtual reality as a support tool for the treatment of people with intellectual and multiple disabilities, as well as Autism and Down Syndrome. The aim of this study is to provide a detailed overview of the state of the art in the virtual reality area focusing on people with multiple disabilities, that encompasses intellectual and physical disabilities. There is still no consensus on the effectiveness of VR-based treatments. Virtual reality can offer rich environment and features, but most of the researches focuses only in the experience to be inside a virtual place without taking advantage of what benefits VR provide us. Furthermore, most of our selected studies used non-immersive VR and AR. Thus, immersive VR is an open field with many opportunities to be explored. We believe VR has great potential to be effective in the treatment of people with intellectual and multiple disabilities.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72604975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Avaliação do Intel Software Guard Extensions via Emulação
Pub Date: 2018-02-18 | DOI: 10.22456/2175-2745.77654
Marco Aurélio Spohn, Mateus Trebien
By allowing applications to run in a fully protected context (i.e., inside enclaves), the Software Guard Extensions (SGX) broaden the possibilities of the new generations of Intel x86-family processors. Since this is a recent technology, machines equipped with it are still a minority. To evaluate SGX, we used an emulator of the technology called OpenSGX, which implements and reproduces the main functionalities and structures used by SGX. The focus was on evaluating the processing overhead resulting from running an application in an environment with emulated SGX. For the evaluation, benchmark applications from the MiBench suite were employed, modified so that they could run inside enclaves on OpenSGX. As performance metrics, we collected the total number of instructions and the total number of CPU cycles for the complete execution of each application with and without OpenSGX.
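The overhead comparison reduces to simple arithmetic on the two counters collected per application. A sketch with hypothetical counts; the paper's actual measurements are not reproduced here.

```python
def overhead(baseline, with_enclave):
    """Relative overhead (%) of running inside the emulated enclave,
    given a counter value measured with and without OpenSGX."""
    return 100.0 * (with_enclave - baseline) / baseline

# hypothetical counts for one MiBench application
instructions = {"baseline": 120_000_000, "opensgx": 168_000_000}
cycles = {"baseline": 150_000_000, "opensgx": 231_000_000}

print(f"instruction overhead: {overhead(instructions['baseline'], instructions['opensgx']):.1f}%")
print(f"cycle overhead: {overhead(cycles['baseline'], cycles['opensgx']):.1f}%")
```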
{"title":"Avaliação do Intel Software Guard Extensions via Emulação","authors":"Marco Aurélio Spohn, Mateus Trebien","doi":"10.22456/2175-2745.77654","DOIUrl":"https://doi.org/10.22456/2175-2745.77654","url":null,"abstract":"Ao permitir a execucao de aplicacoes em um contexto totalmente protegido (i.e., dentro de enclaves), amplia-se as possibilidades para as novas geracoes de processadores Intel da familia x86 com a extensao Software Guard Extensions (SGX). Por se tratar de uma tecnologia recente, as maquinas que contam com essa tecnologia ainda sao minoria. Objetivando avaliar o SGX, utilizou-se um emulador dessa tecnologia denominado OpenSGX, o qual implementa e reproduz as principais funcionalidades e estruturas utilizadas no SGX. O enfoque consistiu em avaliar o overhead, em termos de processamento, resultante da execucao de uma aplicacao em um ambiente com o SGX emulado. Para a avaliacao, empregou-se aplicacoes de benchmark da plataforma MiBench, modificando-as para compatibilizar a execucao em enclaves no OpenSGX. Como metricas de desempenho, coletou-se o numero total de instrucoes e o numero total de ciclos de CPU para a execucao completa de cada aplicacao com e sem o OpenSGX.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76918199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnóstico de Glaucoma Utilizando Atributos de Textura e CNN's Pré-treinadas
Pub Date: 2018-02-18 | DOI: 10.22456/2175-2745.76387
M. Claro, Rodrigo M. S. Veras, A. M. Santana, L. Vogado, L. Sousa
Glaucoma is an eye disease that damages the optic nerve, causing vision loss. It is the second leading cause of blindness in the world. Several automatic glaucoma diagnosis systems have been proposed, but these techniques can still be improved, since current systems do not handle a large diversity of images. This work therefore aims at the automatic detection of glaucoma in retinal images through the use of texture descriptors and Convolutional Neural Networks (CNNs). The results show that combining GLCM descriptors with CNN features and using a Random Forest classifier is promising for detecting this pathology, achieving an accuracy of 91.06% on 873 images from 4 public databases.
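A sketch of the texture-descriptor half of the pipeline, assuming scikit-image (>= 0.19 naming) and scikit-learn, with synthetic grayscale crops standing in for fundus images; the pretrained-CNN features that the paper concatenates with the GLCM descriptors are omitted here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(image_u8):
    """GLCM texture descriptors (contrast, homogeneity, energy, correlation)
    averaged over four directions, for an 8-bit grayscale image."""
    glcm = graycomatrix(image_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(3)
# hypothetical dataset: 40 grayscale crops, label 1 = glaucoma
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.stack([glcm_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```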
{"title":"Diagnóstico de Glaucoma Utilizando Atributos de Textura e CNN's Pré-treinadas","authors":"M. Claro, Rodrigo M. S. Veras, A. M. Santana, L. Vogado, L. Sousa","doi":"10.22456/2175-2745.76387","DOIUrl":"https://doi.org/10.22456/2175-2745.76387","url":null,"abstract":"Glaucoma e uma doenca ocular que danifica o nervo optico causando a perda da visao. Ela e a segunda principal causa de cegueira no mundo. Varios sistemas de diagnostico automatico de glaucoma tem sido propostos, contudo e possivel realizar melhorias nestas tecnicas, visto que, os sistemas atuais nao lidam com uma grande diversidade de imagens. Assim, este trabalho visa realizar a deteccao automatica do glaucoma nas imagens da retina, atraves do uso de descritores de textura e Redes Neurais Convolucionais (CNNs). Os resultados mostraram que a juncao dos descritores GLCM e CNNs e a utilizacao do classificador Random Forest sao promissores na deteccao dessa patologia, obtendo uma acuracia de 91,06% em 873 imagens de 4 bases de dados publicas.","PeriodicalId":82472,"journal":{"name":"Research initiative, treatment action : RITA","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76928939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}