D. Reidel, P. R. Barros, S. Rigo, M. Bez. "Development of a Recommender System to the Virtual Patient Simulator Health Simulator." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3131577.

This paper presents a recommender system for clinical cases and materials that may help students in the learning process. The experiment involves the development of collaborative and content-based filters, as well as three hybrid methods. The system was evaluated using accuracy metrics in prediction and classification tasks, and the results show promising values for the collaborative filters.
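The collaborative filters the abstract reports on can be illustrated with a minimal user-based scheme: predict a student's interest in an unseen clinical case as a similarity-weighted average of other students' ratings. The data and names below are invented for illustration; the paper does not disclose its exact algorithm.

```python
# Minimal user-based collaborative filtering sketch (illustrative only;
# not the authors' implementation). Ratings map student -> {case -> score}.
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the items two users rated in common."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of the other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

ratings = {
    "ana":   {"case1": 5, "case2": 3},
    "bruno": {"case1": 4, "case2": 3, "case3": 4},
    "clara": {"case1": 5, "case2": 4, "case3": 5},
}
print(round(predict(ratings, "ana", "case3"), 2))  # 4.5
```

Because "ana" rates cases much like "clara", the prediction for the unseen case3 lands near clara's score.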
Jonathan Brilhante, Rostand E. O. Costa, T. Araújo. "Asynchronous Queue Based Approach for Building Reactive Microservices." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3126873.

To achieve scalability and flexibility in larger applications, a new approach has emerged, known as Microservices (MS). However, MS architectures are still in their infancy and remain more a concept than a fully mature design pattern. One of the hardest problems in this approach is how to properly migrate or develop a single microservice in terms of scope, efficiency, and dependability. In this sense, this work proposes a new architectural model that applies the high-level reactive programming pattern to the internal structure of a new microservice. The proposed microservices are internally coordinated by asynchronous queues, which preserves compatibility with most monolithic components and provides an encapsulation process that enables their continuity. A comparative study between the standard approach and the proposed architecture was carried out to measure the performance improvement of the new strategy.
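The internal coordination by asynchronous queues described above can be sketched as two stages of one service decoupled by a queue: a receiver ingests requests while a worker wraps the legacy (monolithic) logic. This is a generic illustration using Python's asyncio, not the authors' implementation.

```python
# Hypothetical reactive-microservice skeleton: stages communicate only
# through an asynchronous queue, so neither stage blocks the other.
import asyncio

async def receiver(requests, queue):
    # Ingest external requests and hand them to the worker stage.
    for req in requests:
        await queue.put(req)
    await queue.put(None)  # sentinel: no more work

async def worker(queue, results):
    # Consume asynchronously; a wrapped monolithic component placed
    # here never blocks the receiver thanks to the queue.
    while True:
        req = await queue.get()
        if req is None:
            break
        results.append(req.upper())  # stand-in for the legacy logic

async def main(requests):
    queue: asyncio.Queue = asyncio.Queue(maxsize=8)
    results: list[str] = []
    await asyncio.gather(receiver(requests, queue), worker(queue, results))
    return results

print(asyncio.run(main(["ping", "order"])))  # ['PING', 'ORDER']
```

The bounded queue (`maxsize=8`) also gives backpressure for free: a slow worker throttles the receiver instead of letting requests pile up in memory.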
Kamila R. H. Rodrigues, C. C. Viel, Isabela Zaine, Bruna C. R. Cunha, L. Scalco, M. G. Pimentel. "Data Collection and Intervention Personalized as Interactive Multimedia Documents." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3131574.

Mobile computing can facilitate data collection because users carry their smartphones almost everywhere, all the time, and because smartphones can collect a wide range of data: textual, audiovisual, and data captured automatically by sensors. Considering this opportunity, we developed ESPIM (Experience Sampling and Programmed Intervention Method), a computer-aided method for programming multimedia data collection forms and carrying out remote interventions. Using ESPIM, professionals in areas such as healthcare and education can plan data collection and define intervention programs using methods and procedures from their own fields. The programs containing the queries and tasks are retrieved by a mobile application installed on the devices of the users who participate in the data collection, and the application runs the programs as planned by the specialists. Both queries and responses can contain text, audio, and video data. In this paper we discuss the technological infrastructure of the ESPIM system and the preliminary results obtained through tests and evaluations carried out with stakeholders from the target population. These results allowed us to improve the system.
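A program of queries and tasks of the kind described could be represented as a simple document consumed by the mobile client. The schema below is hypothetical (field names are invented, not the actual ESPIM format); it is meant only to make the idea concrete.

```python
# Hypothetical ESPIM-style program: an ordered list of multimedia
# queries that the mobile client walks through, collecting answers.
program = {
    "title": "Daily speech exercise",
    "tasks": [
        {"type": "text",  "prompt": "How do you feel today?"},
        {"type": "audio", "prompt": "Record the word list"},
        {"type": "video", "prompt": "Film the exercise"},
    ],
}

def run_program(program, answer_fn):
    """Ask each query in order; answers may be text, audio, or video."""
    responses = []
    for task in program["tasks"]:
        responses.append({"type": task["type"],
                          "prompt": task["prompt"],
                          "answer": answer_fn(task)})
    return responses

# A stub "user" that answers every task the same way:
session = run_program(program, lambda task: f"<{task['type']} reply>")
print(len(session))  # 3
```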
D. V. D. S. Silva, R. S. D. Silva, F. Durão. "RecStore: Recommending Stores for Shopping Mall Customers." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3126888.

Today, mobility is a key feature of the new generation of the Internet, which provides a set of custom services through numerous terminals. Smartphones, for example, are nearly mandatory for anyone living in a modern urban context. Most developed cities have at least one shopping mall full of mobile device users, and since these malls host a large number of stores, people tend to have difficulty finding what they really need. This paper proposes RecStore, a recommendation model to assist customers in finding what they consider relevant at malls. The evaluation of the model considered user activities, 330 stores, 30 users, and 3 baseline models; precision, recall, and F-measure improved by 118%, 70%, and 88%, respectively, compared to the second-best model for each metric. Additionally, a mobile application called InMap was implemented based on RecStore.
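The precision, recall, and F-measure figures reported above follow the standard definitions over recommended versus relevant items. A minimal sketch with made-up store sets (not the paper's data):

```python
# Standard top-N recommendation metrics over sets of store IDs.
def precision_recall_f1(recommended, relevant):
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 4 stores recommended, 3 actually relevant, 2 of them hit:
p, r, f = precision_recall_f1(["s1", "s2", "s3", "s4"], ["s2", "s4", "s7"])
print(p, r, round(f, 3))  # 0.5 0.6666666666666666 0.571
```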
Mateus Melo, J. Goebel, Daniel Farias, Cristiano Santos, Tatiana Tavares, G. Corrêa, B. Zatt, M. Porto. "Objective and Subjective Video Quality Assessment in Mobile Devices for Low-Complexity H.264/AVC Codecs." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3131596.

This paper discusses the results of a quality evaluation experiment involving videos encoded with different H.264/AVC configurations and played on mobile devices. The experiments also analyze the impact of disabling Fractional Motion Estimation (FME) and the Deblocking Filter (DBF) during encoding. The objective assessment uses the Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics, while the subjective evaluation was conducted on two different mobile devices using the single-stimulus Mean Opinion Score (MOS). The results show different levels of quality degradation for both modifications and lead to the conclusion that devices with larger screens present a more accentuated drop in subjective quality than devices with small screens.
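Of the two objective metrics, PSNR has a compact closed form (SSIM is considerably more involved). A sketch for 8-bit samples, with illustrative pixel values:

```python
# PSNR between a reference and a distorted frame, both given as flat
# lists of 8-bit samples (illustrative values, not the study's videos).
import math

def psnr(reference, distorted, max_value=255):
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 55, 61, 59]
bad = [50, 55, 63, 59]   # small coding distortion
print(round(psnr(ref, bad), 2))  # 45.12
```

Typical broadcast-quality H.264/AVC video lands in the 30-50 dB range, so a drop of a few dB after disabling FME or the DBF is a meaningful objective degradation.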
C. V. Araujo, Rayol Mendonca-Neto, F. Nakamura, E. Nakamura. "Predicting Music Success Based on Users' Comments on Online Social Networks." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3126885.

In this paper, we aim to determine whether we can predict the success of a music album based on the comments posted on social networks during the 30 days before the album's release. For that purpose, we gathered user comments from the Twitter network. As success measures, we considered Spotify Popularity and Billboard Units: Spotify represents the most popular form of music consumption today (audio streaming), while the Billboard ranking still favors the old-school market (physical albums). We found that the number of positive tweets in the 30 days before the album release explains 95.5% of the variation in Spotify Popularity with a simple linear model. On the other hand, we found no statistical evidence that the volume of comments on Twitter correlates with album success as measured by Billboard.
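A "simple linear model" of this kind can be fit by ordinary least squares, with R² measuring the explained variation (the paper's 95.5% corresponds to R² = 0.955). The sketch below uses invented tweet/popularity pairs, not the paper's data:

```python
# Ordinary least squares for one predictor, plus the R^2 statistic.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

tweets = [120, 300, 450, 800]   # positive tweets in the 30-day window (made up)
popularity = [30, 50, 55, 90]   # hypothetical Spotify Popularity scores
m, b = fit_line(tweets, popularity)
print(round(r_squared(tweets, popularity, m, b), 3))
```

An R² close to 1 on real data, as the authors report for Spotify Popularity, would mean the tweet count alone accounts for nearly all of the variation in the outcome.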
Hugo Schroter Lazzari, R. I. T. D. C. Filho, V. Roesler. "QoE Analyser: A Framework to QoE Knowledge Base Generation." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3131598.

This paper presents a framework for building a knowledge base on Quality of Experience (QoE). The QoE Analyzer framework enables the simulation of degradations in video playout and the application of a survey to evaluate the impact of those degradations on the user's QoE. To demonstrate the framework's versatility, an instantiation was applied to a group of 62 users; implemented in JavaScript, it made it possible to show the impact of degradation patterns on the user experience. The framework is released under the GNU GPLv3 license and is available on GitHub (https://github.com/hugoschroterl/qoe-analyser).
Marco A. Freesz, L. Yung, M. Moreno. "STorM: A Hypermedia Authoring Model for Interactive Digital Out-of-Home Media." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3126889.

Among the several vehicles of social communication, digital signage displays play a remarkable role in both public and private spaces. Such Digital Out-of-Home (DOOH) media allows the rapid dissemination of collective information to a large number of people. There is, however, a large gap between the graphical abstractions offered by DOOH authoring tools and the underlying language used to represent hyper/multimedia content: document representations become complex, sometimes rely on scripting languages, and are therefore illegible to authors and difficult for automated information extraction. In this context, this paper proposes STorM, a hypermedia model, and its language, STorML, which define higher-level entities related to concepts from the audiovisual industry, such as scenes, tracks, and media.
Giovani Melo Marzano, Pedro Henrique Batista Ruas da Silveira, G. B. Fonseca, Pasteur Ottoni M., S. Guimarães. "Using Graph Homomorphisms for Vertex Classification Analysis in Social Networks." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3126895.

A social network consists of a finite set of social entities and the relationships between them, and these entities are represented as the vertices of a graph that models the network. Usually, the entities (vertices) can be classified according to their features, such as their interactions (comments, posts, likes, etc.). However, working directly with these graphs and understanding the relationships between the several pre-defined classes is not easy, due, for instance, to the graph's size. In this work, we propose metrics for evaluating how good a graph transformation based on graph homomorphism is, by measuring how much of the original graph's relationships are preserved after the transformation. The proposed metrics compute edge regularity indices, indicate the proportion of the original graph's vertices that participate in the relations, and measure how close the transformation is to a regular homomorphism. To assess the regularity indices, we present experiments on synthetic and real social network data.
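The underlying idea can be illustrated by collapsing vertices onto their classes (a homomorphism onto a small "class graph") and computing, for each pair of linked classes, the fraction of cross-class vertex pairs realized as actual edges. These are not the authors' exact indices, only a simplified stand-in:

```python
# Toy measure of how fully a class-level relation is realized in the
# original graph (1.0 = every cross-class vertex pair is connected).
def class_edge_density(edges, vertex_class):
    classes = {}
    for v, c in vertex_class.items():
        classes.setdefault(c, []).append(v)
    edge_set = {frozenset(e) for e in edges}
    density = {}
    names = sorted(classes)
    for i, ca in enumerate(names):
        for cb in names[i + 1:]:
            pairs = [(a, b) for a in classes[ca] for b in classes[cb]]
            linked = sum(frozenset(p) in edge_set for p in pairs)
            if linked:
                density[(ca, cb)] = linked / len(pairs)
    return density

edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1")]
vertex_class = {"u1": "student", "u2": "student",
                "v1": "teacher", "v2": "teacher"}
print(class_edge_density(edges, vertex_class))  # {('student', 'teacher'): 0.75}
```

A low density flags a class-level edge that exists in the collapsed graph but is realized by only a few vertex pairs in the original, i.e. a transformation far from regular.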
José C. Gutiérrez, Rodolfo Valiente, M. T. Sadaike, Daniel F. Soriano, G. Bressan, W. Ruggiero. "Mechanism for Structuring the Data from a Generic Identity Document Image using Semantic Analysis." In Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, Oct. 2017. DOI: https://doi.org/10.1145/3126858.3131594.

Nowadays, the enormous variety of existing identity documents makes it difficult to standardize a system capable of extracting all the information of interest they present; systems that use templates to classify information based on position are limited by the number of templates they can recognize. This paper therefore presents a novel mechanism to automatically classify the main information of interest in generic identity documents. The proposal is designed to be easily adaptable to any system capable of detecting and extracting text from an identity document image. To assign meaning to the extracted text, it relies on a novel mechanism for structuring the data using semantic analysis. The mechanism consists of two main steps: first, all textual data are grouped into sentences or near-sentences based on the Euclidean distance between words; second, the sentences are analyzed for keywords that allow the information to be structured according to its semantics and presented as abstractions. The proposal stores the data as abstractions of its meaning, which improves the scalability of the system and allows the information to be better used by different services, by the end user, or by an automated decision-making process.
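The first step, chaining words into sentences by Euclidean distance, can be sketched over OCR tokens carrying page coordinates. The threshold and token layout below are invented for illustration:

```python
# Group OCR tokens (word, (x, y)) given in reading order into
# "sentences": a new group starts whenever the gap to the previous
# word exceeds a distance threshold (illustrative value).
from math import dist

def group_words(tokens, max_gap=30.0):
    sentences, current = [], []
    prev_pos = None
    for word, pos in tokens:
        if prev_pos is not None and dist(prev_pos, pos) > max_gap:
            sentences.append(current)
            current = []
        current.append(word)
        prev_pos = pos
    if current:
        sentences.append(current)
    return sentences

tokens = [("NAME:", (10, 10)), ("JOHN", (35, 10)),
          ("DOC", (200, 10)), ("12345", (228, 10))]
print(group_words(tokens))  # [['NAME:', 'JOHN'], ['DOC', '12345']]
```

The second step would then scan each group for keywords ("NAME", "DOC", ...) to attach semantics to the neighbouring values.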