Title: Once Learning for Looking and Identifying Based on YOLO-v5 Object Detection
Authors: Lucas S. Althoff, Mylène C. Q. Farias, L. Weigang
DOI: https://doi.org/10.1145/3539637.3557929
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Object detection is an essential capability of computer vision solutions. In recent years it has gained attention as a core component of "once learning" and "few-shot learning" mechanisms. This research analyzes the ability of the machine learning framework "You Only Look Once" (YOLO) to perform the object localization task in a "heuristic once learning" context. It also studies the advantages and practical limitations of YOLO by experimenting with two implementations: 1) the simplest one (a.k.a. tiny YOLO), and 2) the first version of YOLO. The case studies cover various visual data types and object contexts, such as object deformation caused by fast-forward frames, spatial distortion caused by isometric projection, and gaming images with abnormal objects. Finally, we build a dataset for a new task we call "heuristic once learning". Results using YOLO-v5 under these conditions show that YOLO has difficulty generalizing simple abstractions of the characters, pointing to the need for new approaches to such challenges.
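Object localization results of the kind studied here are conventionally scored by intersection-over-union (IoU) between a predicted and a ground-truth bounding box. The following is our own minimal sketch of that metric, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when IoU with the ground truth exceeds a threshold such as 0.5.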
Title: An Interface for Visualizing Applied Interventions Data through Mobile Devices
Authors: L. Scalco, Kamila R. H. Rodrigues, M. G. Pimentel
DOI: https://doi.org/10.1145/3539637.3558231
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Professionals and researchers in diverse areas need to follow users (e.g., patients or students) and rely on approaches that allow them to collect daily data from those users. These specialists monitor users and collect data through the planning and implementation of intervention programs. The objective of this work is to understand how such specialists visualize and analyze data, and to offer an alternative form of visualization, based on a combination of different techniques, that lets them use the structure of the intervention programs to follow the programs' application. A study was carried out with healthcare professionals; through the analysis of a visualization prototype and graph structures, it was possible to understand how these specialists interpret their data. We also identified requirements for our visualization interface.
Title: Full Reference Stereoscopic Objective Quality Assessment using Lightweight Machine Learning
Authors: Narúsci S. Bastos, Lucas Seidy Ribeiro Dos Santos Ikenoue, D. Palomino, G. Corrêa, Tatiana Tavares, B. Zatt
DOI: https://doi.org/10.1145/3539637.3557936
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Decades of research on Image Quality Assessment (IQA) have produced a variety of objective quality metrics that correlate strongly with subjective image quality. However, challenges remain in the quality assessment of 3D/stereo images. Many objective quality metrics for 3D images were designed by extending well-known 2D metrics; as a result, these solutions tend to show weaknesses under 3D-specific artifacts. Recent works demonstrate the effectiveness of machine learning techniques in the design of 3D quality metrics. Although effective, some machine learning solutions demand high computational effort, which restricts their adoption in low-latency, lightweight systems and applications. This paper presents a study on full-reference stereoscopic objective quality assessment using lightweight machine learning. We evaluated four decision tree-based algorithms on eight distinct sets of image features. The classifiers were trained on data from the Waterloo IVC 3D Image Quality Database to predict the subjective quality score measured as a Mean Opinion Score (MOS). The results show that Random Forest generally obtains the best accuracy. Our study demonstrates the feasibility of decision tree-based solutions as an accurate and lightweight approach to 3D image quality assessment.
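The decision tree-based algorithms compared in this work are built out of single-feature splits. A minimal regression-stump sketch on synthetic numbers, purely our illustration (the paper uses full tree ensembles on real Waterloo IVC features):

```python
def fit_stump(xs, ys):
    """Fit a one-split regression stump on a scalar feature.

    xs: feature values, ys: quality scores (e.g. MOS).
    Returns (threshold, left_mean, right_mean) minimising squared error.
    """
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave samples on both sides
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return t, lm, rm

def predict_stump(model, x):
    t, lm, rm = model
    return lm if x <= t else rm
```

A decision tree recursively applies such splits, and a Random Forest averages many trees trained on bootstrapped samples, which is what makes the approach both accurate and computationally light at inference time.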
Title: Interactive POI Recommendation: applying a Multi-Armed Bandit framework to characterise and create new models for this scenario
Authors: Thiago Silva, N. Silva, Carlos Mito, A. Pereira, Leonardo Rocha
DOI: https://doi.org/10.1145/3539637.3557060
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Instead of the traditional batch paradigm, in which the system trains and applies a model at scheduled times, modern Recommender Systems (RSs) have become interactive. The RS should continually recommend the most relevant items, receive user feedback, and constantly update itself as a sequential decision model. The literature has thus modeled the recommender as a Multi-Armed Bandit (MAB) problem that selects new arms (items) at each iteration. However, despite recent advances, MAB models have not yet been studied in some classical scenarios, such as points-of-interest (POI) recommendation. This work aims to fill that scientific gap by adapting classical MAB algorithms to this context, using an interactive recommendation framework called iRec. iRec provides three modules to prepare the dataset, create new recommendation agents, and simulate the interactive scenario. The framework contains several state-of-the-art MAB algorithms, a hyperparameter tuning module, different evaluation metrics, different visual metaphors to present the results, and statistical validation. By instantiating and adapting iRec to our context, we can assess the quality of different interactive recommenders in the POI recommendation scenario.
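A classical MAB algorithm of the kind adapted in such frameworks is epsilon-greedy: with probability epsilon pick a random arm (explore), otherwise pick the arm with the best empirical mean reward (exploit). The sketch below is our own illustration, not iRec code; `POI_RATES` and `toy_reward` are invented stand-ins for real POI feedback:

```python
import random

def epsilon_greedy(n_arms, reward_fn, rounds, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: pull each arm once, then explore a random arm
    with probability epsilon and otherwise exploit the best empirical mean."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(rounds):
        if t < n_arms:                      # initial pull of every arm
            arm = t
        elif rng.random() < epsilon:        # explore
            arm = rng.randrange(n_arms)
        else:                               # exploit the best mean so far
            arm = max(range(n_arms), key=lambda a: sums[a] / counts[a])
        r = reward_fn(arm, rng)
        counts[arm] += 1
        sums[arm] += r
    return counts, sums

# Hypothetical expected "visit" rates of three POIs (arms).
POI_RATES = [0.2, 0.5, 0.8]

def toy_reward(arm, rng):
    # Deterministic toy feedback: return the expected reward directly.
    return POI_RATES[arm]
```

Run over many rounds, the agent concentrates its pulls on the best POI while still occasionally exploring the others.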
Title: How Politicians Communicate in Social Media: A Cross-Platform Study
Authors: L. S. Oliveira, Wesley Costa, Pedro O. S. Vaz de Melo, Fabrício Benevenuto
DOI: https://doi.org/10.1145/3539637.3558232
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: With the advent of social media, politicians have access to a new way of communicating with their constituents. It allows politicians without television time, considered essential for a campaign, to expose their ideas and reach a large part of the electorate. As this is a relatively recent phenomenon, researchers from different areas of knowledge have found fertile ground for their research. However, most studies analyze politicians' communication on only a single social media platform. In this work, we performed a cross-platform analysis of the communication of Brazilian politicians on Facebook, Instagram, and Twitter. We quantified these politicians' posts, how much they replicate content, and the level of engagement on each of the three social networks. In addition, we grouped and characterized the profiles and communication strategies used by these politicians. We observed that most politicians post on all three social networks, that there is diversity in the strategic choice of a main platform, and that many politicians simply replicate the same content across the three networks. However, some post many messages and adapt the content to each network, which suggests that their social media are managed by communication professionals. We also found several communication profiles, from comical to controversial, with the eclectic profile prevailing, which diversifies the style of the posts.
Title: Collaboration as a Driving Factor for Hit Song Classification
Authors: Mariana O. Silva, Gabriel P. Oliveira, Danilo B. Seufitelli, A. Lacerda, M. Moro
DOI: https://doi.org/10.1145/3539637.3556993
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: The Web has transformed many services and products, including the way we consume music. In the current streaming-oriented era, predicting hit songs is a major open issue for the music industry, and there are many efforts to find the driving factors that shape a song's success. One additional feature that may improve such efforts is artistic collaboration, as it allows songs to reach a wider audience. We therefore propose a multi-perspective approach that includes collaboration between artists as a factor in hit song prediction. Specifically, by combining online data from Billboard and Spotify, we model the problem as a binary classification task using different model variants. Our results show that relying only on music-related features is not enough, whereas models that also consider collaboration features produce better results.
Title: Homogeneous and Automated Migration of Virtual Machines Between Multiple Public Clouds
Authors: Marc Xavier, I. S. Sette, C. Ferraz
DOI: https://doi.org/10.1145/3539637.3558043
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: The objective of this work is to analyze the steps required for automated migration of Virtual Machines (VMs) using a proposed solution, called Kumo, in scenarios involving public clouds such as Amazon Web Services (AWS), Microsoft Azure (AZ), and Google Cloud Platform (GCP). A performance evaluation is carried out using the Total Migration Time (TTM) metric between homogeneous and heterogeneous clouds. Among the homogeneous scenarios, in which the source and destination clouds belong to the same provider but different data centers, the best result occurred in migrations between Azure clouds, with an average TTM of 45m59s. Among the heterogeneous scenarios, the best was the GCP-to-AWS migration, with a TTM of 45m56s. The nine steps of the automated VM migration were analyzed, showing that five of them, combined, accounted for between 94.01% and 99.44% of the TTM across the nine scenarios tested.
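To compare TTM values such as 45m59s and 45m56s numerically, they first need to be converted to seconds. A small helper for that, assuming the `XhYmZs` notation used in the abstract (this is our utility, not part of Kumo):

```python
import re

def parse_duration(s):
    """Parse durations like '45m59s' or '1h02m03s' into whole seconds."""
    m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", s)
    if not m or not any(m.groups()):
        raise ValueError(f"bad duration: {s!r}")
    h, mi, sec = (int(g) if g else 0 for g in m.groups())
    return h * 3600 + mi * 60 + sec
```

Applied to the reported results, the best homogeneous (2759 s) and best heterogeneous (2756 s) scenarios differ by only three seconds.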
Title: Improving Multilabel Text Classification with Stacking and Recurrent Neural Networks
Authors: R. M. Nunes, M. A. Domingues, V. D. Feltrim
DOI: https://doi.org/10.1145/3539637.3557000
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Multilabel text classification can be defined as a mapping function that assigns a natural-language text to one or more labels defined by the scope of a problem. In this work we propose an architecture of stacked classifiers for multilabel text classification. The proposed models use an LSTM recurrent neural network in the first stage of the stack and different multilabel classifiers in the second stage. We evaluated our proposal on two well-known datasets (TMDB and EUR-LEX Subject Matters), and the results showed that the proposed stack consistently outperforms the baselines.
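The stacking idea, first stage turns raw text into features, second stage scores each label independently, can be illustrated without the neural part. In the sketch below the LSTM is replaced by toy keyword-indicator features, and the vocabulary, labels, and weights are all invented for illustration:

```python
def first_stage(text, vocab):
    """Toy stand-in for the LSTM first stage: binary keyword indicators.
    (The paper's first stage produces LSTM representations instead.)"""
    tokens = set(text.lower().split())
    return [1.0 if w in tokens else 0.0 for w in vocab]

def second_stage(features, label_weights, threshold=0.5):
    """One independent linear scorer per label on top of stage-one features;
    every label whose score clears the threshold is emitted (multilabel)."""
    labels = []
    for label, weights in label_weights.items():
        score = sum(f * w for f, w in zip(features, weights))
        if score >= threshold:
            labels.append(label)
    return labels

# Hypothetical vocabulary and hand-set label weights.
VOCAB = ["alien", "spaceship", "love", "wedding"]
WEIGHTS = {
    "sci-fi":  [0.6, 0.6, 0.0, 0.0],
    "romance": [0.0, 0.0, 0.6, 0.6],
}
```

Because each label is scored independently, a single text can legitimately receive zero, one, or several labels, which is exactly what distinguishes the multilabel setting from ordinary classification.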
Title: Characterizing Brazilian Political Ads on Facebook
Authors: Cora Silberschneider, Samuel S. Guimarães, Fabrício Benevenuto, Márcio Silva
DOI: https://doi.org/10.1145/3539637.3557935
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: Most politicians, public figures, and political candidates use online advertising platforms to spread their political values and messages. Since 2018, Facebook has made an Ad Library available, providing advertising transparency intended to prevent interference in elections and other political issues. However, it is not explicit how ads are selected for this database, nor to what extent artificial intelligence is applied to that selection. In this work, we categorize the ad data for Brazil to understand the dynamics of political advertisements and what types of ads are present in this ad library. We analyze impressions, the money spent, and who the advertisers are for ads from 2018 to 2021. Among our findings, we show that during the election months of 2018 and 2020 the volume of ads corresponds to approximately 30% of the ads in the dataset, and that the moving average of money spent per ad increased by about 200% after the first round of the Brazilian elections.
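The moving-average measure used for per-ad spending can be computed with a simple trailing window. A generic sketch (the window length here is arbitrary, not the one used in the paper):

```python
def moving_average(values, window):
    """Trailing moving average over a fixed window, one output per
    full window position."""
    if window < 1 or window > len(values):
        raise ValueError("window must be in [1, len(values)]")
    out = []
    total = sum(values[:window])
    out.append(total / window)
    for i in range(window, len(values)):
        # Slide the window: add the new value, drop the oldest one.
        total += values[i] - values[i - window]
        out.append(total / window)
    return out
```

Smoothing daily spend this way is what makes a jump such as the roughly 200% post-first-round increase visible above the day-to-day noise.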
Title: An Approach for Sensory Effects Dispersion Simulation with Computational Fluid Dynamics
Authors: Renato O. Rodrigues, José Ricardo da Silva, Diego N. Brandão, J. Santos
DOI: https://doi.org/10.1145/3539637.3556999
Published: 2022-11-07, Proceedings of the Brazilian Symposium on Multimedia and the Web
Abstract: The inclusion of sensory effects in multimedia applications has the potential to increase the Quality of Experience (QoE) and improve users' immersion. However, authoring such applications presents challenges arising from the need to control the rendering of sensory effects in the physical environment alongside the presentation of their counterparts in the multimedia application, with the state of the sensory effects constantly changing according to user interaction. Computational Fluid Dynamics (CFD) techniques can be used to simulate the sensory effects in a virtual environment and to use the generated data to control actuators automatically. In this work, we propose an architecture to simulate wind sensory effects in an interactive real-time application and validate it using a CFD method. Data from the simulation is then used to infer the propagation delay and the wind temperature at the user's position.
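As a crude stand-in for a full CFD solver, the two quantities the paper infers, propagation delay and temperature at the user's position, can be illustrated with a first-order estimate and a one-step 1D advection shift. This is our simplification for intuition only, not the paper's method:

```python
def propagation_delay(distance_m, wind_speed_ms):
    """First-order estimate: time for the wind front to reach the user."""
    if wind_speed_ms <= 0:
        raise ValueError("wind speed must be positive")
    return distance_m / wind_speed_ms

def advect_temperature(temps, shift_cells, inlet_temp):
    """One explicit step of 1D upwind advection: the wind shifts the
    temperature field by `shift_cells` toward the user, filling the
    inlet side with the fan's output temperature."""
    if shift_cells == 0:
        return list(temps)
    return [inlet_temp] * shift_cells + list(temps[:-shift_cells])
```

A CFD method refines both numbers by accounting for obstacles, turbulence, and diffusion, but the structure is the same: march the field forward in time and read off the value at the user's cell.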