A. Nijholt, "Humans as Avatars in Smart and Playable Cities," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.23.
We compare the behavior of avatars in videogames with the expected behavior of humans in smart environments, particularly smart urban environments. From this comparison we conclude that many aspects of controlling an avatar in a game environment will also be seen in controlling human behavior in smart urban environments. We predict a convergence of videogame environments with smart urban environments, where the Artificial Intelligence (AI) of the game environment can be compared with the AI that is responsible for the functioning of the smart city. Game characteristics such as immediate rewards for good behavior can also be foreseen.
Steven Davies, Stuart Cunningham, R. Picking, "A Comparison of Audio Models for Virtual Reality Video," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.41.
This paper investigates audio models for Virtual Reality (VR) video with respect to the senses of immersion and realism that each model delivers. Mono, Stereo, 5.1 Surround Sound, and Virtual Spatialised Position configurations were developed for a VR music video and evaluated in a user study. Participants experienced the VR video four times, once with each audio model as accompaniment. Qualitative and quantitative data were recorded to evaluate user experience. The results indicate no statistically significant difference between the four models in relation to immersion or realism, suggesting that complex audio renderings are not always necessary for an effective user experience.
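As an illustration of the kind of rendering choices compared above, the sketch below implements constant-power stereo panning, one simple way to place a mono source in the stereo field. This is a hypothetical minimal example, not a configuration from the study; a genuinely spatialised model (such as the Virtual Spatialised Position condition) would also involve interaural delays and HRTF filtering.

```python
import numpy as np

def pan_stereo(mono, azimuth_deg):
    """Constant-power panning of a mono signal.

    azimuth_deg in [-90, 90]: -90 = hard left, 0 = centre, 90 = hard right.
    cos^2 + sin^2 = 1, so total power is preserved at every pan position.
    """
    theta = np.deg2rad((azimuth_deg + 90) / 2)  # map [-90, 90] -> [0, 90] degrees
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=1)

# One second of a 440 Hz tone at an assumed 8 kHz sample rate, panned right of centre.
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
stereo = pan_stereo(tone, azimuth_deg=45)
```

Panning at +45 degrees yields a louder right channel while the summed channel power matches the mono source exactly.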
Benjamin Williams, C. Headleand, "A Time-Line Approach for the Generation of Simulated Settlements," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.32.
In this paper we present a model for procedurally generating virtual settlements populated with roads, land parcels and buildings. Our model improves on existing research by considering historical influence on settlement growth. To do this, an interactive time-line is used, allowing a designer to specify a number of architectural periods. These periods are then used in the generation process, giving the designer a robust tool for interactively generating photo-realistic urban scenes. Our results show that a variety of settlement types and sizes can be generated. In addition, we demonstrate that road patterns found within real-world settlements can be recreated using our system.
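The time-line idea can be sketched as a designer-specified list of architectural periods that a generator consults when tagging buildings. The growth model below (older buildings near the settlement centre, a fixed outward growth rate, the `Period` class and all parameter values) is an illustrative assumption, not the authors' algorithm.

```python
from dataclasses import dataclass

@dataclass
class Period:
    name: str
    start_year: int
    end_year: int

def assign_periods(buildings, timeline, founding_year, growth_rate=5.0):
    """Tag each building with an architectural period, assuming the
    settlement grew outward: central parcels are oldest, outskirts newest.
    `growth_rate` is years of growth per unit of distance (an assumption)."""
    tagged = []
    for (x, y) in buildings:
        dist = (x * x + y * y) ** 0.5
        year = founding_year + int(dist * growth_rate)
        period = next((p for p in timeline
                       if p.start_year <= year <= p.end_year), timeline[-1])
        tagged.append(((x, y), period.name))
    return tagged

# A designer-specified time-line of three hypothetical periods.
timeline = [Period("Medieval", 1200, 1500),
            Period("Georgian", 1501, 1830),
            Period("Victorian", 1831, 1901)]
tagged = assign_periods([(0, 1), (30, 40), (80, 60)], timeline, founding_year=1250)
```

With these parameters the building one unit from the centre dates to 1255 (Medieval), while the one a hundred units out dates to 1750 (Georgian).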
H. Southall, Lee Beever, P. Butcher, "Traversing Social Networks in the Virtual Dance Hall: Visualizing History in VR," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.48.
Digital recreations of historical sites and events are important tools both for academic researchers [6,7] and for public interpretation [7,9]. Current 3D visualization and VR technologies enable these recreations to be increasingly immersive and engaging [10,14]. This poster describes a case study based on a mid-twentieth century Chester dance hall, examining the possibilities and limitations of 3D VR for recreating a public music venue which no longer physically exists, and also for visualizing and analyzing the professional network of musicians who played there, and at many other local venues.
Jason Hall, Benjamin Williams, C. Headleand, "Artificial Folklore for Simulated Religions," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.28.
In this paper we use grammar-directed procedural content generation (PCG) techniques to develop folklore, based on the seven basic story plots, for a simulated religion. A hierarchy of values for a simulated community was first generated. Using these values, a variety of deities were procedurally generated, each with its own reflected values and persona. A context-free grammar is then traversed to generate fables appropriate for each deity's persona. The intention of this work is to generate fables which can be used to contextualize a given simulated culture's beliefs.
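A context-free grammar traversal of the kind described can be sketched as follows. The grammar, the persona slots (`DEITY`, `VALUE`) and the toy productions are invented for illustration; the paper's actual grammar and value hierarchy will differ.

```python
import random

# Toy context-free grammar for fables; UPPERCASE tokens are nonterminals
# or persona slots. All productions here are hypothetical examples.
GRAMMAR = {
    "FABLE": ["Long ago, DEITY PLOT, and so the people learned VALUE."],
    "PLOT": ["slew the MONSTER", "journeyed to the PLACE and returned"],
    "MONSTER": ["serpent of the deep", "shadow wolf"],
    "PLACE": ["burning mountain", "sunken city"],
}

def expand(symbol, grammar, persona, rng):
    """Recursively expand a symbol; trailing punctuation is preserved.
    Persona slots are filled from the deity's generated persona."""
    core = symbol.rstrip(".,")
    suffix = symbol[len(core):]
    if core in grammar:
        production = rng.choice(grammar[core])
        body = " ".join(expand(tok, grammar, persona, rng)
                        for tok in production.split())
    else:
        body = persona.get(core, core)  # persona slot, or plain terminal
    return body + suffix

rng = random.Random(7)
persona = {"DEITY": "Ashka", "VALUE": "courage"}  # hypothetical deity persona
fable = expand("FABLE", GRAMMAR, persona, rng)
```

Each call yields one fable; swapping in a different persona dictionary produces fables reflecting a different deity's values.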
F. Zohra, M. Gavrilova, "Adaptive Face Recognition Based on Image Quality," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.35.
The quality of facial images has a great impact on the accuracy of automated face recognition systems. Intraclass variations introduced by varying illumination conditions may significantly degrade the performance of a face recognition system. In this paper, we propose an adaptive discrete wavelet transform (DWT) based face recognition approach that normalizes illumination distortion using regional contrast limited adaptive histogram equalization (CLAHE) and discrete cosine transform (DCT) normalization, selected according to the illumination quality. The DWT-based approach extracts low- and high-frequency facial features at different scales. A weighted fusion of the low- and high-frequency subbands is computed to improve identification accuracy under varying lighting conditions, with the fusion parameters selected using fuzzy membership functions. The performance of the proposed method was validated on the Extended Yale Database B. Experimental results show that the proposed method outperforms several well-known face recognition approaches.
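The subband-fusion step can be illustrated with a one-level 2-D Haar transform, the simplest member of the DWT family. The fixed fusion weights below stand in for the paper's fuzzy-membership-based selection and are purely illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform (a stand-in for the
    paper's DWT stage). Returns the LL subband and the (LH, HL, HH) details."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2              # average / difference of row pairs
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2           # low-low: coarse appearance
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2           # horizontal detail
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2           # vertical detail
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2           # diagonal detail
    return ll, (lh, hl, hh)

def fused_features(img, w_low=0.7, w_high=0.3):
    """Weighted fusion of low- and high-frequency subbands. The weights
    here are fixed illustrative values, not fuzzy-membership outputs."""
    ll, (lh, hl, hh) = haar_dwt2(img)
    high = np.abs(lh) + np.abs(hl) + np.abs(hh)
    return w_low * ll + w_high * high

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "face" image
feats = fused_features(img)
```

Under strong illumination variation, more weight would be shifted toward the high-frequency detail subbands, which are less sensitive to smooth lighting gradients.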
S. Jilani, H. Ugail, A. M. Bukar, Andrew Logan, T. Munshi, "A Machine Learning Approach for Ethnic Classification: The British Pakistani Face," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.27.
Ethnicity is one of the most salient cues to face identity. Analysis of ethnicity-specific facial data is a challenging problem, predominantly carried out using computer-based algorithms. Current published literature focuses on the use of frontal face images. We address the challenge of binary (British Pakistani or other ethnicity) ethnicity classification using profile facial images. The proposed framework is based on the extraction of geometric features from 10 anthropometric facial landmarks, within a purpose-built, novel database of 135 multi-ethnic and multi-racial subjects and a total of 675 face images. Image dimensionality was reduced using Principal Component Analysis (PCA) and Partial Least Squares (PLS) regression, and classification was performed using a linear Support Vector Machine (SVM). The results are promising, with an ethnic classification accuracy of 71.11% using PCA with an SVM classifier, and 76.03% using PLS with an SVM classifier.
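The dimensionality-reduction stage of such a pipeline can be sketched as PCA via SVD. A nearest-centroid rule stands in for the linear SVM here purely to keep the example self-contained; the feature dimension (45, e.g. pairwise distances between 10 landmarks) and the random data are assumptions, not the paper's features.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD of the mean-centred data; returns (mean, components)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def pca_transform(X, mu, components):
    """Project data onto the retained principal components."""
    return (X - mu) @ components.T

rng = np.random.default_rng(0)
# 20 hypothetical geometric feature vectors; 10 landmarks give 45 pairwise distances.
X = rng.normal(size=(20, 45))
y = np.array([0] * 10 + [1] * 10)   # binary ethnicity labels

mu, comps = pca_fit(X, n_components=5)
Z = pca_transform(X, mu, comps)

# Nearest-centroid classification: a self-contained stand-in for the linear SVM.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
```

In the paper's setting, the PCA (or PLS) projections feed a trained SVM rather than this centroid rule, and accuracy is measured against held-out subjects.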
F. Buccafurri, G. Lax, Denis Migdal, S. Nicolazzo, Antonino Nocera, C. Rosenberger, "Contrasting False Identities in Social Networks by Trust Chains and Biometric Reinforcement," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.42.
Fake identities and identity theft are issues of increasing relevance in the social network domain. This paper addresses the problem by proposing an innovative approach that combines a collaborative mechanism implementing a trust graph with keystroke-dynamics recognition techniques to establish trust in identities. The trust of each node is computed on the basis of neighborhood recognition and behavioral biometric support. The model leverages word-of-mouth propagation and a configurable degree of redundancy to obtain robustness. Experimental results show the benefit of the proposed solution even when attack nodes are present in the social network.
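A simplified word-of-mouth trust propagation might look like the following. The per-hop decay factor, the max-based vouching rule, and the keystroke-dynamics confidence scores are all illustrative assumptions, not the paper's exact model.

```python
def propagate_trust(graph, biometric, seed, rounds=5, decay=0.5):
    """Propagate trust from verified seed identities along vouching edges.

    graph: node -> list of nodes that vouch for it.
    biometric: node -> keystroke-dynamics confidence in [0, 1]; a low score
    (e.g. a fake identity typed by someone else) attenuates received trust.
    Trust decays by `decay` per hop, so chains far from the seed earn little.
    """
    trust = {n: 0.0 for n in graph}
    for s in seed:
        trust[s] = 1.0
    for _ in range(rounds):
        new = dict(trust)
        for node, vouchers in graph.items():
            if vouchers:
                vouched = max(trust[v] for v in vouchers) * decay
                new[node] = max(trust[node], vouched * biometric[node])
        trust = new
    return trust

# Hypothetical network: alice is a verified seed; "fake" fails the biometric check.
graph = {"alice": [], "bob": ["alice"], "carol": ["bob"], "fake": ["carol"]}
biometric = {"alice": 1.0, "bob": 0.9, "carol": 0.8, "fake": 0.1}
trust = propagate_trust(graph, biometric, seed={"alice"})
```

The node with a weak biometric score ends up with negligible trust even though it sits on a valid vouching chain, which is the intuition behind combining the trust graph with biometric reinforcement.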
P. Butcher, Panagiotis D. Ritsos, "Building Immersive Data Visualizations for the Web," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.11.
We present our early work on building prototype applications for Immersive Analytics using emerging standards-based web technologies for VR. For our preliminary investigations we visualize 3D bar charts that attempt to resemble recent physical visualizations built in the visualization community. We explore some of the challenges faced by developers in working with emerging VR tools for the web, and in building effective and informative immersive 3D visualizations.
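Standards-based WebVR scenes of the kind described are typically expressed declaratively, for example with A-Frame. The sketch below emits a minimal A-Frame scene containing one `<a-box>` per data value; the bar width, gap, and scene placement are arbitrary choices for illustration, not the authors' layout.

```python
def aframe_bar_chart(values, bar_width=0.5, gap=0.2):
    """Generate a minimal A-Frame scene: one <a-box> bar per data value.

    Each box is lifted by half its height so all bars rest on y = 0,
    and the row is pushed to z = -3 so it sits in front of the camera.
    """
    boxes = []
    for i, v in enumerate(values):
        x = i * (bar_width + gap)
        boxes.append(
            f'<a-box position="{x:.2f} {v / 2:.2f} -3" '
            f'width="{bar_width}" depth="{bar_width}" height="{v}"></a-box>')
    return "<a-scene>\n  " + "\n  ".join(boxes) + "\n</a-scene>"

html = aframe_bar_chart([1.0, 2.5, 1.5])
```

Embedding the returned markup in a page that loads the A-Frame library renders the bar chart as an immersive scene viewable in a headset or a browser.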
I. Pioaru, "Visualizing Virtual Reality Imagery through Digital Holography," 2017 International Conference on Cyberworlds (CW). DOI: 10.1109/CW.2017.50.
Tilt Brush, the Virtual Reality (VR) painting application developed by Google, is an impressive tool for creating digital 3D imagery, offering great new possibilities for art making and the creative industries. At the moment, Tilt Brush is still far from ubiquitous due to constraints such as the high cost of purchasing a VR system and the physical space required to run it. This means, on the one hand, that only a small number of creatives have the opportunity to work with this application and, on the other hand, that their artworks are inaccessible to most people, unless reduced to a 2D experience. However, if images created in VR were turned into holograms, the three-dimensionality of these productions could be preserved and experienced without a VR headset, making them accessible to a wider public. This paper describes the process of transferring a project produced in Tilt Brush into a holographic format in order to obtain a double-parallax hologram, and the difficulties encountered along the way.