D. Roberts, Norman Murray, C. Moore, Toby Duckworth
Summary form only given. The complete presentation was not made available for publication as part of the conference proceedings. A grand challenge shared between computer science and communication technology is reproducing the face-to-face meeting across a distance. At present, we are some way from reproducing many of the semantics of a face-to-face meeting. Furthermore, while each medium can reproduce some of those semantics, no medium can currently reproduce most of them. For example, while some media can show us what someone really looks like, and others what, or whom, they are really looking at, communicating both together has not yet been achieved at reasonable quality across a reasonable distance. This tutorial begins by explaining some of the primary challenges in reproducing the face-to-face meeting and goes on to show how our research is examining both the problems and the solutions. We compare the approaches of "telepresent" video conferencing, immersive virtual environments, and 3D-video-based tele-immersion. A central theme is the communication of appearance and attention. We explain why video conferencing can faithfully reproduce only the first, and virtual reality only the second, and how close free-viewpoint 3D video is coming to doing both. We look at tracking technologies for driving avatars, ranging from eye trackers to the Kinect, and at various ways of capturing people with multi-stream video and reproducing them in 3D video.
DS-RT 2011 Tutorial: Telepresent Humans. D. Roberts, Norman Murray, C. Moore, Toby Duckworth. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.39. Published 2011-10-20.
As the scale of Distributed Virtual Environments (DVEs) grows in terms of participants and virtual entities, using interest management schemes to reduce bandwidth consumption becomes increasingly common in DVE development. Interest matching, the process that determines which data should be sent to participants and which should be filtered out, is essential to most interest management schemes. However, if the computational overhead of interest matching is too high, it is unsuitable for real-time DVEs, for which runtime performance is important. This paper presents a new approach to interest matching that divides the workload of the matching process among a cluster of computers. Experimental evidence shows that our approach is an effective solution for real-time applications.
A Parallel Interest Matching Algorithm for Distributed-Memory Systems. Elvis S. Liu, G. Theodoropoulos. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.34. Published 2011-09-04.
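The core idea above, dividing region-based interest matching across workers, can be illustrated with a minimal sketch. This is not the paper's algorithm; the names (`Region`, `match_partition`, `parallel_match`) and the round-robin partitioning are invented here, and a single process stands in for the cluster nodes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Axis-aligned 2D interest/update region."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def overlaps(self, other: "Region") -> bool:
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)

def match_partition(subscribers, publishers):
    """Match one partition of subscriber regions against all publisher regions."""
    return [(s_id, p_id)
            for s_id, s in subscribers
            for p_id, p in publishers
            if s.overlaps(p)]

def parallel_match(subscribers, publishers, n_workers=4):
    """Split the subscriber list round-robin into n_workers partitions;
    on a real cluster, each partition would be matched on a separate node."""
    partitions = [subscribers[i::n_workers] for i in range(n_workers)]
    matches = []
    for part in partitions:  # stands in for one matching task per node
        matches.extend(match_partition(part, publishers))
    return sorted(matches)
```

Because each partition is matched independently against the publisher set, the work distributes with no coordination beyond merging the result lists.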
P. Gasparello, G. Marino, F. Banno, F. Tecchia, M. Bergamasco
Real-time 3D content distribution over a network (either LAN or WAN) has many possible applications, but it poses several challenges, most notably handling the large amount of data usually associated with 3D meshes. The present paper falls within the well-established context of real-time capture and streaming of OpenGL command sequences, focusing in particular on data compression schemes. We advance beyond the state of the art by improving on previous attempts at "in-frame" geometric compression of 3D structures inferred from generic OpenGL command sequences, and by adding "inter-frame" exploitation of the redundancy in the traffic generated by the typical architecture of interactive applications. Measurements reveal that this combination of techniques yields a very effective reduction of network traffic with a CPU overhead compatible with the requirements of interactive applications, suggesting significant application potential for Internet-based 3D content streaming.
Real-Time Network Streaming of Dynamic 3D Content with In-frame and Inter-frame Compression. P. Gasparello, G. Marino, F. Banno, F. Tecchia, M. Bergamasco. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.24. Published 2011-09-04.
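The "inter-frame" redundancy idea can be sketched in miniature: if consecutive frames reuse most of their command batches, only the changed batches need to be transmitted. This toy version compares content hashes of named batches between frames; the paper's actual scheme operates on inferred 3D structures with geometric compression, which is not reproduced here, and the function names are invented.

```python
import hashlib

def hash_batches(batches):
    """Content hash per named command batch (e.g. one display list or mesh)."""
    return {name: hashlib.sha1(payload).hexdigest()
            for name, payload in batches.items()}

def inter_frame_delta(prev_hashes, curr_batches):
    """Return only the batches whose content changed since the previous
    frame, plus the hash table to carry forward into the next frame."""
    curr_hashes = hash_batches(curr_batches)
    delta = {name: payload for name, payload in curr_batches.items()
             if prev_hashes.get(name) != curr_hashes[name]}
    return delta, curr_hashes
```

In a stream where static geometry (terrain, fixed scenery) dominates, the per-frame delta collapses to the few batches an interactive application actually mutates.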
Virtual Environments (VEs) offer an effective way to study virtual prototypes, including from a user-centered perspective. This paper focuses on the theoretical models that can be used in studying VE users. We present three alternative frames, each with a different focus. The first is the traditional frame in VE studies, the concept of presence; it focuses on the technological components needed to give a VE user the feeling that they are acting with a real product. The second frame focuses on the work task, using an extended version of activity theory. The third frame focuses on the individual user and their behavior and feelings while acting with the (virtual) product; it is based on studies of user experience (UX), especially those that focus on users' emotions. We illustrate the alternative frames with a case of simulating the control cabins of mobile machines.
Three Frames for Studying Users in Virtual Environments: Case of Simulated Mobile Machines. T. Tiainen, A. Ellman, Taina Kaapu. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.21. Published 2011-09-04.
The development of Coalition Battle Management Language (C-BML) has provided a new set of tools that enable C2-Simulation interoperability to be achieved using common, international standards rather than the customised and sometimes ad hoc methods used previously. Furthermore, it permits complex, multi-national coalition federations to be designed and executed. NATO MSG-048 was established in 2006 to evaluate C-BML as a practical means of achieving C2-Simulation interoperability and, as part of its chartered activities, conducted a number of C-BML experiments culminating in an ambitious experiment in November 2009. MSG-085 has been established to build on the earlier work of MSG-048, and its programme of work includes focussed experimentation and evaluation of different C-BML systems. This paper addresses issues in early implementations of the next generation of C-BML, including multiple simulations, translators, the Military Scenario Definition Language, and C-BML middleware. It is intended to complement companion papers on servers and grammar in a session on distributed military simulation in coalitions.
UK Experiences of Using Coalition Battle Management Language. Adam Brook. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.25. Published 2011-09-04.
Over the last decade, a shift of information and communication systems from a purely system level to the social level has become observable. However, integrating social and cognitive aspects into so-called social computing is not an easy task, owing to conceptual differences between the domains. Socio-Technical System (STS) is a recent term intended to differentiate a social system mediated by natural sciences from one mediated by information technology. Even if the mediation of social and cognitive aspects is "theoretically" governed by technology, the gap between "socio" and "technical" is historical and huge. Furthermore, since technical systems become more intelligent in their interaction with people, and more pervasive, with every passing year, special attention should be given to modelling both social and technical components and the interaction between them. For example, when modelling (and simulating) an emergency evacuation of a public facility, the possible availability of technology at the environment level (e.g., situation-aware exit signs, interactive displays) and the personal level (e.g., cell phones, specialized wearables), along with its social and cognitive influence, must not be overlooked. To address this challenge, we have integrated a cognitive decision-making model, abstracted from psychological, neurological, and social theories of human behaviour in evacuation situations, into a CA-based simulation.
Focusing on a scenario in which a small population of agents is technologically assisted, some of the most interesting findings are: (i) including a representative and authentic social behaviour model in the modelling of a socio-technical system produces fundamental differences in methodology; (ii) the technologically assisted agents emerge as leaders during evacuation, changing the intentions of many agents within their influence; and (iii) even a small population of such leaders within a sufficiently large population is enough to guarantee a remarkable difference, particularly by improving the usage of otherwise under-utilized exits.
Evacuation Simulation Based on Cognitive Decision Making Model in a Socio-Technical System. K. Zia, A. Riener, A. Ferscha, A. Sharpanskykh. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.16. Published 2011-09-04.
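The leader-influence effect described in the evacuation abstract above can be caricatured in a few lines. This is a deliberately minimal, hypothetical model, not the paper's cognitive decision-making model or its CA: agents stand in a corridor with an exit at each end, everyone habitually heads for the "main" exit, and a technologically assisted agent knows the nearer exit and sways agents within its influence radius.

```python
def assign_exits(n_agents, assisted, influence=2):
    """Toy influence model. Agents occupy positions 0..n_agents-1; by habit
    every agent chooses the 'main' exit. Each assisted agent knows the nearer
    of the two exits and agents within `influence` positions follow it."""
    choices = ["main"] * n_agents
    for i in assisted:
        nearer = "main" if i < n_agents // 2 else "alt"
        for j in range(max(0, i - influence), min(n_agents, i + influence + 1)):
            choices[j] = nearer
    return choices
```

Even this caricature reproduces finding (iii) in outline: a single assisted agent near the far end of the corridor redirects its neighbourhood to the otherwise unused "alt" exit.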
The Hierarchically Distributed Tree (HD Tree) is a novel distributed data structure built over a complete tree. The purpose of this new data structure is to better support multi-dimensional range queries in a distributed environment. The HD Tree doubles the number of neighbors at the cost of doubling the total links of a tree, and its routing operation is designed to be highly error-resilient. In the HD Tree, the routing table size is determined by the system parameter k, and the performance of all basic operations is bounded by O(lg n). Multiple routing options exist between any two nodes in the system. This paper explores fault-tolerant routing strategies in the HD Tree. The experimental results show very limited, barely noticeable increases in routing cost when conducting range queries in an error-prone routing environment; the maximum failure rate we tested was about 5 percent of routing nodes. The results also indicate that higher fault tolerance requires finer consideration in the design of the error-resilient routing strategy.
Error-Resilient Routing for Supporting Multi-dimensional Range Query in HD Tree. YunFeng Gu, A. Boukerche. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.14. Published 2011-09-04.
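To make the O(lg n) routing bound concrete, here is the baseline case the HD Tree builds on: routing over the underlying complete binary tree alone, with heap-style node labels. The HD Tree's extra cross links and its fault-tolerant detours are not modelled here; the labelling scheme is a common convention assumed for illustration.

```python
def tree_path(a: int, b: int) -> list:
    """Path between two nodes of a complete binary tree with heap-style
    labels (root = 1; children of node i are 2i and 2i+1). Both endpoints
    climb toward their lowest common ancestor, so the hop count is at most
    twice the tree height, i.e. O(lg n)."""
    up_a, up_b = [], []
    while a != b:
        if a > b:           # the deeper-labelled node climbs one level
            up_a.append(a)
            a //= 2
        else:
            up_b.append(b)
            b //= 2
    return up_a + [a] + up_b[::-1]
```

In the HD Tree, the doubled neighbor set means a failed node on such a path can be bypassed via an alternative route, which is what the paper's error-resilient strategies exploit.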
A feature of standard video-mediated communication (VMC) systems is that participants see into each other's spaces from the viewpoint of a camera. Consequently, participants' capacity to use the spatially-based resources that exist in co-located settings (e.g., the production and comprehension of pointing and eye-gaze direction) can be compromised. While positioning cameras close to displays, or switching or interpolating between multiple cameras to provide appropriately aligned views, can reduce this problem, an alternative paradigm is the use of immersive projection technology to locate participants within an immersive collaborative virtual environment (ICVE), in which remote participants appear as 3D graphical representations. Two approaches to representing remote participants in ICVEs have been studied: embodied avatars animated using participants' tracked body motion, and vision-based techniques that reconstruct 3D models from multiple streams of live video input. Drawing on empirical evaluations of an avatar-based ICVE system that both captures and displays eye movement, together with an examination of previous research into gaze, we provide a specification of the gaze practices, and the cues used in the perception of gaze, that should be supported in ICVEs. We delineate some of the challenges for vision-based ICVEs and discuss the potential for combining different approaches in the development of such systems.
Some Implications of Eye Gaze Behavior and Perception for the Design of Immersive Telecommunication Systems. John P Rae, W. Steptoe, D. Roberts. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.37. Published 2011-09-04.
With the increasing use of 3D displays and input devices, we need to be sure that when 3D worlds are created, their users can easily learn how to operate within them. To do this we can provide the user with contextual interaction support within the environment. In virtual worlds where you are free to move around, and especially when you are immersed, having to refer to a manual to ascertain your next course of action would not be well received by the user. Instead of manuals separate from the computer system, the system should be able to interrogate itself to provide the user with information on what it can do. For computer systems to do this, we need to move away from defining interaction using an event-based model and towards formally defining the interaction dialogue. We have shown how, by using ATNs, you can allow the user to ask what they can do within the current context. The user can also query the system to see how they can perform a specific task. The help provided can also identify to the user the components within the environment that they need to interact with. Further work has begun to examine how the user could adapt the interaction within the system by visualising the ATN.
Contextual Interaction Support in 3D Worlds. Norman Murray. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.19. Published 2011-09-04.
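The two queries the abstract describes, "what can I do here?" and "how do I perform task X?", fall out naturally once the interaction dialogue is a formal transition network. The sketch below models a dialogue as a plain state-to-actions table; the states and actions are invented for illustration, and the real ATNs in the paper carry richer conditions and registers than this toy captures.

```python
from collections import deque

# Toy dialogue network for a maintenance task: each state maps the user's
# available actions to successor states. All names here are invented.
ATN = {
    "idle": {"pick up wrench": "holding wrench"},
    "holding wrench": {"loosen bolt": "bolt loose", "put down wrench": "idle"},
    "bolt loose": {"remove panel": "panel off"},
    "panel off": {},
}

def available_actions(state):
    """'What can I do here?' -- read the out-edges of the current state."""
    return sorted(ATN[state])

def how_to(state, goal):
    """'How do I get to <goal>?' -- breadth-first search over the network
    for the shortest action sequence, or None if the goal is unreachable."""
    queue = deque([(state, [])])
    seen = {state}
    while queue:
        s, path = queue.popleft()
        if s == goal:
            return path
        for action, nxt in ATN[s].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None
```

Because the help is derived from the same network that drives the interaction, it stays consistent with what the system can actually do, which is the self-interrogation property the abstract argues for.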
Simulation-based decision support is an important tool in business, science, engineering, and many other areas. Although traditional simulation analysis can be used to generate and test possible plans, it suffers from a long cycle time for model update, analysis, and verification. It is thus very difficult to carry out prompt "what-if" analysis in response to abrupt changes in the physical systems being modeled. Symbiotic simulation has been proposed as a way of solving this problem by having the simulation system and the physical system interact in a mutually beneficial manner: the simulation system benefits from real-time input data, which is used to adapt the model, and the physical system benefits from the optimized performance obtained from the analysis of simulation results. This talk will present a classification of symbiotic simulation systems with examples of applications from the literature. An analysis of these applications reveals some common aspects and issues that are important for symbiotic simulation systems. From this analysis, we have specified a generic agent-based framework for symbiotic simulation. We show that it is possible to identify a few basic functionalities that can be provided by corresponding agents in our framework; these can then be composed by a specific workflow to form a particular symbiotic simulation system. Finally, the talk will discuss the use of symbiotic simulation as a decision support tool for understanding and steering complex adaptive systems. Some examples of current applications being developed at Nanyang Technological University will be described.
Symbiotic Simulation and Its Application to Complex Adaptive Systems. S. Turner. In: 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. DOI: 10.1109/DS-RT.2011.36. Published 2011-09-04.
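The mutually beneficial loop described above, live data calibrates the model, the model's what-if runs steer the physical system, can be sketched in a few functions. Everything here is a hypothetical stand-in: a queueing-flavoured toy where the "model" is just an arrival rate, "plans" are server counts, and the what-if score is a crude congestion-plus-staffing cost, none of which comes from the talk.

```python
def calibrate(rate, measurement, alpha=0.5):
    """Adapt the model from real-time input: exponential smoothing of the
    observed arrival rate into the simulation's rate parameter."""
    return (1 - alpha) * rate + alpha * measurement

def what_if(rate, servers):
    """Toy what-if score for one candidate plan: an M/M/1-style congestion
    penalty plus a per-server staffing cost (lower is better)."""
    util = rate / servers
    congestion = float("inf") if util >= 1 else util / (1 - util)
    return congestion + 0.5 * servers

def symbiotic_step(rate, measurement, plans):
    """One symbiotic cycle: calibrate from live data, evaluate every plan
    by simulation, return the updated model and the best plan."""
    rate = calibrate(rate, measurement)
    best = min(plans, key=lambda servers: what_if(rate, servers))
    return rate, best
```

Running this step repeatedly as measurements arrive is the skeleton of the workflow composition the talk describes, with `calibrate` and `what_if` playing the roles that dedicated agents would play in the framework.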