Identification of the seaweed fluorescence spectroscopy based on the KPCA and ICA-SVM
Pub Date : 2012-07-02 DOI: 10.1109/VECIMS.2012.6273183
Jiangtao Lv, Zhenhe Ma
Water pollution is a serious problem, and seaweed is an important indicator of eutrophication and thus an important aspect of pollution. A three-dimensional fluorescence spectrum captures the complete fingerprint of the fluorescence emitted across the excitation and emission wavelength ranges, but its dimensionality is high and the characteristic spectra of different kinds of pelagic plants vary widely, which makes identification complex. In this paper, kernel principal component analysis (KPCA) is used to reduce the dimensionality of the spectra. Independent component analysis (ICA) then decomposes the KPCA-processed data from the perspective of statistical independence to extract its main features, and a support vector machine (SVM) classifies the features extracted by the ICA. Correct laboratory classification of the seaweed is achieved. Experimental results indicate that the method can identify the chief components of mixed seaweed samples, performs effective feature extraction on the high-dimensional spectral data, greatly increases classification speed, and reaches a classification accuracy of 90%.
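As a rough illustration of the KPCA, ICA and SVM chain described above, the sketch below wires scikit-learn's KernelPCA, FastICA and SVC together on random data standing in for unfolded excitation-emission matrices; the data shapes, component counts and kernel settings are assumptions, not the paper's.

```python
# Minimal sketch of a KPCA -> ICA -> SVM classification pipeline (scikit-learn).
# Random spectra stand in for real 3-D fluorescence data; all sizes are placeholders.
import numpy as np
from sklearn.decomposition import KernelPCA, FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1500))            # 200 samples, each an unfolded EEM (assumed shape)
y = rng.integers(0, 3, size=200)            # three seaweed classes (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    KernelPCA(n_components=30, kernel="rbf"),       # nonlinear dimensionality reduction
    FastICA(n_components=10, random_state=0),       # extract statistically independent features
    SVC(kernel="rbf", C=1.0),                       # classify the extracted features
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```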
{"title":"Identification of the seaweed fluorescence spectroscopy based on the KPCA and ICA-SVM","authors":"Jiangtao Lv, Zhenhe Ma","doi":"10.1109/VECIMS.2012.6273183","DOIUrl":"https://doi.org/10.1109/VECIMS.2012.6273183","url":null,"abstract":"The problem of water pollution is very serious. The seaweed is an important feature of eutrophication. It is an important aspect of pollution. Three-dimensional fluorescence spectrum can show entire fingerprint information of fluorescent light that in the range of excitation and emission wavelength, but the dimension of three-dimensional fluorescence spectrum is higher, the characteristic spectrum of different kinds pelagic plant are multifarious, it is complex identification. The kernel principal component analysis (KPCA) is used in this paper. It can reduce the dimensions of the spectroscopy. The independent component analysis (ICA) is used to do the matrix decomposition from the perspective of independence to extract the main feature of the spectroscopy data processed by the KPCA. The support vector machine (SVM) is used to assort the main characteristic root books which are abstracted by the ICA. The correct laboratory sorting of seaweed is realized. Experimental result indicate, this method can identify the chief component of admixture seaweed, the high dimensional spectroscopy information of seaweed is proceed effective feature extraction, the sorting speed is increase greatly, the discrimination of sorting is reach 90% percent.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"319 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124517023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modified optical flow based method for registration of 4D CT data of hepatocellular carcinoma patients
Pub Date : 2012-07-02 DOI: 10.1109/VECIMS.2012.6273180
Hui Wang, Yong Yin, Hongjun Wang, G. Gong
This paper extends the two-dimensional optical flow registration method to a three-dimensional optical flow method (3D-OFM) to deform four-dimensional computed tomography (4D CT) images. 4D CT can capture the effect of breathing motion on liver tumors and track the tumors' trajectories. The algorithm was applied to the fully automated registration of 4D CT data for hepatocellular carcinoma (HCC) patients. According to the respiratory phase, the 4D CT data were segmented into 10 CT series, named CT0, CT10, …, CT90, where CT0 is at end-inspiration and CT50 is at end-expiration. In particular, iodipin was used to help define the gross target volume of the HCC and improve image contrast. The iodipin deposits were also used as markers to refine the registration method and verify its performance. We chose the end-exhale CT as the reference image. To verify the registration performance of this method, we qualitatively compared the subtractions of the floating images and the reference image, as well as the iodipin deposition regions, before and after registration, and we quantitatively computed the mutual information between the floating images and the reference image for 8 patients. The improvement in mutual information between the end-exhale CT and the end-inhale CT ranges from 1.71% to 4.17%.
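To make the registration idea concrete, here is a toy NumPy/SciPy sketch of one demons-style 3-D optical-flow displacement update of the kind such methods iterate; the synthetic volumes, smoothing and iteration count are assumptions, and the paper's full 3D-OFM and 4D CT handling are not reproduced.

```python
# Toy 3-D optical-flow / demons-style registration step on synthetic volumes.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def flow_step(fixed, moving, sigma=2.0, eps=1e-6):
    """One update pulling `moving` toward `fixed` along the fixed-image gradient."""
    gz, gy, gx = np.gradient(fixed)
    diff = fixed - moving
    denom = gz ** 2 + gy ** 2 + gx ** 2 + diff ** 2 + eps
    update = np.stack([diff * gz / denom, diff * gy / denom, diff * gx / denom])
    return gaussian_filter(update, sigma=(0, sigma, sigma, sigma))   # regularize the field

def warp(volume, u):
    """Resample `volume` with displacement field `u` (trilinear interpolation)."""
    grid = np.indices(volume.shape).astype(float)
    return map_coordinates(volume, grid + u, order=1, mode="nearest")

# Smoothed synthetic volumes standing in for two respiratory phases (e.g. CT50 vs CT0).
fixed = np.zeros((32, 64, 64)); fixed[10:20, 20:40, 20:40] = 1.0
fixed = gaussian_filter(fixed, 2.0)
moving = np.roll(fixed, 2, axis=1)            # a 2-voxel breathing-like shift (assumed)

u = np.zeros((3,) + fixed.shape)
for _ in range(25):                           # a few fixed-point iterations
    u += flow_step(fixed, warp(moving, u))
print("MSE before:", float(np.mean((moving - fixed) ** 2)),
      "after:", float(np.mean((warp(moving, u) - fixed) ** 2)))
```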
{"title":"A modified optical flow based method for registration of 4D CT data of hepatocellular carcinoma patients","authors":"Hui Wang, Yong Yin, Hongjun Wang, G. Gong","doi":"10.1109/VECIMS.2012.6273180","DOIUrl":"https://doi.org/10.1109/VECIMS.2012.6273180","url":null,"abstract":"This paper expanded two-dimensional optical flow registration method to three- dimensional optical flow method (3D-OFM) to deform the four-dimensional computed tomography (4D CT) images. 4D CT can investigate the effect of breathing motion to the liver tumors and track the “trajectory” of the tumors. This algorithm was applied to the fully automated registration of 4D CT data for hepatocellular carcinoma patients. According to the respiratory phase, 4D CT data was segmented into 10-series CT images which were named CT0, CT10…CT90, and CT0 is at end inspiration and CT50 is at end expiration. In particular, The iodipin was used to help define the gross target volume of HCC to improve the image contrast. In addition, The iodipin also used to improve the registration method and verify the performance of registration method as the markers. We chose the end-exhale CT as the reference image. To verify the registration performance of this method, we qualitatively compared the subtractions of floating images and reference image and the iodipin deposition regions before and after registered, and we quantificationally computed the multi-information between floating images and reference image of 8 patients. The improving proportion of multi-information between end-exhale CT and in-exhale CT is from 1.71% to 4.17%.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129892270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera control based on rigid body dynamics for virtual environments
Pub Date : 2009-05-11 DOI: 10.1109/VECIMS.2009.5068922
S. R. D. Santos, Bruno M. F. Silva, J. Oliveira
Desktop Virtual Environments (VEs) are an attractive choice for low-cost virtual reality systems. However, the level of virtual presence — the subjective sensation of perceiving oneself immersed within an environment — that desktop VEs offer is lower than that of immersive ones. Several approaches try to reduce this gap in presence between the two types of VEs, for instance by improving interaction through an environment that supports physically based behavior, or by displaying high-quality graphics that increase overall scene realism. Most of these approaches have been employed in the design of desktop VEs, but they often lacked a formal evaluation of their real impact on the level of presence afforded. This paper presents our work and research findings on developing a model to control camera travel based on rigid body dynamics. A physics engine simulates the dynamics involved in altering the camera's physical parameters — such as forces, torques, and the tensor of inertia — which, in turn, control viewpoint orientation and locomotion. Our aim was twofold: to propose a camera whose movement behaviors are easy to program, and to engage users of a desktop VE in a richer interaction experience capable of raising their level of presence. We designed an experimental study with a group of volunteers and measured their performance in controlling the camera as well as their reported virtual presence using Witmer and Singer's Presence Questionnaire.
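A minimal sketch of a physics-driven camera in the spirit described here: applied forces and torques are integrated into linear and angular velocity, which then update the viewpoint's position and orientation. The mass, inertia, damping and input values are illustrative assumptions, not the paper's parameters or engine.

```python
# Rigid-body camera sketch: force/torque -> velocity/angular velocity -> pose.
import numpy as np

class DynamicCamera:
    def __init__(self, mass=1.0, inertia=np.eye(3)):
        self.mass, self.inertia_inv = mass, np.linalg.inv(inertia)
        self.pos = np.zeros(3); self.vel = np.zeros(3)
        self.R = np.eye(3)                  # orientation as a rotation matrix
        self.omega = np.zeros(3)            # angular velocity

    def step(self, force, torque, dt, damping=0.5):
        # Linear motion: semi-implicit Euler with simple viscous damping.
        self.vel += (force / self.mass - damping * self.vel) * dt
        self.pos += self.vel * dt
        # Angular motion: integrate omega, then rotate R by angle |omega|*dt (Rodrigues).
        self.omega += (self.inertia_inv @ torque - damping * self.omega) * dt
        theta = np.linalg.norm(self.omega) * dt
        if theta > 1e-9:
            axis = self.omega / np.linalg.norm(self.omega)
            K = np.array([[0, -axis[2], axis[1]],
                          [axis[2], 0, -axis[0]],
                          [-axis[1], axis[0], 0]])
            self.R = (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K) @ self.R

cam = DynamicCamera()
for _ in range(60):                         # one second at 60 Hz: push forward, yaw slightly
    cam.step(force=np.array([0, 0, -2.0]), torque=np.array([0, 0.3, 0]), dt=1 / 60)
print("camera position:", cam.pos.round(3))
```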
{"title":"Camera control based on rigid body dynamics for virtual environments","authors":"S. R. D. Santos, Bruno M. F. Silva, J. Oliveira","doi":"10.1109/VECIMS.2009.5068922","DOIUrl":"https://doi.org/10.1109/VECIMS.2009.5068922","url":null,"abstract":"Desktop Virtual Environments (VEs) are an attractive choice for low-cost virtual reality systems. However, the level of virtual presence — the subjective sensation of perceiving oneself immersed within an environment — that desktop VEs offer is lower than immersive ones. There exist approaches that try to reduce the gap between the level of presence in these two types of VEs. For instance, to improve the interaction aspect by providing an environment that supports physically based behavior or to display high quality graphics that increases the overall scene realism. Most of these approaches have been employed in the design of desktop VEs, but often lacked a formal evaluation of their real impact on the level of presence afforded. This paper presents our work and research findings on developing a model to control camera travel based on rigid body dynamics. A physics engine simulates the dynamics involved in altering the cameras physical parameters — such as forces, torques, tensor of inertia — which, in turn, controls viewpoint orientation and locomotion. Our aim was twofold: to propose a camera whose movement behaviors were easy to program, and; to engage users of a desktop VE in a richer interaction experience capable of raising their level of presence. We designed an experimental study with a group of volunteers and measured their performance in controlling the camera as well as their reported virtual presence using the Witmer and Singers Presence Questionnaire.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115307988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotions and Interactive Agents
Pub Date : 2006-07-10 DOI: 10.1109/VECIMS.2006.250818
Ildikó Pelczer, F. C. Contreras, F. G. Rodríguez
Research on ways to improve the efficiency of human-computer interaction has increased over the last decade. Avatars, as animated software agents, represent a new paradigm for this field. In this paper we present a computer model of emotions that can be easily integrated into an avatar-based interaction context. Enhanced with the emotion model, the avatars manifest expressive behavior that is ultimately influenced by their personality definitions. Interaction is facilitated by the emotional feedback the user receives through the full-body behavior of the avatar, which is determined by its overall emotional state.
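As a concrete (and purely illustrative, not the authors') example of such a model, the sketch below keeps an emotion state that appraisal events update, scaled by a personality trait and decaying toward neutral; the trait names, gains and decay factor are assumptions.

```python
# Illustrative avatar emotion-state sketch: events update intensities, personality scales them.
from dataclasses import dataclass, field

@dataclass
class Avatar:
    personality: dict                                      # e.g. {"extraversion": 0.7, "neuroticism": 0.6}
    emotions: dict = field(default_factory=lambda: {"joy": 0.0, "anger": 0.0, "fear": 0.0})

    def appraise(self, event_impacts, decay=0.9):
        """Blend event impacts into the current state, scaled by a personality gain."""
        gain = 0.5 + 0.5 * self.personality.get("neuroticism", 0.5)   # assumed scaling rule
        for emotion, impact in event_impacts.items():
            self.emotions[emotion] = min(1.0, decay * self.emotions[emotion] + gain * impact)

    def dominant_emotion(self):
        return max(self.emotions, key=self.emotions.get)

agent = Avatar(personality={"extraversion": 0.7, "neuroticism": 0.6})
agent.appraise({"joy": 0.4})
agent.appraise({"anger": 0.3, "joy": 0.1})
print(agent.dominant_emotion(), agent.emotions)            # drives the full-body expression
```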
{"title":"Emotions and Interactive Agents","authors":"Ildikó Pelczer, F. C. Contreras, F. G. Rodríguez","doi":"10.1109/VECIMS.2006.250818","DOIUrl":"https://doi.org/10.1109/VECIMS.2006.250818","url":null,"abstract":"Research regarding the manners in which the efficiency of human-computer interactions can be improved has increased over the last decade. Avatars, as animated software agents, represented a new paradigm for this field. In this paper we present a computer model of emotions that can be easily integrated in an avatar-based interaction context. Enhanced with the emotion model the avatars manifest an expressive behavior, ultimately influenced by their personality definitions. Interaction is facilitated by the emotional feedback user receives through the full body behavior of the avatar determined on its overall emotional state.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134258947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Localization Method Based on Infrared Detectors for Surveillance Areas
DOI: 10.1109/VECIMS.2006.250789
C. Barna, M. Stratulat
This article presents a localization method based on a network of infrared sensors. Each detector provides only a binary signal, which by itself offers no possibility of localization. However, by observing how the sensors react when their parameters are modified and by using a network of sensors, we establish a localization method. The method is based on analyzing the simultaneous activations and the temporal evolution of the output signal of each detector. Using this fuzzy method, not only can the location of the intrusion be obtained, but the heated surface of the object that produced the intrusion can also be estimated.
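The toy sketch below shows one way such a scheme could be realized: the intrusion position is estimated as a weighted centroid of the simultaneously active detectors, with activation duration as a fuzzy membership weight and the number of active detectors as a rough size cue. The sensor layout, weights and size indicator are assumptions, not the paper's method.

```python
# Fuzzy weighted-centroid localization from a network of binary IR detectors (toy setup).
import numpy as np

sensor_positions = np.array([[0, 0], [0, 4], [4, 0], [4, 4], [2, 2]], dtype=float)  # assumed layout
activation_time = np.array([0.0, 1.2, 0.3, 2.0, 1.5])    # seconds active in the window; 0 = silent

weights = activation_time / activation_time.max()         # fuzzy membership in [0, 1]
active = weights > 0
centroid = (weights[active][:, None] * sensor_positions[active]).sum(axis=0) / weights[active].sum()
size_indicator = weights.sum() / len(weights)              # more/longer activations -> larger heated surface
print("estimated position:", centroid.round(2), " size indicator:", round(size_indicator, 2))
```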
{"title":"A Localization Method Based on Infrared Detectors for Surveillance Areas","authors":"C. Barna, M. Stratulat","doi":"10.1109/VECIMS.2006.250789","DOIUrl":"https://doi.org/10.1109/VECIMS.2006.250789","url":null,"abstract":"This article presents a localization method based on a net of infrared sensor. Each of the detectors provide o binary signal without any possibility to establish localization. But observing the reaction of the sensors when parameters are modified and using a net of sensors we establish a localization method. This method is based on analyzing the simultaneous activations and the development in time of the output signals of each detector. Also, using this fuzzy method, it can be obtain not only the localization of the intrusion but also it can be estimated the heating surface of the object which produced the intrusion","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115400213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed archiving storage system based on CAS
DOI: 10.1109/VECIMS.2009.5068926
Ligu Zhu, Zhiwei Sun, Hao Liu
With the rapid growth of data, the increasing volume of material converted into the digital domain creates a growing need for archival storage. Archival data is rarely removed, remains closely referenced by its applications, and will only continue to grow, so an archival storage system with scalability and powerful search capability is desired. In this paper, a novel distributed archival storage system based on content-addressable storage (CAS) is presented. First, a network protocol based on standard HTTP, defining the communication between the CAS server and the CAS client, is designed and developed; data stored in the CAS can be mapped to a secure URL. Each CAS client is built on a web server and a database server, which allows the CAS to be integrated with applications easily and tightly. Second, an object-oriented data model with good manageability and strong search capability is established by integrating the metadata of files and their relationships. Users can browse and search archival data through the web. With the HTTP-based CAS protocol and the data object model, a distributed archival storage system based on content and objects is achieved, with powerful content navigation and search capability.
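A minimal content-addressable storage sketch in the spirit of this design: content is stored under its digest, so every object maps to a stable URL-style address that a client could later fetch over HTTP. The base URL, SHA-256 hash choice and in-memory store are assumptions; the paper's actual protocol and metadata model are not reproduced.

```python
# Toy content-addressable store: the object's address is derived from its content hash.
import hashlib

class ContentStore:
    def __init__(self, base_url="http://cas.example.org/objects/"):   # hypothetical server URL
        self.base_url = base_url
        self._blobs = {}                       # stands in for the CAS server's back end

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blobs[digest] = data             # idempotent: same content, same address
        return self.base_url + digest          # URL-style key the client can later GET

    def get(self, url: str) -> bytes:
        return self._blobs[url.removeprefix(self.base_url)]

store = ContentStore()
url = store.put(b"scanned archival document, page 1")
assert store.get(url) == b"scanned archival document, page 1"
print(url)
```

Because the address is a pure function of the content, duplicate submissions collapse to one stored object and any node holding the blob can answer a request for that URL, which is what makes the scheme easy to distribute.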
{"title":"Distributed archiving storage system based on CAS","authors":"Ligu Zhu, Zhiwei Sun, Hao Liu","doi":"10.1109/VECIMS.2009.5068926","DOIUrl":"https://doi.org/10.1109/VECIMS.2009.5068926","url":null,"abstract":"With the rapid growth of data, the need for archival storage is the increasing volume of material converted into the digital domain. The archival data is not removed rather closely referenced by its appliance, will only continue to grow; archival storage system with scalability and powerful search ability is desired. In this paper, a novel distributed archival storage based on CAS is brought. First, a network protocol based on standard HTTP defining the communication between CAS server and CAS client is designed and developed; data stored in CAS can be mapped into a secure URL. Each CAS client is constructed with WEB server and database server, this make CAS can be integrated with application easily and tightly. Second, an object oriented data model with good management and strong search ability is established, by integrating the metadata of files and their relationship. Users can browse and search archival data through WEB. With the help of CAS protocol based on HTTP and data object model, a distributed archival storage system based on content and object is achieved with powerful content navigation and search ability.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127937245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing and quantifying human movement patterns through haptic-based applications
DOI: 10.1109/VECIMS.2005.1567578
M. Trujillo, Abdulmotaleb El Saddik
Biometrics has recently been introduced to identify people by their behavioral and physiological features. It offers a wide range of applications for detecting fraud attempts in organizations, corporations, educational institutions, electronic resources, and even crime scenes. The field of biometrics can be divided into two main classes: features that humans are born with, such as fingerprints or facial features, and behavioral characteristics, such as a handwritten signature or voice (J. Ortega-Garcia et al., 2004). The work presented in this paper pursues the latter class, specifically how a person reacts when using everyday devices or tools. The hypothesis that motivated this work is that people's habits in handling devices can be exploited to identify individuals. Among the many examples of the potential use of this class of biometrics are the particular force applied to the keys of a keyboard, the time interval between keypresses when dialing a telephone number, and the path traced by the fingers while navigating through a maze-solving task. The objective of this research is to extract these features using a haptic-based application and to define the resulting individual pattern. A framework that identifies behavioral patterns through physical parameters such as direction, force, pressure, and velocity has been built. The setup for the experimental work consisted of a multisensory tool using the Reachin system (Reachin Technologies, User's Programmers Guide and API).
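As an illustration of turning a haptic trace into a behavioral signature, the sketch below computes simple direction, force and velocity statistics from sampled position and force data; the specific features, sampling rate and synthetic trace are assumptions, not the paper's feature set.

```python
# Extract a small behavioral feature vector from a haptic trace (positions + forces).
import numpy as np

def haptic_features(positions, forces, dt=0.01):
    """positions: (N, 3) stylus path; forces: (N, 3) applied force; returns a feature vector."""
    velocity = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    force_mag = np.linalg.norm(forces, axis=1)
    direction = velocity / (np.linalg.norm(velocity, axis=1, keepdims=True) + 1e-9)
    return np.array([
        speed.mean(), speed.std(),                    # how fast and how evenly the user moves
        force_mag.mean(), force_mag.std(),            # how hard and how consistently they press
        np.abs(np.diff(direction, axis=0)).mean(),    # how often the movement direction changes
    ])

rng = np.random.default_rng(1)
trace_pos = np.cumsum(rng.normal(scale=0.01, size=(500, 3)), axis=0)   # synthetic stylus path
trace_force = rng.normal(loc=[0, 0, 1.0], scale=0.1, size=(500, 3))    # synthetic contact force
print(haptic_features(trace_pos, trace_force).round(4))
```

Feature vectors like this one could then be compared across sessions (e.g. with any standard classifier) to test whether a user's handling habits are distinctive.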
{"title":"Recognizing and quantifying human movement patterns through haptic-based applications","authors":"M. Trujillo, Abdulmotaleb El Saddik","doi":"10.1109/VECIMS.2005.1567578","DOIUrl":"https://doi.org/10.1109/VECIMS.2005.1567578","url":null,"abstract":"Biometrics has been introduced recently to identify people by their behavior and physiological features. It offers a wide application scope to detect fraud attempts in organizations, corporations, educational institutions, electronic resources and even crime scenes. The field of biometrics can be divided into two main classes according to features that humans are born with, such as fingerprints or facial features, or behavioral characteristics of humans, like a handwritten signature or voice (J. Ortega-Garcia et al., 2004). The work presented in this paper pursues the latter class, specifically how a person reacts to using daily devices or tools. The fact that we can exploit people's habits in handling devices to identity individuals was the hypothesis that motivated this work. Among the many examples of the potential use of this class of biometrics is the particular force applied to the keys in a keyboard. There is also the time interval between each keypad when dialing a telephone number. Another example that can be extracted from the latter would be the map described by the fingers in navigating through solving maze operation. Extracting these features by using a haptic-based application and defining the subsequent individual pattern is the objective of this research. A framework that identifies behavioral patterns through physical parameters such as direction, force, pressure and velocity has been built. The set up for the experimental work consisted of a multisensory tool, using the Reachin system (Reachin Technologies, User's Programmers Guide and API).","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124199493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DreamWorld: CUDA-accelerated real-time 3D modeling system
DOI: 10.1109/VECIMS.2009.5068887
Shujun Zhang, Cong Wang, X. Shao, Wei Wu
3D modeling plays an important role in virtual reality interaction, immersive tele-presence, and other applications. A CUDA-accelerated real-time 3D modeling system is presented in this paper. It captures multi-view images of objects and achieves real-time, accurate visual hull reconstruction with texture mapping. The system includes off-line camera calibration and on-line visual hull modeling based on our DreamWorld hardware. The multi-camera modeling process is composed of distributed image acquisition, silhouette extraction, data transmission, visual hull computation, and rendering. Initially, the volumetric visual hull is computed through an intersection test of voxels with each silhouette; then a CUDA-based simplified exact marching cubes algorithm is applied to obtain a polyhedral mesh model for texture mapping and rendering. Preliminary experimental results from both synthetic and real data show the system's accuracy, stability, and real-time performance.
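The sketch below shows, in plain NumPy, the voxel-versus-silhouette intersection test at the heart of visual hull computation: a voxel is kept only if it projects inside every silhouette. The orthographic projection matrices and circular silhouettes are toy assumptions, and the CUDA acceleration and marching-cubes stages are omitted.

```python
# Voxel-based visual hull: keep voxels that project inside all silhouettes (toy cameras).
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """grid: (N, 3) voxel centres; silhouettes: boolean masks; projections: 3x4 matrices."""
    ones = np.ones((grid.shape[0], 1))
    inside = np.ones(len(grid), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = np.hstack([grid, ones]) @ P.T                 # project voxel centres (homogeneous)
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        valid = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(grid), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]               # inside this view's silhouette?
        inside &= hit
    return grid[inside]

# Toy scene: a 16^3 grid, two orthographic views, and circular silhouettes (assumed setup).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 16)] * 3, indexing="ij"), -1).reshape(-1, 3)
yy, xx = np.mgrid[0:128, 0:128]
circle = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
P_front = np.array([[50.0, 0, 0, 64], [0, 50, 0, 64], [0, 0, 0, 1]])   # view along z
P_side = np.array([[0.0, 0, 50, 64], [0, 50, 0, 64], [0, 0, 0, 1]])    # view along x
hull_voxels = visual_hull([circle, circle], [P_front, P_side], grid)
print("voxels kept:", len(hull_voxels), "of", len(grid))
```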
{"title":"DreamWorld: CUDA-accelerated real-time 3D modeling system","authors":"Shujun Zhang, Cong Wang, X. Shao, Wei Wu","doi":"10.1109/VECIMS.2009.5068887","DOIUrl":"https://doi.org/10.1109/VECIMS.2009.5068887","url":null,"abstract":"3D modeling plays an important role in virtual reality interaction, immersive tele-presence and other applications. A CUDA-accelerated real-time 3D modeling system is presented in this paper. It captures multi-view images of objects and achieves real-time accurate visual hull reconstruction with texture mapping. The system includes off-line camera calibration and on-line visual hull modeling based on our DreamWorld hardware. The multi-camera based modeling process is composed of distributed image acquisition, silhouette extraction, data transmission, visual hull computation and rendering. Initially, the volumetric visual hull is computed through intersection test of voxels with each silhouette and then, a CUDA-based simplified exact marching cubes algorithm is put forward to get a polyhedral mesh model for texture mapping and rendering. Preliminary experimental results from both synthetic and real data show its accuracy, stability and real-time performance.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125992384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A crowd evacuation simulation model based on 2.5-dimension cellular automaton
DOI: 10.1109/VECIMS.2009.5068872
Li-jun Jiang, Jin-chang Chen, Wei-jie Zhan
A new idea for 2.5-dimensional cellular automaton models is presented. It shows that environmental altitude is an important factor that actively influences crowd evacuation models for stadiums. The research uses agent technology for the virtual agents' movement decisions during evacuation. It suggests using the attraction weight of each cellular-automaton cell, which captures the objective factors, as a key parameter for selecting an agent's evacuation path. Meanwhile, the agents' subjective factors, which also influence their evacuation, are taken into account. The agents' evacuation movement rules are established from these two factors. Finally, a practical crowd evacuation system with good visualization has been developed, and a case study is used to show that the 2.5D-CA model is practicable and reasonable for crowd evacuation simulation.
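A toy sketch of the attraction-weight idea on a 2.5-D grid: each cell's weight combines distance to the exit with an altitude penalty, and an agent steps to the best-weighted neighbouring cell. The grid size, slope and coefficients are illustrative assumptions, not the paper's model.

```python
# 2.5-D cellular-automaton evacuation sketch: cell attraction = -(distance + altitude penalty).
import numpy as np

H, W = 10, 10
altitude = np.tile(np.linspace(0, 3, W), (H, 1))          # a sloped floor supplies the "0.5" dimension
exit_cell = (0, 0)
yy, xx = np.mgrid[0:H, 0:W]
distance = np.abs(yy - exit_cell[0]) + np.abs(xx - exit_cell[1])
attraction = -(distance + 0.8 * altitude)                  # higher value = more attractive cell

def step(pos):
    """Move to the in-bounds neighbour (or stay) with the highest attraction weight."""
    y, x = pos
    neighbours = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if 0 <= y + dy < H and 0 <= x + dx < W]
    return max(neighbours, key=lambda c: attraction[c])

agent, path = (9, 9), [(9, 9)]
while agent != exit_cell:
    agent = step(agent)
    path.append(agent)
print("steps to exit:", len(path) - 1)
```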
{"title":"A crowd evacuation simulation model based on 2.5-dimension cellular automaton","authors":"Li-jun Jiang, Jin-chang Chen, Wei-jie Zhan","doi":"10.1109/VECIMS.2009.5068872","DOIUrl":"https://doi.org/10.1109/VECIMS.2009.5068872","url":null,"abstract":"A new idea of 2.5 dimensional cellular automaton models is provided. It discloses that the environmental altitude is an important factor which has an active function on the crowd evacuation model related to stadium. The research uses the agent technology in virtual agent's movement-decision-making for their evacuating action. It suggests using the attraction weighing value of cellular-automaton-grid as an important parameter, which considers the objective factor, for selecting agent's suitable evacuation path. Meanwhile, the agent's subjective factor, which would influence their evacuation, has also been considered. The agent's evacuating movement regulation is established by these two factors. And finally, a practical and well-visibility system for crowd evacuation has been developed; a related instance is used to prove that the 2.5D-CA-model is practicable and reasonable for crowd evacuating simulation.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133755441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and design of the early-warning satellite scheduling simulation system
DOI: 10.1109/VECIMS.2009.5068865
K. Luo, Yi-jun Li, Wei Jiang
Optimal scheduling of early-warning satellites is an effective way to improve their combat capability and performance. This paper first analyzes the early-warning satellite combat flow and establishes a Petri net model of it. Based on the particular combat pattern, the paper designs a simulation system for early-warning satellite scheduling; describes its architecture, functions, and data streams; and discusses its key technologies and implementation. The simulation system was realized through the combined application of MATLAB and STK.
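As a minimal illustration of Petri-net modelling of such a combat flow, the sketch below implements places, transitions and the token firing rule on a hypothetical detect-then-cue fragment; the net itself is an assumed example, not the paper's model.

```python
# Minimal Petri net: places hold tokens, a transition fires when all its input places have tokens.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)                      # place -> token count
        self.transitions = transitions                    # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        for p in inputs:
            self.marking[p] -= 1                          # consume one token per input place
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce one token per output place

# Hypothetical combat-flow fragment: detect a launch event, then cue a tracking task.
net = PetriNet(
    marking={"satellite_idle": 1, "launch_event": 1},
    transitions={
        "detect": (["satellite_idle", "launch_event"], ["detection_report"]),
        "cue_tracking": (["detection_report"], ["tracking_task", "satellite_idle"]),
    },
)
net.fire("detect")
net.fire("cue_tracking")
print(net.marking)
```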
{"title":"Analysis and design of the early-warning satellite scheduling simulation system","authors":"K. Luo, Yi-jun Li, Wei Jiang","doi":"10.1109/VECIMS.2009.5068865","DOIUrl":"https://doi.org/10.1109/VECIMS.2009.5068865","url":null,"abstract":"The early-warning satellite optimal scheduling is an effective way to improve its combat capability and performances. This paper analyzed first the early-warning satellite combat-flow and established a Petri Net model. Based on its particular combat pattern, this paper designed a simulation system of the early-warning satellite scheduling; described its architecture, function and data stream; discussed its key technologies and implementation. The simulation system was realized by the comprehensive application of MATLAB and STK.","PeriodicalId":177178,"journal":{"name":"IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116639977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}