Castle Game Engine (http://castle-engine.sourceforge.net/) is a modern, open-source game engine closely connected with the X3D standard. It uses X3D as its scene graph and as its main 3D and 2D interchange format. In this poster we highlight some of the engine's architectural advantages.
{"title":"Castle game engine: game engine using X3D as a scene graph","authors":"Michalis Kamburelis","doi":"10.1145/2775292.2778296","DOIUrl":"https://doi.org/10.1145/2775292.2778296","url":null,"abstract":"Castle Game Engine (http://castle-engine.sourceforge.net/) is a modern, open-source game engine closely connected with the X3D standard. It uses X3D as a scene graph, and also as it's main 3D and 2D interchange format. In this poster we would like to highlight some engine architectural advantages.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133530530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matlab is a powerful tool for computing high-fidelity engineering models and plotting the results in figures. Simulink turns Matlab .m source code into block diagrams and flow charts that execute the simulation. This project demonstrates how physics equations implemented in Simulink can animate X3D or VRML models, along with methods for converting the Matlab .fig format into an X3D object so that it can be used in Web-based animations.
{"title":"Matlab and simulink creation and animation of X3D in web-based simulation","authors":"Yuan Pin Cheng, D. Brutzman","doi":"10.1145/2775292.2778306","DOIUrl":"https://doi.org/10.1145/2775292.2778306","url":null,"abstract":"Matlab is a powerful tool to compute high-fidelity engineering model and plot the result in figures. Simulink implements Matlab .m source code into block diagrams and flow charts to execute the simulation. This project demonstrates how physics equations implemented in Simulink can animate X3D or VRML models, along with the methods to convert Matlab .fig format into an X3D object so we can apply it into Web-based animations.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130048951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present new implementations of important X3D nodes which enable a large class of geospatial applications in standard web browsers. We have chosen the freely available X3DOM code base as an implementation framework since it provides a very functional set of X3D nodes along with a broad selection of support functionality. In our implementations of the GeoOrigin, GeoLocation, GeoViewpoint and GeoPositionInterpolator nodes, we fully conform to the ISO specification and use well-known example scenes as references for correctness. While GeoOrigin is deprecated in version 3.3 of the specification, we demonstrate that limited precision in the WebGL rendering pipeline still makes its use desirable, at least until alternative solutions are formalized and coded. The GeoLocation and GeoViewpoint nodes require specific alignments of coordinate systems, which we document in detail. In addition, GeoViewpoint provides control over navigation speed, which conceptually conflicts with user speed control; we resolve this conflict by using relative speed and also make this control optional. Somewhat terse language in the GeoPositionInterpolator specification required clarification of its existing usage and inspired an option for coordinate interpolation along great circles, which is often the expected interpolation path in global scenes. Finally, all functionality was integrated into current, stable releases of the X3DOM distribution available from www.x3dom.org.
{"title":"The X3D geospatial component: X3DOM implementation of GeoOrigin, GeoLocation, GeoViewpoint, and GeoPositionInterpolator nodes","authors":"A. Plesch, M. McCann","doi":"10.1145/2775292.2775315","DOIUrl":"https://doi.org/10.1145/2775292.2775315","url":null,"abstract":"We present new implementations of important X3D nodes which enable a large class of geospatial applications in standard web browsers. We have chosen the freely available X3DOM code base as an implementation framework since it provides a very functional set of X3D nodes along with a broad selection of support functionality. In our implementations of the GeoOrigin, GeoLocation, GeoViewpoint and GeoPositionInterpolator nodes, we fully conform to the ISO specification and use well known example scenes as references for correctness. While GeoOrigin is deprecated in version 3.3 of the specification, we demonstrate that limited precision in the WebGL rendering pipeline still makes its use desirable at least until alternative solutions are formalized and coded. GeoLocation and GeoViewpoint nodes require specific alignments of coordinate systems which we document in detail. In addition, GeoViewpoint has the property to control navigation speed which conceptually conflicts with user speed control. We resolve this conflict by using relative speed and also make this control optional. Somewhat terse language in the GeoPositionInterpolator specification required clarification of its existing usage and inspired an option for coordinate interpolation along great circles which is often the expected interpolation path in global scenes. Finally, all functionality was integrated into current, stable releases of the X3DOM distribution available from www.x3dom.org.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116707610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a framework for creating realistic virtual characters that can be delivered via the Internet and interactively controlled in a WebGL-enabled web browser. Four-dimensional performance capture is used to capture realistic human motion and appearance. The captured data is processed into efficient and compact representations for geometry and texture. Motions are analysed against a high-level, user-defined motion graph and suitable inter- and intra-motion transitions are identified. This processed data is stored on a web server and downloaded by a client application when required. A JavaScript-based character animation engine manages the state of the character, responding to user input and sending the required frames to a WebGL-based renderer for display. Through the efficient geometry, texture and motion-graph representations, a game character capable of performing a range of motions can be represented in 40--50 MB of data. This highlights the potential of four-dimensional performance capture for creating web-based content. Datasets are made available for further research and an online demo is provided.
{"title":"Online interactive 4D character animation","authors":"M. Volino, Peng Huang, A. Hilton","doi":"10.1145/2775292.2775297","DOIUrl":"https://doi.org/10.1145/2775292.2775297","url":null,"abstract":"This paper presents a framework for creating realistic virtual characters that can be delivered via the Internet and interactively controlled in a WebGL enabled web-browser. Four-dimensional performance capture is used to capture realistic human motion and appearance. The captured data is processed into efficient and compact representations for geometry and texture. Motions are analysed against a high-level, user-defined motion graph and suitable inter- and intra-motion transitions are identified. This processed data is stored on a webserver and downloaded by a client application when required. A Javascript-based character animation engine is used to manage the state of the character which responds to user input and sends required frames to a WebGL-based renderer for display. Through the efficient geometry, texture and motion graph representations, a game character capable of performing a range of motions can be represented in 40--50 MB of data. This highlights the potential use of four-dimensional performance capture for creating web-based content. Datasets are made available for further research and an online demo is provided.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125958854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 1997 the National Institute of Standards and Technology (NIST) embarked on a huge project to replace one of the most cited resources for mathematical, physical and engineering scientists, the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables [Abramowitz and Stegun 1964], originally released by the National Bureau of Standards (NBS) in 1964. The 1997 project, designed to update and modernize the handbook, culminated in May 2010 with the launch of a freely available website, the NIST Digital Library of Mathematical Functions [DLMF] (http://dlmf.nist.gov/), and its print companion, the NIST Handbook of Mathematical Functions [Olver et al. 2010]. While graphics were sparse in the original handbook, the new resource contains more than 600 illustrations of high-level mathematical functions, including close to 200 interactive 3D visualizations on the website. We provide the motivation for the visualization work through the context of the project and discuss our current implementation using X3DOM and WebGL.
{"title":"Dynamic 3D visualizations of complex function surfaces using X3DOM and WebGL","authors":"B. Saunders, Brian Antonishek, Qiming Wang, B. Miller","doi":"10.1145/2775292.2777140","DOIUrl":"https://doi.org/10.1145/2775292.2777140","url":null,"abstract":"In 1997 the National Institute of Standards and Technology (NIST) embarked on a huge project to replace one of the most cited resources for mathematical, physical and engineering scientists, the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables [Abramowitz and Stegun 1964], originally released by the National Bureau of Standards (NBS) in 1964. The 1997 project, designed to update and modernize the handbook, culminated in May 2010 with the launch of a freely available website, the NIST Digital Library of Mathematical Functions [DLMF] (http://dlmf.nist.gov/), and its print companion, the NIST Handbook of Mathematical Functions [Olver et al. 2010]. While the presence of graphics was sparse in the original handbook, the new resource contains more than 600 illustrations of high level mathematical functions, including close to 200 interactive 3D visualizations on the website. We provide the motivation for the visualization work through the context of the project and discuss our current implementation using X3DOM and WebGL.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124099234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel technique for modeling and rendering a 3D point cloud, obtained from a set of photographs of a real 3D scene, as a set of textured elliptical splats. We first obtain the base splat model by calculating, for each point of the cloud, an ellipse that locally approximates the underlying surface. We then refine the base model by removing redundant splats to minimize overlaps, and by merging splats covering flat regions of the point cloud into larger ellipses. We then apply a multi-texturing process to generate a single texture atlas from the set of photographs, blending information from multiple cameras for every splat. Finally, we render this multi-textured, splat-based 3D model with an efficient implementation of OpenGL ES 2.0 vertex and fragment shaders, which guarantees fluid display on handheld devices.
{"title":"Textured splat-based point clouds for rendering in handheld devices","authors":"Sergio García, R. Pagés, Daniel Berjón, F. Morán","doi":"10.1145/2775292.2782779","DOIUrl":"https://doi.org/10.1145/2775292.2782779","url":null,"abstract":"We propose a novel technique for modeling and rendering a 3D point cloud obtained from a set of photographs of a real 3D scene as a set of textured elliptical splats. We first obtain the base splat model by calculating, for each point of the cloud, an ellipse approximating locally the underlying surface. We then refine the base model by removing redundant splats to minimize overlaps, and merging splats covering flat regions of the point cloud into larger ellipses. We later apply a multi-texturing process to generate a single texture atlas from the set of photographs, by blending information from multiple cameras for every splat. Finally, we render this multi-textured, splat-based 3D model with an efficient implementation of OpenGL ES 2.0 vertex and fragment shaders which guarantees its fluid display on handheld devices.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115726360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and rendering of large crowds place heavy demands on computational resources, and until recently it was inconceivable to perform them in a web browser. However, with the increasing capacity of GPUs and the maturation of web front-end development, can a web-based simulation of massive crowds be achieved in real time in today's web browsers? In this work we present the implementation of a minimal visualization tool for crowd simulation results, able to render thousands of animated agents in real time using WebGL. We also briefly present some current challenges of performing crowd simulations in a web environment.
{"title":"Crowd simulation rendering for web","authors":"Daniel P. Savoy, M. Cabral, M. Zuffo","doi":"10.1145/2775292.2778302","DOIUrl":"https://doi.org/10.1145/2775292.2778302","url":null,"abstract":"Simulation and rendering of large crowds are very demanding tasks on computational resources and until recently were inconceivable to be performed by a web browser. However, with the increasing capacity of GPUs and the maturation of web front-end development, could a web-based simulation of massive crowds be achieved in real-time in today's web-browsers? In this work we present the implementation of a minimal visualization tool for crowd simulation results, with the ability of rendering thousands of animated agents in real-time using WebGL. We also briefly present some current challenges of accomplishing crowd simulations in a web environment.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126647319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural history collections are an invaluable resource housing a wealth of knowledge, with a long tradition of contributing to a wide range of fields such as taxonomy, quarantine, conservation and climate change. It is recognized, however [Smith and Blagoderov 2012], that such physical collections are often heavily underutilized as a result of practical issues of accessibility. The digitization of these collections is a step towards removing these access issues, but other hurdles must be addressed before we truly unlock the potential of this knowledge.
{"title":"Towards web-based semantic enrichment of 3D insects","authors":"Stuart Anderson, Matt Adcock, B. Mantle, J. Salle, Chuong V. Nguyen, David R. Lovell","doi":"10.1145/2775292.2778305","DOIUrl":"https://doi.org/10.1145/2775292.2778305","url":null,"abstract":"Natural history collections are an invaluable resource housing a wealth of knowledge with a long tradition of contributing to a wide range of fields such as taxonomy, quarantine, conservation and climate change. It is recognized however [Smith and Blagoderov 2012] that such physical collections are often heavily underutilized as a result of the practical issues of accessibility. The digitization of these collections is a step towards removing these access issues, but other hurdles must be addressed before we truly unlock the potential of this knowledge.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124362889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of 3D Web technologies, 3D objects are now handled as embedded objects without plug-ins on web pages. Although declarative 3D objects are physically integrated into web pages, 3D objects and HTML elements are still separated from the perspective of the 3D layout context, and an annotation method is lacking. Thus it is scarcely possible to add meaningful annotations related to target 3D objects using existing web resources. In addition, people often lose the relationship between the target and related annotation objects in a 3D context due to the separation of the content layouts in different 3D contexts. In this paper, we propose a webizing method for annotating user experiences with 3D objects in a 3D Web environment. The relationship between the 3D target object and the annotation object is declared by means of web annotations and these related objects are rendered with a common 3D layout context and a camera perspective. We present typical cases of 3D scenes with web annotations on the 3D Web using a prototype implementation system to verify the usefulness of our approach.
{"title":"Webized 3D experience by HTML5 annotation in 3D web","authors":"Daeil Seo, Byounghyun Yoo, H. Ko","doi":"10.1145/2775292.2775301","DOIUrl":"https://doi.org/10.1145/2775292.2775301","url":null,"abstract":"With the development of 3D Web technologies, 3D objects are now handled as embedded objects without plug-ins on web pages. Although declarative 3D objects are physically integrated into web pages, 3D objects and HTML elements are still separated from the perspective of the 3D layout context, and an annotation method is lacking. Thus it is scarcely possible to add meaningful annotations related to target 3D objects using existing web resources. In addition, people often lose the relationship between the target and related annotation objects in a 3D context due to the separation of the content layouts in different 3D contexts. In this paper, we propose a webizing method for annotating user experiences with 3D objects in a 3D Web environment. The relationship between the 3D target object and the annotation object is declared by means of web annotations and these related objects are rendered with a common 3D layout context and a camera perspective. We present typical cases of 3D scenes with web annotations on the 3D Web using a prototype implementation system to verify the usefulness of our approach.","PeriodicalId":105857,"journal":{"name":"Proceedings of the 20th International Conference on 3D Web Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130112173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}