
ACM SIGGRAPH 2018 Studio — Latest Publications

Design method of digitally fabricated spring glass pen
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214825
Kengo Tanaka, Kohei Ogawa, Tatsuya Minagawa, Yoichi Ochiai
In this study, we propose a method for developing a spring glass dip pen with a 3D printer and for reproducing different types of writing feel. There have been several studies on pens that change the feel of writing. For example, EV-Pen [Wang et al. 2016] and haptic pens [Lee et al. 2004] change the feel of writing using vibration. In contrast, our proposed method does not rely on vibration to reproduce the tactile sensation of softness.
Citations: 0
Lightform: procedural effects for projected AR
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214823
Brittany Factura, L. LaPerche, Phil Reyneri, Brett R. Jones, Kevin Karsch
Projected augmented reality, also called projection mapping or video mapping, is a form of augmented reality that uses projected light to directly augment 3D surfaces, as opposed to using pass-through screens or headsets. The value of projected AR is its ability to add a layer of digital content directly onto physical objects or environments in a way that can be instantaneously viewed by multiple people, unencumbered by a screen or additional setup.
Citations: 4
Building a feedback loop between electrical stimulation and percussion learning
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214824
Ayaka Ebisu, Satoshi Hashizume, Yoichi Ochiai
A sense of rhythm is essential for playing instruments, yet many beginners learning a musical instrument struggle with rhythm. We previously proposed "Stimulated Percussions," a musical-instrument performance system based on electrical muscle stimulation (EMS). In this study, we apply it to rhythm learning. Through muscle movements triggered by EMS, users learn which arm or leg to move and at what timing. Beyond small percussion instruments such as castanets, users can play drum rhythm patterns that require the simultaneous movement of their limbs.
Citations: 12
Paperprinting
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214830
Wataru Date, Y. Kakehi
In this research, we propose a system that makes paper through an additive manufacturing process, using a dispenser mounted on an XY plotter. With our system, graphic designers can design and output the paper itself, which is difficult in existing paper-production processes. We designed and implemented a machine for fabricating paper and created several output examples. At SIGGRAPH, we will hold a workshop where participants design their own original paper using our machines.
Citations: 0
Raymarching toolkit for unity: a highly interactive unity toolkit for constructing signed distance fields visually
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214828
Kevin Watters, Fernando F. Ramallo
Raymarching signed distance fields is a technique used by graphics experts and demoscene enthusiasts to construct scenes with features that are unusual in traditional polygonal workflows: blended shapes, kaleidoscopic patterns, reflections, and infinite fractal detail all become possible, in compact representations that live mostly on the graphics card. Until now, such scenes had to be constructed in shaders by hand. The Raymarching Toolkit for Unity is an extension that combines Unity's highly visual scene editor with the power of raymarched visuals by automatically generating the raymarching shader, live, for the scene an artist is creating.
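The core idea behind raymarching signed distance fields can be sketched minimally as follows. This is not code from the toolkit itself (which generates GPU shaders); it is an illustrative CPU-side sphere-tracing loop over a single-sphere scene, with all names chosen for this example:

```python
import math

# Scene: a signed distance field (SDF) for a sphere at the origin.
# Negative inside, positive outside, zero on the surface.
def sphere_sdf(p, radius=1.0):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value, which is
    a safe lower bound on the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray to the surface
        t += d                # largest step guaranteed not to overshoot
        if t > max_dist:
            break
    return None               # miss

# Ray from z = -3 straight toward the unit sphere; hits at t = 2.
hit = raymarch([0.0, 0.0, -3.0], [0.0, 0.0, 1.0], sphere_sdf)
```

Because the scene is just a distance function, blending or repeating shapes only means composing functions (e.g. `min` of two SDFs for union), which is what makes the representation so compact compared to polygon meshes.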
Citations: 0
Design engine community project: generate quick adhoc inventions to explore at SIGGRAPH and in the studio
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214829
M. Griffin, Lizabeth Arum
Since its release, "The Design Engine" has been played by groups of students, teachers, and individuals looking to spark self-guided training. "The Design Engine" is a direct response to educators' requests for better classroom tools around inspiration and 3D printing. By prompting participants to create their own original, imaginative works, instead of using pre-selected examples, teachers can keep students better motivated as they master desktop 3D printing. We are hosting a brand-new SIGGRAPH edition of "The Design Engine," a constantly evolving series of challenges hosted within the Studio. Participants of all backgrounds can join for a short startup round, or stick around to design and develop their projects using the tools available in the SIGGRAPH Studio Workshop.
Citations: 0
ACM SIGGRAPH 2018 Studio
Pub Date : 2018-08-12 DOI: 10.1145/3214822
Citations: 0
Immersive previz: VR authoring for film previsualisation
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214831
Quentin Galvane, I-Sheng Lin, M. Christie, Tsai-Yen Li
Creatives in animated and live-action movie productions have been exploring new modalities for visually designing filmic sequences before realizing them in studios, through techniques such as hand-drawn storyboards, physical mockups, and, more recently, virtual 3D environments. A central issue in using virtual 3D environments is the complexity of content-creation tools for non-technical film creatives. To overcome this issue, we present One Man Movie, a VR authoring system that enables the crafting of filmic sequences with no prior knowledge of 3D animation. The system is designed to reflect the traditional creative process of film pre-production through stages such as (i) scene layout, (ii) character animation, (iii) camera placement and control, and (iv) montage of the filmic sequence, while enabling a fully novel and seamless back-and-forth between all stages of the process thanks to real-time engines. This research tool has been designed and evaluated with students and experts from film schools, and should therefore be of significant interest to SIGGRAPH participants.
Citations: 3
Real-time motion generation for imaginary creatures using hierarchical reinforcement learning
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214826
Keisuke Ogaki, Masayoshi Nakamura
Describing the motions of imaginary original creatures is an essential part of animations and computer games. One approach to generate such motions involves finding an optimal motion for approaching a goal by using the creatures' body and motor skills. Currently, researchers are employing deep reinforcement learning (DeepRL) to find such optimal motions. Some end-to-end DeepRL approaches learn the policy function, which outputs target pose for each joint according to the environment. In our study, we employed a hierarchical approach with a separate DeepRL decision maker and simple exploration-based sequence maker, and an action token, through which these two layers can communicate. By optimizing these two functions independently, we can achieve a light, fast-learning system available on mobile devices. In addition, we propose another technique to learn the policy at a faster pace with the help of a heuristic rule. By treating the heuristic rule as an additional action token, we can naturally incorporate it via Q-learning. The experimental results show that creatures can achieve better performance with the use of both heuristics and DeepRL than by using them independently.
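The idea of exposing a heuristic rule as one extra action token in Q-learning can be sketched with a tabular toy example. The chain environment, reward, and heuristic below are illustrative assumptions for this sketch, not the authors' system:

```python
import random

random.seed(0)  # reproducible toy run

N_STATES = 5                               # chain: goal is at state 4
ACTIONS = ["left", "right", "heuristic"]   # "heuristic" is the extra action token

def heuristic_rule(state):
    # Hand-written rule: always head toward the goal.
    return "right"

def step(state, action):
    if action == "heuristic":              # token delegates to the heuristic
        action = heuristic_rule(state)
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1):
    # Standard tabular Q-learning; the heuristic competes with the
    # primitive actions on equal footing and needs no special handling.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (random.choice(ACTIONS) if random.random() < epsilon
                 else max(ACTIONS, key=lambda act: q[(s, act)]))
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
```

Because the heuristic token and the "right" action induce the same transitions here, their Q-values converge together, and the learner is free to fall back on primitive actions wherever the heuristic would be suboptimal.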
Citations: 0
IMVERSE livemaker: create a 3D model from a single 2D photo inside VR
Pub Date : 2018-08-12 DOI: 10.1145/3214822.3214832
R. Mange, Kepa Iturrioz Zabala
Citations: 0