Latest Publications in IEEE Transactions on Visualization and Computer Graphics

Immersive Telepresence and Remote Collaboration using Mobile and Wearable Devices.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898737
Jacob Young, Tobias Langlotz, Matthew Cook, Steven Mills, Holger Regenbrecht

The mobility and ubiquity of mobile head-mounted displays make them a promising platform for telepresence research, as they allow for spontaneous and remote use cases not possible with stationary hardware. In this work we present a system that provides immersive telepresence and remote collaboration on mobile and wearable devices by building a live spherical panoramic representation of a user's environment that can be viewed in real time by a remote user, who can independently choose the viewing direction. The remote user can then interact with this environment as if they were actually there through intuitive gesture-based interaction. Each user can obtain independent views within this environment by rotating their device, and their current field of view is shared to allow for simple coordination of viewpoints. We present several different approaches to create this shared live environment and discuss their implementation details, individual challenges, and performance on modern mobile hardware; by doing so we provide key insights into the design and implementation of next-generation mobile telepresence systems, guiding future research in this domain. The results of a preliminary user study confirm the ability of our system to induce the desired sense of presence in its users.
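The paper's implementation is not reproduced here, but the core viewing mechanic, each user rotating their device to obtain an independent view into the shared spherical panorama, can be illustrated with a short sketch. The code below samples a perspective viewport from an equirectangular panorama given a yaw/pitch reading; all names, parameters, and the nearest-neighbour sampling are our own assumptions rather than the authors' code.

```python
import numpy as np

def viewport_from_orientation(pano, yaw_deg, pitch_deg,
                              fov_deg=90.0, out_w=640, out_h=480):
    """Sample a perspective viewport from an equirectangular panorama.

    pano:      H x W x 3 array holding the shared spherical panorama.
    yaw_deg:   viewing direction around the vertical axis (e.g., from the IMU).
    pitch_deg: viewing elevation (e.g., from the IMU).
    """
    h, w, _ = pano.shape
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length
    # Pixel grid of the output viewport, centred on the optical axis.
    xv, yv = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    dirs = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays by pitch (around x), then yaw (around y).
    p, y_ = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(y_), 0, np.sin(y_)],
                   [0, 1, 0],
                   [-np.sin(y_), 0, np.cos(y_)]])
    dirs = dirs @ (Ry @ Rx).T
    # Ray directions -> spherical -> equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])    # range [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))   # range [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return pano[v, u]
```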

Citations: 38
Audio-Material Reconstruction for Virtualized Reality Using a Probabilistic Damping Model.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898822
Auston Sterling, Nicholas Rewkowski, Roberta L Klatzky, Ming C Lin

Modal sound synthesis has been used to create realistic sounds from rigid-body objects, but it requires accurate real-world material parameters. These material parameters can be estimated from recorded sounds of an impacted object, but external factors can interfere with accurate parameter estimation. We present a novel technique that probabilistically models these external factors while estimating the damping parameters of materials from recorded impact sounds. We represent the combined effects of material damping, support damping, and sampling inaccuracies with a probabilistic generative model, then use maximum likelihood estimation to fit a damping model to the recorded data. This technique greatly reduces the human effort needed and does not require the precise object geometry or the exact hit location. We validate the effectiveness of this technique with a comprehensive analysis of a synthetic dataset and a perceptual study on object identification. We also present a study establishing human performance on the same parameter-estimation task for comparison.
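The estimation step can be made concrete under the commonly used Rayleigh damping model, in which a mode at frequency f decays at rate d(f) = (alpha + beta * (2*pi*f)^2) / 2; with an assumed i.i.d. Gaussian noise model on the observed decay rates, maximum likelihood reduces to least squares. The sketch below fits (alpha, beta) to per-mode observations; it deliberately omits the paper's treatment of support damping and sampling inaccuracies, and every name in it is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def fit_rayleigh_damping(freqs_hz, damping_obs, sigma=1.0):
    """Maximum-likelihood fit of Rayleigh damping parameters (alpha, beta).

    Under Rayleigh damping, a mode at angular frequency w = 2*pi*f decays
    at rate d(f) = 0.5 * (alpha + beta * w**2). With i.i.d. Gaussian noise
    of scale `sigma` on the observed rates, the negative log-likelihood is
    (up to a constant) a sum of squared residuals.
    """
    w2 = (2.0 * np.pi * np.asarray(freqs_hz)) ** 2
    d_obs = np.asarray(damping_obs, dtype=float)

    def neg_log_likelihood(params):
        alpha, beta = params
        d_pred = 0.5 * (alpha + beta * w2)
        return np.sum((d_obs - d_pred) ** 2) / (2.0 * sigma ** 2)

    res = minimize(neg_log_likelihood, x0=[1.0, 1e-8],
                   bounds=[(0.0, None), (0.0, None)], method="L-BFGS-B")
    return res.x  # estimated (alpha, beta)

# Example with made-up mode frequencies and measured decay rates:
# alpha, beta = fit_rayleigh_damping([220.0, 440.0, 880.0], [8.1, 9.7, 14.2])
```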

Citations: 11
Adaptive Sampling for Sound Propagation.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898765
Chakravarty R Alla Chaitanya, John M Snyder, Keith Godin, Derek Nowrouzezahrai, Nikunj Raghuvanshi

Precomputed sound propagation samples acoustics at discrete scene probe positions to support dynamic listener locations. An offline 3D numerical simulation is performed at each probe, and the resulting field is encoded for runtime rendering with dynamic sources. Prior work places probes on a uniform grid, requiring high density to resolve narrow spaces. Our adaptive sampling approach varies probe density based on a novel "local diameter" measure of the space surrounding a given point, evaluated by stochastically tracing paths in the scene. We apply this measure to lay out probes so as to smoothly adapt resolution and eliminate undersampling in corners, narrow corridors, and stairways, while coarsening appropriately in more open areas. Coupled with a new runtime interpolator based on radial weights over geodesic paths, we achieve smooth acoustic effects that respect scene boundaries as either the source or the listener moves, unlike existing visibility-based solutions. We consistently demonstrate quality improvement over prior work at fixed cost.
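The exact estimator behind the "local diameter" measure is the paper's contribution and is not reproduced here, but the stochastic idea can be sketched: cast random direction pairs from a candidate probe point, sum the two hit distances into a chord length through the free space, and reduce the chords with a robust statistic. The ray-tracing callback and the percentile choice below are illustrative assumptions.

```python
import numpy as np

def local_diameter(point, ray_hit_distance, n_rays=64, rng=None):
    """Monte-Carlo estimate of the free-space diameter around a probe point.

    ray_hit_distance(origin, direction) is assumed to trace a ray through
    the scene and return the distance to the first surface hit.
    """
    if rng is None:
        rng = np.random.default_rng()
    chords = []
    for _ in range(n_rays):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)        # uniform random direction on the sphere
        # Opposite rays together give a full chord through the space.
        chords.append(ray_hit_distance(point, d) + ray_hit_distance(point, -d))
    # A low percentile biases the estimate toward narrow passages, where
    # higher probe density is needed most.
    return np.percentile(chords, 25)
```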

Citations: 9
You or Me? Personality Traits Predict Sacrificial Decisions in an Accident Situation.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-25 DOI: 10.1109/TVCG.2019.2899227
Ju Uijong, June Kang, Christian Wallraven

Emergency situations during car driving sometimes force the driver to make a sudden decision. Predicting these decisions has important applications in updating risk analyses for insurance, but can also give insights for drafting autonomous vehicle guidelines. Studying such behavior in experimental settings, however, is limited by ethical issues, as it would endanger people's lives. Here, we employed the potential of virtual reality (VR) to investigate decision-making in an extreme situation in which participants would have to sacrifice others in order to save themselves. In a VR driving simulation, participants first trained to complete a difficult course with multiple crossroads in which a wrong turn would lead the car to fall down a cliff. In the testing phase, obstacles suddenly appeared on the "safe" turn of a crossroad: for the control group, the obstacles were trees, whereas for the experimental group, they were pedestrians. In both groups, drivers had to decide between falling down the cliff or colliding with the obstacles. Results showed that differences in personality traits were able to predict this decision: in the experimental group, drivers who collided with the pedestrians had significantly higher psychopathy and impulsivity traits, whereas impulsivity alone was to some degree predictive in the control group. Other factors such as heart rate differences, gender, video game expertise, and driving experience were not predictive of the emergency decision in either group. Our results show that self-interest-related personality traits affect decision-making when choosing between preservation of self or others in extreme situations, and showcase the potential of virtual reality for studying and modeling human decision-making.

Citations: 8
Functional Workspace Optimization via Learning Personal Preferences from Virtual Experiences.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898721
Wei Liang, Jingjing Liu, Yining Lang, Bing Ning, Lap-Fai Yu

The functionality of a workspace is one of the most important considerations in both virtual world design and interior design. To offer appropriate functionality to the user, designers usually take general rules into account, e.g., typical workflow and the average stature of users, summarized from population statistics. Yet such general rules cannot reflect the personal preferences of a single individual, which vary from person to person. In this paper, we optimize a functional workspace according to the personal preferences of the specific individual who will use it. We propose an approach to learn the individual's personal preferences from their activities while using a virtual version of the workspace through virtual reality devices. We then construct a cost function that incorporates personal preferences, spatial constraints, pose assessments, and the visual field. Finally, the cost function is optimized to achieve an optimal layout. To evaluate the approach, we experimented with different settings. The results of a user study show that workspaces updated in this way better fit their users.
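The abstract does not name the optimizer, so the sketch below stands in with simulated annealing, a common choice for layout cost functions of this kind; `cost` and `propose` are hypothetical callbacks representing the weighted preference, constraint, pose, and visual-field terms and a random layout perturbation (e.g., moving or rotating one piece of furniture).

```python
import math
import random

def optimize_layout(init_layout, cost, propose,
                    n_iters=5000, t0=1.0, cooling=0.999):
    """Minimise a workspace-layout cost function by simulated annealing."""
    layout, c = init_layout, cost(init_layout)
    t = t0
    for _ in range(n_iters):
        candidate = propose(layout)
        cc = cost(candidate)
        # Always accept improvements; accept regressions with a probability
        # that shrinks as the temperature cools, to escape local minima.
        if cc < c or random.random() < math.exp((c - cc) / max(t, 1e-9)):
            layout, c = candidate, cc
        t *= cooling
    return layout, c
```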

Citations: 19
Modulating Fine Roughness Perception of Vibrotactile Textured Surface using Pseudo-haptic Effect.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898820
Yusuke Ujitoko, Yuki Ban, Koichi Hirota

Playing back vibrotactile signals through actuators is commonly used to simulate the tactile feel of virtual textured surfaces. However, there is often a small mismatch between the simulated tactile sensations and those intended by tactile designers. Thus, a method of modulating vibrotactile perception is required. We focus on fine roughness perception and propose a method that uses a pseudo-haptic effect to modulate the fine roughness perception of vibrotactile texture. Specifically, we slightly modify the pointer's position on the screen, which indicates the touch position on the textured surface. We hypothesized that if users receive vibrational feedback while watching the pointer visually oscillate back/forth and left/right, they would perceive the vibrotactile surfaces as more uneven. We also hypothesized that as the size of the visual oscillation grows, the modulation of perceived roughness would grow as well. We conducted user studies to test these hypotheses. Results of the first user study suggested that, with high probability, users felt the vibrotactile texture as rougher with our method than without it. Results of the second user study suggested that users perceived different roughness for the vibrotactile texture depending on the size of the visual oscillation. These results confirmed our hypotheses and suggested that our method is effective. The same effect could potentially be applied to the visual movement of virtual hands or fingertips when users interact with virtual surfaces using their hands.
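As a minimal sketch of the visual manipulation (the actual waveform, amplitude, and any coupling to the vibrotactile signal are our assumptions, not the paper's specification), the rendered pointer can be offset from the true touch position by a small two-axis oscillation whose amplitude controls the strength of the pseudo-haptic effect:

```python
import math

def displayed_pointer(true_xy, t, amplitude_px=2.0, freq_hz=15.0):
    """Return a slightly oscillated pointer position to draw on screen.

    true_xy: actual touch position on the textured surface, in pixels.
    t:       current time in seconds.
    Per the paper's hypothesis, a larger `amplitude_px` should increase
    the perceived roughness of the vibrotactile texture.
    """
    x, y = true_xy
    phase = 2.0 * math.pi * freq_hz * t
    dx = amplitude_px * math.sin(phase)                  # left/right
    dy = amplitude_px * math.sin(phase + math.pi / 2)    # back/forth
    return x + dx, y + dy
```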

Citations: 15
Manufacturing Application-Driven Foveated Near-Eye Displays.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898781
Kaan Aksit, Praneeth Chakravarthula, Kishore Rathinavel, Youngmo Jeong, Rachel Albert, Henry Fuchs, David Luebke

Traditional optical manufacturing poses a great challenge to near-eye display designers due to long lead times, on the order of multiple weeks, limiting the ability of optical designers to iterate quickly and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a one-day lead time using commodity hardware. Our manufacturing pipeline comprises several innovations: a rapid production technique that improves the surface of a 3D-printed component to an optical quality suitable for near-eye display applications; a computational design methodology that uses machine learning and ray tracing to create freeform static projection screen surfaces for near-eye displays, capable of representing arbitrary focal surfaces; and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware design candidate. We have demonstrated untethered augmented reality near-eye display prototypes to assess the success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30°×55°), and a resolution of 12 cycles per degree can be achieved.
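For intuition on the foveated pixel distribution: visual acuity falls off steeply with eccentricity, so full resolution is only needed where the fovea looks. A minimal sketch of that falloff, using the common acuity model a(e) = a0 * e2 / (e2 + e) with illustrative constants (the foveal value matched to the 12 cycles per degree reported above; e2 is a typical human-vision constant, not a value from the paper's lens design):

```python
def target_resolution_cpd(eccentricity_deg, fovea_cpd=12.0, e2_deg=2.3):
    """Target angular resolution (cycles/degree) at a given eccentricity.

    Acuity falloff model: a(e) = a0 * e2 / (e2 + e). The constants are
    illustrative assumptions, not taken from the paper.
    """
    return fovea_cpd * e2_deg / (e2_deg + eccentricity_deg)

# Example: at 20 degrees eccentricity the target drops to ~1.2 cycles/degree.
```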

Citations: 47
A Perception-driven Hybrid Decomposition for Multi-layer Accommodative Displays.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-18 DOI: 10.1109/TVCG.2019.2898821
Hyeonseung Yu, Mojtaba Bemana, Marek Wernikowski, Michal Chwesiuk, Okan Tarhan Tursun, Gurprit Singh, Karol Myszkowski, Radoslaw Mantiuk, Hans-Peter Seidel, Piotr Didyk

Multi-focal-plane and multi-layered light-field displays are promising solutions for addressing all visual cues observed in the real world. Unfortunately, these devices usually require expensive optimizations to compute a suitable decomposition of the input light field or focal stack to drive the individual display layers. Although these methods provide near-correct image reconstruction, their significant computational cost prevents real-time applications. A simple alternative is a linear blending strategy that decomposes a single 2D image using depth information. This method provides real-time performance, but it generates inaccurate results at occlusion boundaries and on glossy surfaces. This paper proposes a perception-based hybrid decomposition technique that combines the advantages of the above strategies and achieves both real-time performance and high-fidelity results. The fundamental idea is to apply the expensive optimizations only in regions where they are perceptually superior, e.g., at depth discontinuities in the fovea, and to fall back to less costly linear blending otherwise. We present a complete, perception-informed analysis and model that locally determine which of the two strategies should be applied. This prediction is then utilized by our new synthesis method, which performs the image decomposition. The results are analyzed and validated in user experiments on a custom multi-plane display.
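The low-cost fallback is standard enough to sketch: each pixel's intensity is split between the two focal planes that bracket it, linearly in dioptric (1/m) distance, so a pixel lying exactly on a plane lands entirely on that plane. A two-plane version follows; the function name and API are our assumptions, not the paper's code.

```python
import numpy as np

def linear_blend_weights(depth_m, near_plane_m, far_plane_m):
    """Per-pixel near-plane weights for linear (depth-based) blending.

    A pixel at scene depth z gets weight w on the near focal plane and
    1 - w on the far one, linear in dioptric distance 1/z.
    """
    d = 1.0 / np.asarray(depth_m, dtype=float)        # scene depth, diopters
    d_near, d_far = 1.0 / near_plane_m, 1.0 / far_plane_m
    w_near = (d - d_far) / (d_near - d_far)
    return np.clip(w_near, 0.0, 1.0)                  # far weight = 1 - w_near
```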

Citations: 11
MegaParallax: Casual 360° Panoramas with Motion Parallax.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 Epub Date: 2019-02-25 DOI: 10.1109/TVCG.2019.2898799
Tobias Bertel, Neill D F Campbell, Christian Richardt

The ubiquity of smart mobile devices, such as phones and tablets, enables users to casually capture 360° panoramas with a single camera sweep, to share and relive experiences. However, panoramas lack motion parallax, as they do not provide different views for different viewpoints. The motion parallax induced by translational head motion is a crucial depth cue in daily life. Alternatives, such as omnidirectional stereo panoramas, provide different views for each eye (binocular disparity), but they also lack motion parallax, as the left- and right-eye panoramas are stitched statically. Methods based on explicit scene geometry reconstruct textured 3D geometry, which provides motion parallax but suffers from visible reconstruction artefacts. The core of our method is a novel multi-perspective panorama representation, which can be casually captured and rendered with motion parallax for each eye on the fly. This provides a more realistic perception of panoramic environments, which is particularly useful for virtual reality applications. Our approach uses a single consumer video camera to acquire 200-400 views of a real 360° environment with a single sweep. Using novel-view synthesis with flow-based blending, we show how to turn these input views into an enriched 360° panoramic experience that can be explored in real time, without relying on potentially unreliable reconstruction of the scene geometry. We compare our results with existing omnidirectional stereo and image-based rendering methods to demonstrate the benefit of our approach, which is the first to enable casual consumers to capture and view high-quality 360° panoramas with motion parallax.
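Omitting the flow-based warp itself, the view-selection half of such a renderer can be sketched as follows: for the desired viewing angle of one eye, pick the two captured views that bracket it on the sweep circle and compute a blend weight between them. All names below are illustrative, not the authors' code.

```python
import numpy as np

def nearest_views_and_weight(view_angles_rad, desired_angle_rad):
    """Pick the two captured views bracketing a desired viewing angle.

    view_angles_rad: angles of the captured views on the sweep circle.
    Returns (i, j, alpha): the novel view is rendered by warping views
    i and j toward each other (flow-based in the paper) and blending
    them with weights (1 - alpha, alpha).
    """
    rel = (np.asarray(view_angles_rad) - desired_angle_rad) % (2 * np.pi)
    j = int(np.argmin(rel))           # first view at/after the desired angle
    i = int(np.argmax(rel))           # last view before it
    gap_i = 2 * np.pi - rel[i]        # angular gap from view i to the target
    alpha = gap_i / (gap_i + rel[j])  # fraction of the way from i to j
    return i, j, alpha
```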

Citations: 37
Varifocal Occlusion for Optical See-Through Head-Mounted Displays using a Slide Occlusion Mask.
IF 5.2 Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2019-05-01 DOI: 10.1109/TVCG.2019.2899249
Takumi Hamasaki, Yuta Itoh

We propose a varifocal occlusion technique for optical see-through head-mounted displays (OST-HMDs). Occlusion in OST-HMDs is a powerful visual cue that enables depth perception in augmented reality (AR). Without occlusion, virtual objects rendered by an OST-HMD appear semi-transparent and less realistic. A common occlusion technique is to use spatial light modulators (SLMs) to selectively block incoming light rays at each pixel of the SLM. However, most existing methods create an occlusion mask only at a single, fixed depth, typically at infinity. With recent advances in varifocal OST-HMDs, such traditional fixed-focus occlusion causes a depth mismatch between the occlusion-mask plane and the virtual object to be occluded, leading to an uncomfortable user experience with blurred occlusion masks. In this paper, we therefore propose an OST-HMD system with varifocal occlusion capability: we physically slide a transmissive liquid crystal display (LCD) to optically shift the occlusion plane along the optical path, so that the mask appears sharp and aligns with a virtual image at a given depth. Our solution has several benefits over existing varifocal occlusion methods: it is computationally less demanding and, more importantly, it is optically consistent, i.e., when a user loses focus on the corresponding virtual image, the mask blurs consistently with the virtual image. In our experiments, we build a proof-of-concept varifocal occlusion system implemented with a custom retinal projection display and demonstrate that the system can shift the occlusion plane across depths ranging from 25 cm to infinity.
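The relationship between the slide position and the apparent mask depth follows from the thin-lens equation. As a sketch under a simple magnifier-style relay assumption (the prototype's retinal-projection optics are more involved), placing the LCD a distance s = 1/(1/f + 1/D) from a lens of focal length f forms a virtual image of the mask at depth D, with s approaching f as D goes to infinity:

```python
import math

def lcd_distance_for_occlusion_depth(focal_length_m, occlusion_depth_m):
    """Thin-lens estimate of where to slide the occlusion LCD.

    With the LCD inside the focal length of a converging lens, the lens
    equation 1/s + 1/s_i = 1/f with a virtual image at s_i = -D gives
    s = 1 / (1/f + 1/D); the mask then appears sharp at depth D.
    """
    return 1.0 / (1.0 / focal_length_m + 1.0 / occlusion_depth_m)

# Example with assumed optics: a 50 mm lens, mask placed for 25 cm depth.
# lcd_distance_for_occlusion_depth(0.05, 0.25)     -> ~0.0417 m
# lcd_distance_for_occlusion_depth(0.05, math.inf) -> 0.05 m (infinity focus)
```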

Citations: 33