
Latest Publications in ACM Transactions on Applied Perception

Improving the Perception of Mid-Air Tactile Shapes With Spatio-Temporally-Modulated Tactile Pointers
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-07-29 · DOI: 10.1145/3611388
Lendy Mulot, Thomas Howard, C. Pacchierotti, M. Marchal
Ultrasound mid-air haptic (UMH) devices can remotely render vibrotactile shapes on the skin of unequipped users, e.g., to draw haptic icons or render virtual object shapes. Spatio-temporal modulation (STM), the state-of-the-art UMH shape rendering method, provides great freedom in shape design and produces the strongest possible stimuli for this technology. Yet, STM shapes are often reported to be blurry, complicating shape identification. Dynamic tactile pointers (DTP) were recently introduced as a technique to overcome this issue. By tracing a contour with an amplitude-modulated focal point, they significantly improve shape identification accuracy over STM, but at the cost of much lower stimulus intensity. Building upon this, we propose Spatio-temporally-modulated Tactile Pointers (STP), a novel method for rendering clearer and sharper UMH shapes while at the same time producing strong vibrotactile sensations. We ran two human-participant experiments, which show that STP shapes are perceived as significantly stronger than DTP shapes, while shape identification accuracy is significantly improved over STM and on par with that obtained with DTP. Our work has implications for effective shape rendering with UMH, and provides insights that could inform future psychophysical investigation into vibrotactile shape perception in UMH.
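To make the three rendering strategies concrete, the following Python sketch shows how their focal-point trajectories could be parameterized for a circular contour. It is our reading of the abstract, not the authors' implementation; in particular, the interpretation of an STP pointer as a small STM pattern traced along the shape, and all rates, radii, and the update rate, are illustrative assumptions.

```python
# A minimal sketch (our reading, not the authors' code) of how the three
# rendering strategies could be parameterized on a circular contour.
# All rates, radii, and the update rate are illustrative assumptions.
import numpy as np

def circle(t, radius=0.03):
    """Points on a circular contour (metres), parameter t in [0, 1)."""
    ang = 2 * np.pi * t
    return np.stack([radius * np.cos(ang), radius * np.sin(ang)], axis=-1)

def stm(ts, draw_hz=70.0):
    """STM: the focal point retraces the whole contour at a high draw rate,
    at full amplitude everywhere (strong, but reportedly blurry)."""
    return circle((ts * draw_hz) % 1.0), np.ones_like(ts)

def dtp(ts, trace_hz=1.0, am_hz=200.0):
    """DTP: a single focal point moves slowly along the contour while its
    amplitude is sinusoidally modulated (sharper, but much weaker)."""
    amp = 0.5 * (1 + np.sin(2 * np.pi * am_hz * ts))   # AM envelope
    return circle((ts * trace_hz) % 1.0), amp

def stp(ts, trace_hz=1.0, pointer_hz=70.0, pointer_radius=0.004):
    """STP, as we read the abstract: the 'pointer' is itself a small
    spatio-temporally modulated pattern whose center moves slowly along
    the shape, combining STM-like intensity with DTP-like localization."""
    center = circle((ts * trace_hz) % 1.0)
    local = circle((ts * pointer_hz) % 1.0, radius=pointer_radius)
    return center + local, np.ones_like(ts)

ts = np.arange(0.0, 1.0, 1e-4)   # one second at an assumed 10 kHz update rate
for name, fn in [("STM", stm), ("DTP", dtp), ("STP", stp)]:
    pos, amp = fn(ts)
    print(name, pos.shape, f"mean amplitude {amp.mean():.2f}")
```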
{"title":"Improving the Perception of Mid-Air Tactile Shapes With Spatio-Temporally-Modulated Tactile Pointers","authors":"Lendy Mulot, Thomas Howard, C. Pacchierotti, M. Marchal","doi":"10.1145/3611388","DOIUrl":"https://doi.org/10.1145/3611388","url":null,"abstract":"Ultrasound mid-air haptic (UMH) devices can remotely render vibrotactile shapes on the skin of unequipped users, e.g., to draw haptic icons or render virtual object shapes. Spatio-temporal modulation (STM), the state-of-the-art UMH shape rendering method, provides large freedom in shape design and produces the strongest possible stimuli for this technology. Yet, STM shapes are often reported to be blurry, complicating shape identification. Dynamic tactile pointers (DTP) were recently introduced as a technique to overcome this issue. By tracing a contour with an amplitude-modulated focal point, they significantly improve shape identification accuracy over STM, but at the cost of much lower stimulus intensity. Building upon this, we propose Spatio-temporally-modulated Tactile Pointers (STP), a novel method for rendering clearer and sharper UMH shapes while at the same time producing strong vibrotactile sensations. We ran two human participant experiments, which show that STP shapes are perceived as significantly stronger than DTP shapes, while shape identification accuracy is significantly improved over STM and on par with that obtained with DTP. Our work has implications for effective shape rendering with UMH, and provides insights which could inform future psychophysical investigation into vibrotactile shape perception in UMH.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44964635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Twin identification over viewpoint change: A deep convolutional neural network surpasses humans
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-07-20 · DOI: https://dl.acm.org/doi/10.1145/3609224
Connor J. Parde, Virginia E. Strehle, Vivekjyoti Banerjee, Ying Hu, Jacqueline G. Cavazos, Carlos D. Castillo, Alice J. O’Toole

Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity, general imposters (different identities from similar demographic groups), and twin imposters (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree profile. Accuracy for discriminating matched-identity pairs from twin-imposter pairs and general-imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above that of all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types (range r = 0.38 to r = 0.63), suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that the DCNN performs at a level at or above humans, and suggest a degree of parity between the features used by humans and the DCNN.
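The item-level analysis described above reduces to correlating per-pair machine similarity scores with mean human ratings. The sketch below shows the shape of that computation on synthetic data; the cosine-similarity scoring and all numbers are assumptions, not the study's materials or the Ranjan et al. network.

```python
# A minimal sketch of the item-level analysis, on synthetic data (not the
# study's stimuli). We assume the DCNN scores an image pair by the cosine
# similarity of the two face embeddings it produces.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    """Identity evidence for each pair: cosine of the two embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return np.sum(a * b, axis=-1)

n_pairs, dim = 60, 512                    # e.g., 512-D face descriptors
emb_a = rng.normal(size=(n_pairs, dim))   # stand-ins for network outputs
emb_b = emb_a + rng.normal(scale=2.0, size=(n_pairs, dim))

machine = cosine_similarity(emb_a, emb_b)
# Simulated mean human ratings that loosely track the machine scores:
human = machine + rng.normal(scale=0.05, size=n_pairs)

r = np.corrcoef(machine, human)[0, 1]
print(f"item-level Pearson r = {r:.2f}")  # the paper reports r = 0.38 to 0.63
```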

{"title":"Twin identification over viewpoint change: A deep convolutional neural network surpasses humans","authors":"Connor J. Parde, Virginia E. Strehle, Vivekjyoti Banerjee, Ying Hu, Jacqueline G. Cavazos, Carlos D. Castillo, Alice J. O’Toole","doi":"https://dl.acm.org/doi/10.1145/3609224","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3609224","url":null,"abstract":"<p>Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly-similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (<i>N</i> = 87) viewed pairs of face images of three types: same-identity, general imposters (different identities from similar demographic groups), and twin imposters (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree-profile. Accuracy for discriminating matched-identity pairs from twin-imposter pairs and general imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types [range <i>r</i> = 0.38 to <i>r</i> = 0.63], suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that the DCNN performs at a level at or above humans, and suggest a degree of parity between the features used by humans and the DCNN.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effect of Subthreshold Electrotactile Stimulation on the Perception of Electrovibration
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-05-29 · DOI: https://dl.acm.org/doi/10.1145/3599970
Jagan Krishnasamy Balasubramanian, Rahul Kumar Ray, Manivannan Muniyandi

Electrovibration is used in touch-enabled devices to render different textures. Tactile sub-modal stimuli can enhance texture perception when presented along with electrovibration stimuli, and the perception of texture depends on the threshold of electrovibration. In the current study, we conducted a psychophysical experiment on 13 participants to investigate the effect of introducing a subthreshold electrotactile stimulus (SES) on the perception of electrovibration. When tactile sub-modal stimuli interact, one stimulus can be masked in the presence of another; this study explored the tactile masking of electrovibration by an electrotactile stimulus. The results indicate a reduction of the electrovibration threshold by 12.46% and 6.75% when the electrotactile stimulus was at 90% and 80% of its perception threshold, respectively. This method was tested over a wide range of frequencies, from 20 Hz to 320 Hz in the tuning curve, and the variation in percentage reduction with frequency is reported. Another experiment measured the perception of the combined stimuli on a Likert scale. The results showed that the perception was more inclined towards the electrovibration at 80% of SES and was indifferent at 90% of SES. The reduction in the threshold of electrovibration reveals that the effect of tactile masking by an electrotactile stimulus was not prevalent under subthreshold conditions. This study provides significant insights into developing a future texture rendering algorithm based on tactile sub-modal stimuli.
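As an illustration of how such a detection threshold, and the reported percentage reduction, could be measured, here is a minimal staircase sketch with a simulated observer. It is not the authors' procedure; the step size, reversal count, and simulated thresholds are assumptions.

```python
# A minimal sketch (not the authors' protocol) of a 1-up/1-down staircase
# estimating an electrovibration detection threshold with and without a
# subthreshold masker. The simulated observer and all numbers are assumed.
import random

def staircase(true_threshold, start=2.0, step=0.05, reversals_needed=8):
    """Threshold estimate: mean stimulus level over the recorded reversals."""
    level, going_down, reversal_levels = start, True, []
    while len(reversal_levels) < reversals_needed:
        detected = level > true_threshold + random.gauss(0, 0.02)
        if (detected and not going_down) or (not detected and going_down):
            reversal_levels.append(level)   # direction change = reversal
        going_down = detected
        level += -step if detected else step
    return sum(reversal_levels) / len(reversal_levels)

random.seed(1)
baseline = staircase(true_threshold=1.00)   # electrovibration alone
with_ses = staircase(true_threshold=0.88)   # with the subthreshold stimulus
reduction = 100 * (baseline - with_ses) / baseline
print(f"threshold reduction: {reduction:.1f}%")   # paper: 12.46% at 90% SES
```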

{"title":"Effect of Subthreshold Electrotactile Stimulation On The Perception of Electrovibration","authors":"Jagan Krishnasamy Balasubramanian, Rahul Kumar Ray, Manivannan Muniyandi","doi":"https://dl.acm.org/doi/10.1145/3599970","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3599970","url":null,"abstract":"<p>Electrovibration is used in touch enabled devices to render different textures. Tactile sub-modal stimuli can enhance texture perception when presented along with electrovibration stimuli. Perception of texture depends on the threshold of electrovibration. In the current study, we have conducted a psychophysical experiment on 13 participants to investigate the effect of introducing a subthreshold electrotactile stimulus (SES) to the perception of electrovibration. Interaction of tactile sub-modal stimuli causes masking of a stimulus in the presence of another stimulus. This study explored the occurrence of tactile masking of electrovibration by electrotactile stimulus. The results indicate the reduction of electrovibration threshold by 12.46% and 6.75% when the electrotactile stimulus was at 90% and 80% of its perception threshold, respectively. This method was tested over a wide range of frequencies from 20 Hz to 320 Hz in the tuning curve, and the variation in percentage reduction with frequency is reported. Another experiment was conducted to measure the perception of combined stimuli on Likert’s scale. The results showed that the perception was more inclined towards the electrovibration at 80% of SES and was indifferent at 90% of SES. The reduction in the threshold of electrovibration reveals that the effect of tactile masking by electrotactile stimulus was not prevalent under subthreshold conditions. This study provides significant insights into developing a texture rendering algorithm based on tactile sub-modal stimuli in the future.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Salient-Centeredness and Saliency Size in Computational Aesthetics
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-04-21 · DOI: https://dl.acm.org/doi/10.1145/3588317
Weng Khuan Hoh, Fang-Lue Zhang, Neil A. Dodgson

We investigate the optimal aesthetic location and size of a single dominant salient region in a photographic image. Existing algorithms for photographic composition do not take full account of the spatial positioning or sizes of these salient regions. We present a set of experiments to assess aesthetic preferences, inspired by theories of centeredness, principal lines, and Rule-of-Thirds. Our experimental results show a clear preference for the salient region to be centered in the image and that there is a preferred size of non-salient border around this salient region. We thus propose a novel image cropping mechanism for images containing a single salient region to achieve the best aesthetic balance. Our results show that the Rule-of-Thirds guideline is not generally valid but also allow us to hypothesize in which situations it is useful and in which it is inappropriate.
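In its simplest form, a cropping mechanism of the kind described could center the crop on the salient region's bounding box and pad it with a fixed fraction of non-salient border. The sketch below is an assumption-level illustration, not the authors' algorithm; the 20% border fraction and the 0.5 saliency threshold are placeholders for whatever preferred values the experiments identify.

```python
# A minimal sketch (our reading, not the authors' method) of a crop that
# centers a single salient region and leaves a fixed fraction of
# non-salient border around it. Border fraction and threshold are assumed.
import numpy as np

def salient_centered_crop(saliency, border_frac=0.20, thresh=0.5):
    """saliency: 2-D array in [0, 1]. Returns (top, left, bottom, right)."""
    ys, xs = np.nonzero(saliency >= thresh)
    if ys.size == 0:
        raise ValueError("no salient region above threshold")
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    h, w = bottom - top, right - left
    pad_y, pad_x = int(round(border_frac * h)), int(round(border_frac * w))
    H, W = saliency.shape
    return (max(0, top - pad_y), max(0, left - pad_x),
            min(H, bottom + pad_y), min(W, right + pad_x))

# Toy saliency map with one bright blob:
sal = np.zeros((100, 150))
sal[40:60, 70:100] = 1.0
print(salient_centered_crop(sal))   # -> (36, 64, 64, 106)
```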

{"title":"Salient-Centeredness and Saliency Size in Computational Aesthetics","authors":"Weng Khuan Hoh, Fang-Lue Zhang, Neil A. Dodgson","doi":"https://dl.acm.org/doi/10.1145/3588317","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3588317","url":null,"abstract":"<p>We investigate the optimal aesthetic location and size of a single dominant salient region in a photographic image. Existing algorithms for photographic composition do not take full account of the spatial positioning or sizes of these salient regions. We present a set of experiments to assess aesthetic preferences, inspired by theories of centeredness, principal lines, and Rule-of-Thirds. Our experimental results show a clear preference for the salient region to be centered in the image and that there is a preferred size of non-salient border around this salient region. We thus propose a novel image cropping mechanism for images containing a single salient region to achieve the best aesthetic balance. Our results show that the Rule-of-Thirds guideline is not generally valid but also allow us to hypothesize in which situations it is useful and in which it is inappropriate.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning GAN-Based Foveated Reconstruction to Recover Perceptually Important Image Features
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-04-21 · DOI: https://dl.acm.org/doi/10.1145/3583072
Luca Surace, Marek Wernikowski, Cara Tursun, Karol Myszkowski, Radosław Mantiuk, Piotr Didyk

A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. The use of generative adversarial networks (GANs) has recently been shown to be a promising solution for such a task, as they can successfully hallucinate missing image information. As in the case of other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work, we consider the problem of efficiently guiding the training of foveated reconstruction techniques such that they are more aware of the capabilities and limitations of the human visual system, and thus can reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and to focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination for input samples of different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasizes the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions by using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in the perceived image reconstruction quality compared with the standard GAN-based training approach.
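The input side of such a pipeline is a sparse sampling mask whose density follows retinal sensitivity. The sketch below shows one plausible way to build such a mask; the hyperbolic falloff and its constant are assumptions, not the paper's sampling model.

```python
# A minimal sketch (an assumption-level illustration, not the paper's
# pipeline) of the input to foveated reconstruction: keep each pixel with
# a probability that falls off with eccentricity from the gaze point,
# mimicking declining retinal sensitivity. The falloff constant is made up.
import numpy as np

def foveated_mask(height, width, gaze, e2=80.0, rng=None):
    """Boolean keep-mask; sampling density ~ 1 / (1 + eccentricity / e2)."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.mgrid[0:height, 0:width]
    ecc = np.hypot(ys - gaze[0], xs - gaze[1])   # eccentricity in pixels
    density = 1.0 / (1.0 + ecc / e2)
    return rng.random((height, width)) < density

mask = foveated_mask(256, 256, gaze=(128, 128))
print(f"kept {mask.mean():.1%} of pixels")       # sparse input for the GAN
# A reconstruction network (e.g., a GAN generator) would then be trained
# to hallucinate the full image from image * mask.
```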

{"title":"Learning GAN-Based Foveated Reconstruction to Recover Perceptually Important Image Features","authors":"Luca Surace, Marek Wernikowski, Cara Tursun, Karol Myszkowski, Radosław Mantiuk, Piotr Didyk","doi":"https://dl.acm.org/doi/10.1145/3583072","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3583072","url":null,"abstract":"<p>A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. The use of generative adversarial networks (GANs) has recently been shown to be a promising solution for such a task, as they can successfully hallucinate missing image information. As in the case of other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work,we consider the problem of efficiently guiding the training of foveated reconstruction techniques such that they are more aware of the capabilities and limitations of the human visual system, and thus can reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination in case of input samples with different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasized the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions by using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in the perceived image reconstruction quality compared with the standard GAN-based training approach.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying Lines and Interpreting Vertical Jumps in Eye Tracking Studies of Reading Text and Code
IF 1.6 · CAS Zone 4 (Computer Science) · Q2 Computer Science · Pub Date: 2023-04-06 · DOI: https://dl.acm.org/doi/10.1145/3579357
Mor Shamy, Dror G. Feitelson

Eye tracking studies have shown that reading code, in contradistinction to reading text, includes many vertical jumps. As different lines of code may have quite different functions (e.g., variable definition, flow control, or computation), it is important to accurately identify the lines being read. We design experiments that require a specific line of text to be scrutinized. Using the distribution of gazes around this line, we then calculate how the precision with which we can identify the line being read depends on the font size and spacing. The results indicate that, even after correcting for systematic bias, unnaturally large fonts and spacing may be required for reliable line identification.

Interestingly, during the experiments, the participants also repeatedly re-checked their task and whether they were looking at the correct line, leading to vertical jumps similar to those observed when reading code. This suggests that observed reading patterns may be “inefficient,” in the sense that participants feel the need to repeat actions beyond the minimal number apparently required for the task. This may have implications for the interpretation of reading patterns. In particular, reading does not reflect only the extraction of information from the text or code. Rather, reading patterns may also reflect other types of activities, such as getting a general orientation, and searching for specific locations in the context of performing a particular task.
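The line-identification precision studied here has a simple analytical core: if vertical gaze error is roughly Gaussian, the probability that a fixation is attributed to the correct line is the Gaussian mass within half a line spacing of that line's center. The sketch below computes this under an assumed error magnitude; the numbers are illustrative, not the paper's measurements.

```python
# A minimal sketch (with assumed numbers, not the paper's data) of the
# analytical core: under zero-mean Gaussian vertical gaze error, the chance
# that a fixation lands nearest to the correct line is the Gaussian mass
# within half a line spacing of that line's center.
import math

def line_id_accuracy(line_spacing_px, gaze_sd_px):
    """P(nearest line = true line) for N(0, gaze_sd) vertical error."""
    z = (line_spacing_px / 2) / (gaze_sd_px * math.sqrt(2))
    return math.erf(z)   # = CDF(+s/2) - CDF(-s/2)

for spacing in (20, 30, 40, 60):                    # pixels per text line
    acc = line_id_accuracy(spacing, gaze_sd_px=15)  # assumed eye+tracker noise
    print(f"{spacing:3d} px spacing -> {acc:.1%} of lines correctly identified")
```

With a 15 px error, the sketch needs roughly 60 px of line spacing to reach about 95% correct identification, in line with the finding that unnaturally large fonts and spacing may be required for reliable line identification.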

{"title":"Identifying Lines and Interpreting Vertical Jumps in Eye Tracking Studies of Reading Text and Code","authors":"Mor Shamy, Dror G. Feitelson","doi":"https://dl.acm.org/doi/10.1145/3579357","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3579357","url":null,"abstract":"<p>Eye tracking studies have shown that reading code, in contradistinction to reading text, includes many vertical jumps. As different lines of code may have quite different functions (e.g., variable definition, flow control, or computation), it is important to accurately identify the lines being read. We design experiments that require a specific line of text to be scrutinized. Using the distribution of gazes around this line, we then calculate how the precision with which we can identify the line being read depends on the font size and spacing. The results indicate that, even after correcting for systematic bias, unnaturally large fonts and spacing may be required for reliable line identification.</p><p>Interestingly, during the experiments, the participants also repeatedly re-checked their task and if they were looking at the correct line, leading to vertical jumps similar to those observed when reading code. This suggests that observed reading patterns may be “inefficient,” in the sense that participants feel the need to repeat actions beyond the minimal number apparently required for the task. This may have implications regarding the interpretation of reading patterns. In particular, reading does not reflect only the extraction of information from the text or code. Rather, reading patterns may also reflect other types of activities, such as getting a general orientation, and searching for specific locations in the context of performing a particular task.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0