
Latest Publications in ACM Transactions on Applied Perception

Effect of Subthreshold Electrotactile Stimulation on the Perception of Electrovibration
IF 1.6 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-05-29 · DOI: https://dl.acm.org/doi/10.1145/3599970
Jagan Krishnasamy Balasubramanian, Rahul Kumar Ray, Manivannan Muniyandi

Electrovibration is used in touch-enabled devices to render different textures. Tactile sub-modal stimuli can enhance texture perception when presented along with electrovibration stimuli, and the perception of texture depends on the electrovibration threshold. In the current study, we conducted a psychophysical experiment with 13 participants to investigate the effect of introducing a subthreshold electrotactile stimulus (SES) on the perception of electrovibration. When tactile sub-modal stimuli interact, one stimulus can be masked in the presence of another; this study explored whether such tactile masking of electrovibration by an electrotactile stimulus occurs. The results indicate that the electrovibration threshold was reduced by 12.46% and 6.75% when the electrotactile stimulus was at 90% and 80% of its perception threshold, respectively. The method was tested over a wide range of frequencies, from 20 Hz to 320 Hz, in the tuning curve, and the variation in percentage reduction with frequency is reported. A second experiment measured the perception of the combined stimuli on a Likert scale. The results showed that perception was more inclined towards the electrovibration at 80% of SES and was indifferent at 90% of SES. The reduction in the electrovibration threshold reveals that tactile masking by the electrotactile stimulus was not prevalent under subthreshold conditions. This study provides significant insights for developing future texture rendering algorithms based on tactile sub-modal stimuli.
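The reported percentage reductions can be illustrated with a small calculation. A minimal sketch in Python (the article itself uses no code); the baseline threshold value is hypothetical, and only the two percentages come from the abstract:

```python
def threshold_reduction(baseline, masked):
    """Percent reduction of a detection threshold when a masking
    stimulus is present (positive = threshold lowered)."""
    return 100.0 * (baseline - masked) / baseline

# Reported: 12.46% reduction at 90% SES, 6.75% at 80% SES.
# With a hypothetical baseline threshold of 10.0 (arbitrary units),
# the corresponding masked thresholds would be:
masked_at_90 = 10.0 * (1 - 0.1246)  # 8.754
masked_at_80 = 10.0 * (1 - 0.0675)  # 9.325

print(round(threshold_reduction(10.0, masked_at_90), 2))  # 12.46
print(round(threshold_reduction(10.0, masked_at_80), 2))  # 6.75
```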

Citations: 0
Salient-Centeredness and Saliency Size in Computational Aesthetics
IF 1.6 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-04-21 · DOI: https://dl.acm.org/doi/10.1145/3588317
Weng Khuan Hoh, Fang-Lue Zhang, Neil A. Dodgson

We investigate the optimal aesthetic location and size of a single dominant salient region in a photographic image. Existing algorithms for photographic composition do not take full account of the spatial positioning or sizes of these salient regions. We present a set of experiments to assess aesthetic preferences, inspired by theories of centeredness, principal lines, and the Rule-of-Thirds. Our experimental results show a clear preference for the salient region to be centered in the image, and that there is a preferred size of non-salient border around this salient region. We thus propose a novel image cropping mechanism for images containing a single salient region to achieve the best aesthetic balance. Our results show that the Rule-of-Thirds guideline is not generally valid; they also allow us to hypothesize about the situations in which it is useful and those in which it is inappropriate.
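The centering-plus-border idea can be sketched as a simple crop computation. This is a hypothetical illustration, not the authors' actual cropping mechanism; the `border_ratio` parameter and all coordinates are assumptions:

```python
def center_crop(img_w, img_h, salient_box, border_ratio=0.25):
    """Return a crop rectangle (left, top, right, bottom) that centers
    the salient bounding box (x, y, w, h) and surrounds it with a
    non-salient border proportional to the region's size, clamped to
    the image bounds."""
    x, y, w, h = salient_box
    cx, cy = x + w / 2.0, y + h / 2.0
    crop_w = w * (1.0 + 2.0 * border_ratio)
    crop_h = h * (1.0 + 2.0 * border_ratio)
    left = max(0.0, cx - crop_w / 2.0)
    top = max(0.0, cy - crop_h / 2.0)
    right = min(float(img_w), cx + crop_w / 2.0)
    bottom = min(float(img_h), cy + crop_h / 2.0)
    return left, top, right, bottom

# A 200x200 salient region centered at (500, 400) in a 1000x800 image:
crop = center_crop(1000, 800, (400, 300, 200, 200))
print(crop)  # (350.0, 250.0, 650.0, 550.0)
```

The crop keeps the salient center at the crop center whenever clamping to the image bounds does not intervene.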

Citations: 0
Learning GAN-Based Foveated Reconstruction to Recover Perceptually Important Image Features
IF 1.6 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-04-21 · DOI: https://dl.acm.org/doi/10.1145/3583072
Luca Surace, Marek Wernikowski, Cara Tursun, Karol Myszkowski, Radosław Mantiuk, Piotr Didyk

A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. Generative adversarial networks (GANs) have recently been shown to be a promising solution for this task, as they can successfully hallucinate missing image information. As with other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work, we consider the problem of efficiently guiding the training of foveated reconstruction techniques so that they are more aware of the capabilities and limitations of the human visual system and can thus reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and to focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination for input samples of different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasizes the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in perceived image reconstruction quality compared with the standard GAN-based training approach.
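The eccentricity-dependent sampling the abstract refers to can be sketched with a standard acuity-falloff model. The 1/(1 + e/e2) form and the e2 constant are common choices in the foveated-rendering literature, not values taken from this paper:

```python
def relative_density(eccentricity_deg, e2=2.3):
    """Relative sample density at a given retinal eccentricity (degrees),
    normalized to 1.0 at the fovea and falling off as 1 / (1 + e / e2).
    e2 is the eccentricity at which density halves (an assumed constant)."""
    return 1.0 / (1.0 + eccentricity_deg / e2)

print(relative_density(0.0))  # 1.0 at the fovea
print(relative_density(2.3))  # 0.5 at e2 degrees; density keeps falling outward
```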

Citations: 0
Identifying Lines and Interpreting Vertical Jumps in Eye Tracking Studies of Reading Text and Code
IF 1.6 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-04-06 · DOI: https://dl.acm.org/doi/10.1145/3579357
Mor Shamy, Dror G. Feitelson

Eye tracking studies have shown that reading code, in contradistinction to reading text, includes many vertical jumps. As different lines of code may have quite different functions (e.g., variable definition, flow control, or computation), it is important to accurately identify the lines being read. We design experiments that require a specific line of text to be scrutinized. Using the distribution of gazes around this line, we then calculate how the precision with which we can identify the line being read depends on the font size and spacing. The results indicate that, even after correcting for systematic bias, unnaturally large fonts and spacing may be required for reliable line identification.

Interestingly, during the experiments the participants also repeatedly re-checked their task and whether they were looking at the correct line, leading to vertical jumps similar to those observed when reading code. This suggests that observed reading patterns may be “inefficient,” in the sense that participants feel the need to repeat actions beyond the minimal number apparently required for the task. This may have implications for the interpretation of reading patterns. In particular, reading does not reflect only the extraction of information from the text or code. Rather, reading patterns may also reflect other types of activities, such as getting a general orientation, or searching for specific locations in the context of performing a particular task.
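The line-identification problem described above can be sketched as mapping vertical gaze coordinates to uniformly spaced text lines. Everything here (line geometry, sample values) is an illustrative assumption, not the authors' analysis; it only shows why larger line spacing tolerates more vertical gaze error:

```python
def line_of_gaze(gaze_y, line_height, top=0.0):
    """Index of the text line containing a vertical gaze coordinate,
    assuming uniformly spaced lines of height `line_height` from `top`."""
    return int((gaze_y - top) // line_height)

def hit_rate(gaze_ys, target_line, line_height, top=0.0):
    """Fraction of gaze samples attributed to the intended line."""
    hits = sum(1 for y in gaze_ys
               if line_of_gaze(y, line_height, top) == target_line)
    return hits / len(gaze_ys)

# Gaze samples aimed at line 5 (y in [100, 120) for 20 px lines):
# one noisy sample at y = 122 is misattributed to line 6.
samples = [104.0, 111.0, 118.0, 122.0]
print(hit_rate(samples, 5, 20.0))  # 0.75
```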

Citations: 0
Gap Detection in Pairs of Ultrasound Mid-air Vibrotactile Stimuli
IF 1.6 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3570904
Thomas Howard, Karina Driller, William Frier, Claudio Pacchierotti, Maud Marchal, Jessica Hartcher-O’Brien

Ultrasound mid-air haptic (UMH) devices are a novel tool for haptic feedback, capable of providing localized vibrotactile stimuli to users at a distance. UMH applications largely rely on generating tactile shape outlines on the users’ skin. Here we investigate how to achieve sensations of continuity or of gaps within such two-dimensional curves by studying the perception of pairs of amplitude-modulated focused ultrasound stimuli. On the one hand, we aim to investigate perceptual effects that may arise from providing simultaneous UMH stimuli; on the other hand, we wish to provide perception-based rendering guidelines for generating continuous or discontinuous sensations of tactile shapes. Finally, we hope to contribute toward a measure of the perceptually achievable resolution of UMH interfaces. We performed a user study to identify how far apart two focal points need to be to elicit a perceptual experience of two distinct stimuli separated by a gap. The mean gap-detection threshold was found at a 32.3-mm spacing between focal points, but high within- and between-subject variability was observed. Pairs spaced below 15 mm were consistently (>95%) perceived as a single stimulus, while pairs spaced 45 mm apart were mostly (84%) perceived as two separate stimuli. To investigate the observed variability, we resort to acoustic simulations of the resulting pressure fields. These show a non-linear evolution of the actual peak-pressure spacing as a function of the nominal focal-point spacing. Beyond an initial threshold in spacing (between 15 and 18 mm), which we believe to be related to the perceived size of a focal point, the probability of detecting a gap between focal points appears to increase linearly with spacing. Our work highlights physical interactions and perceptual effects to consider when designing or investigating the perception of UMH shapes.
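The reported figures suggest a roughly piecewise-linear psychometric function for gap detection. A hypothetical sketch fitted by eye to the abstract's numbers (near 0 below 15 mm, about 0.84 at 45 mm); the onset and slope values are illustrative assumptions, not the paper's model:

```python
def p_two_stimuli(spacing_mm, onset_mm=15.0, slope_per_mm=0.028):
    """Illustrative probability of perceiving two separate stimuli as a
    piecewise-linear function of focal-point spacing: 0 up to the onset,
    then rising linearly, clamped to [0, 1]."""
    return max(0.0, min(1.0, slope_per_mm * (spacing_mm - onset_mm)))

print(p_two_stimuli(10.0))             # 0.0: well below onset, one stimulus
print(round(p_two_stimuli(45.0), 2))   # 0.84: matches the reported 84% at 45 mm
```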

Citations: 0
Virtual Big Heads in Extended Reality: Estimation of Ideal Head Scales and Perceptual Thresholds for Comfort and Facial Cues
IF 1.6 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3571074
Zubin Choudhary, Austin Erickson, Nahal Norouzi, Kangsoo Kim, Gerd Bruder, Gregory Welch

Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Although traditional face-to-face communication is limited by users’ proximity, meaning that another human’s non-verbal embodied cues become more difficult to perceive the farther one is away from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human’s head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments in this article. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants’ ability to perceive facial expressions as well as their sense of comfort and feeling of “uncanniness” over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and identify a virtual human over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
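The core mechanism, scaling the rendered head with viewer distance so that facial cues remain legible, can be sketched as a clamped linear scaling function. The base distance and maximum scale here are illustrative placeholders, not the thresholds measured in the study.

```python
def big_head_scale(distance_m, base_distance_m=1.0, max_scale=3.0):
    """Distance-proportional head scaling for the Big Head technique.

    The head renders at 1x up to base_distance_m, then grows linearly
    with distance so it keeps a roughly constant angular size on the
    viewer's retina, capped at max_scale to limit discomfort.
    Parameter values are hypothetical, chosen only for illustration.
    """
    scale = max(1.0, distance_m / base_distance_m)
    return min(scale, max_scale)
```

For example, a viewer 2 m away would see a 2x head, while beyond 3 m the scale saturates at the cap; a perceptual study like the one above would then estimate where that cap should sit for comfort.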

Citations: 0
A Content-adaptive Visibility Predictor for Perceptually Optimized Image Blending
IF 1.6 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3565972
Taiki Fukiage, Takeshi Oishi

The visibility of an image semi-transparently overlaid on another image varies significantly, depending on the content of the images. This makes it difficult to maintain the desired visibility level when the image content changes. To tackle this problem, we developed a perceptual model to predict the visibility of the blended results of arbitrarily combined images. Conventional visibility models cannot reflect the dependence of the suprathreshold visibility of the blended images on the appearance of the pre-blended image content. Therefore, we proposed a visibility model with a content-adaptive feature aggregation mechanism, which integrates the visibility of each image feature (e.g., spatial frequency and colors) after applying weights that are adaptively determined according to the appearance of the input image. We conducted a large-scale psychophysical experiment to develop the visibility predictor model. Ablation studies revealed the importance of the adaptive weighting mechanism in accurately predicting the visibility of blended images. We also proposed a technique for optimizing the image opacity such that users can set the visibility of the target image to an arbitrary level. Our evaluation revealed that the proposed perceptually optimized image blending was effective under practical conditions.
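The two ideas above, weighted aggregation of per-feature visibilities and opacity optimization toward a target visibility level, can be sketched as follows. Both functions are simplified illustrations under stated assumptions (a plain weighted mean instead of the authors' trained aggregation network, and a bisection search assuming visibility increases monotonically with opacity):

```python
def blended_visibility(feature_vis, weights):
    """Content-adaptive aggregation: weighted mean of per-feature
    visibilities (e.g. spatial-frequency and color channels).

    In the actual model the weights would be predicted from the input
    image content; here they are passed in directly for illustration.
    """
    total = sum(weights)
    return sum(v * w for v, w in zip(feature_vis, weights)) / total

def opacity_for_target(target, vis_at, lo=0.0, hi=1.0, iters=40):
    """Bisection search for the opacity alpha whose predicted
    visibility vis_at(alpha) matches the target level.

    Assumes vis_at is monotonically increasing in alpha on [lo, hi].
    """
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if vis_at(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A visibility predictor of this shape lets an application re-solve for opacity whenever the underlying image content changes, which is exactly the use case the abstract motivates.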

Citations: 0