
Proceedings of the ACM Web Conference 2023: Latest Publications

A Passage-Level Reading Behavior Model for Mobile Search
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583343
Zhijing Wu, Jiaxin Mao, Kedi Xu, Dandan Song, Heyan Huang
Reading is a vital and complex cognitive activity during users’ information-seeking process. Several studies have focused on understanding users’ reading behavior in desktop search. Their findings greatly contribute to the design of information retrieval models. However, little is known about how users read a result in mobile search, although search currently happens more frequently in mobile scenarios. In this paper, we conduct a lab-based user study to investigate users’ fine-grained reading behavior patterns in mobile search. We find that users’ reading attention allocation is strongly affected by several behavior biases, such as position and selection biases. Inspired by these findings, we propose a probabilistic generative model, the Passage-level Reading behavior Model (PRM), to model users’ reading behavior in mobile search. The PRM utilizes observable passage-level exposure and viewport duration events to infer users’ unobserved skimming event, reading event, and satisfaction perception during the reading process. Besides fitting the passage-level reading behavior, we utilize the fitted parameters of PRM to estimate the passage-level and document-level relevance. Experimental results show that PRM outperforms existing unsupervised relevance estimation models. PRM has strong interpretability and provides valuable insights into the understanding of how users seek and perceive useful information in mobile search.
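The inference step the abstract describes (recovering unobserved skimming/reading events from observed viewport durations) can be illustrated with a deliberately simplified two-state model. The prior, the exponential duration distributions, and all rate values below are invented for illustration and are not the paper's actual PRM parameterization:

```python
import math

# Illustrative simplification of a passage-level reading model (not the
# paper's actual PRM): a latent event (skim vs. read) is drawn per exposed
# passage, and the observed viewport duration is modelled with an
# exponential distribution whose rate depends on the latent event.
# All parameter values below are made up for illustration.

P_READ = 0.4          # prior probability of a "read" event
RATE_SKIM = 1.0       # skims are short: mean duration 1 s
RATE_READ = 0.2       # reads are long: mean duration 5 s

def exp_pdf(x, rate):
    return rate * math.exp(-rate * x)

def posterior_read(duration):
    """P(read | observed viewport duration) by Bayes' rule."""
    like_read = exp_pdf(duration, RATE_READ) * P_READ
    like_skim = exp_pdf(duration, RATE_SKIM) * (1 - P_READ)
    return like_read / (like_read + like_skim)

for d in [0.5, 2.0, 8.0]:
    print(f"duration={d:>4}s  P(read)={posterior_read(d):.3f}")
```

Longer viewport durations shift the posterior toward "read", which is the basic intuition the full model builds on with richer passage-level structure.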
Citations: 1
DANCE: Learning A Domain Adaptive Framework for Deep Hashing
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583445
Haixin Wang, Jinan Sun, Xiang Wei, Shikun Zhang, C. Chen, Xiansheng Hua, Xiao Luo
This paper studies unsupervised domain adaptive hashing, which aims to transfer a hashing model from a label-rich source domain to a label-scarce target domain. Current state-of-the-art approaches generally resolve the problem by integrating pseudo-labeling and domain adaptation techniques into deep hashing paradigms. Nevertheless, they usually suffer from serious class imbalance in pseudo-labels and suboptimal domain alignment caused by neglecting the intrinsic structures of the two domains. To address this issue, we propose a novel method named unbiaseD duAl hashiNg Contrastive lEarning (DANCE) for domain adaptive image retrieval. The core of DANCE is to perform contrastive learning on hash codes at both the instance level and the prototype level. To begin, DANCE utilizes label information to guide instance-level hashing contrastive learning in the source domain. To generate unbiased and reliable pseudo-labels for semantic learning in the target domain, we uniformly select samples around each label embedding in the Hamming space. A momentum-update scheme is also utilized to smooth the optimization process. Additionally, we measure the semantic prototype representations in both source and target domains and incorporate them into a domain-aware prototype-level contrastive learning paradigm, which enhances domain alignment in the Hamming space while maximizing the model capacity. Experimental results on a number of well-known domain adaptive retrieval benchmarks validate the effectiveness of our proposed DANCE compared to a variety of competing baselines in different settings.
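The instance-level step can be sketched as a generic supervised InfoNCE-style loss on tanh-relaxed hash codes; this is an illustrative stand-in, not the paper's implementation, and the shapes and temperature are arbitrary:

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of instance-level
# contrastive learning on relaxed hash codes: codes are tanh-activated
# continuous vectors, and a supervised InfoNCE-style loss pulls together
# codes that share a label and pushes apart the rest.

rng = np.random.default_rng(0)

def contrastive_loss(codes, labels, temperature=0.5):
    codes = np.tanh(codes)                        # relax {-1, +1} hash bits
    codes = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    sim = codes @ codes.T / temperature           # pairwise similarities
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        loss -= np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
    return loss / n

codes = rng.normal(size=(8, 16))    # 8 samples, 16-bit relaxed codes
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(f"loss = {contrastive_loss(codes, labels):.4f}")
```

Minimizing such a loss drives same-label codes toward agreement bit by bit, which is the behavior the binarized hash codes inherit.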
Citations: 0
Disentangling Degree-related Biases and Interest for Out-of-Distribution Generalized Directed Network Embedding
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583271
Hyunsik Yoo, Yeon-Chang Lee, Kijung Shin, Sang-Wook Kim
The goal of directed network embedding is to represent the nodes in a given directed network as embeddings that preserve the asymmetric relationships between nodes. While a number of directed network embedding methods have been proposed, we empirically show that the existing methods lack out-of-distribution generalization abilities against degree-related distributional shifts. To mitigate this problem, we propose ODIN (Out-of-Distribution Generalized Directed Network Embedding), a new directed NE method where we model multiple factors in the formation of directed edges. Then, for each node, ODIN learns multiple embeddings, each of which preserves its corresponding factor, by disentangling interest factors and biases related to in- and out-degrees of nodes. Our experiments on four real-world directed networks demonstrate that disentangling multiple factors enables ODIN to yield out-of-distribution generalized embeddings that are consistently effective under various degrees of shifts in degree distributions. Specifically, ODIN universally outperforms 9 state-of-the-art competitors in 2 LP tasks on 4 real-world datasets under both identical distribution (ID) and non-ID settings. The code is available at https://github.com/hsyoo32/odin.
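The core idea of modeling multiple factors in directed edge formation can be sketched with a toy scoring function that separates a shared interest term from per-node out-degree and in-degree bias terms. The additive form below is an assumption for illustration only, not ODIN's actual objective:

```python
import numpy as np

# Toy factorization of a directed edge score (illustrative; not ODIN's model):
# score(u -> v) = interest_u . interest_v + out_bias(u) + in_bias(v).
# Separating the bias terms from the interest term is what lets the
# interest embeddings stay unpolluted by degree-related distributional shifts.

rng = np.random.default_rng(2)
N, D = 5, 8
interest = rng.normal(size=(N, D))   # disentangled "interest" embeddings
out_bias = rng.normal(size=N)        # out-degree-related bias per node
in_bias = rng.normal(size=N)         # in-degree-related bias per node

def edge_score(u, v):
    return float(interest[u] @ interest[v] + out_bias[u] + in_bias[v])

print(edge_score(0, 1), edge_score(1, 0))  # asymmetric by construction
```

The asymmetry between score(u, v) and score(v, u) comes entirely from the bias terms here, since the dot product of interests is symmetric.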
Citations: 4
EDNet: Attention-Based Multimodal Representation for Classification of Twitter Users Related to Eating Disorders
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583863
Mohammad Abuhassan, Tarique Anwar, Chengfei Liu, H. Jarman, M. Fuller‐Tyszkiewicz
Social media platforms provide rich data sources in several domains. In mental health, individuals experiencing an Eating Disorder (ED) are often hesitant to seek help through conventional healthcare services. However, many people seek help with diet and body image issues on social media. To better distinguish at-risk users who may need help for an ED from those who are simply commenting on ED in social environments, highly sophisticated approaches are required. Assessment of ED risks in such a situation can be done in various ways, and each has its own strengths and weaknesses. Hence, there is a need for and potential benefit of a more complex multimodal approach. To this end, we collect historical tweets, user biographies, and online behaviours of relevant users from Twitter, and generate a reasonably large labelled benchmark dataset. Thereafter, we develop an advanced multimodal deep learning model called EDNet using these data to identify the different types of users with ED engagement (e.g., potential ED sufferers, healthcare professionals, or communicators) and distinguish them from those not experiencing EDs on Twitter. EDNet consists of five deep neural network layers. With the help of its embedding, representation and behaviour modeling layers, it effectively learns the multimodalities of social media. In our experiments, EDNet consistently outperforms all the baseline techniques by significant margins, achieving an accuracy of up to 94.32% and an F1 score of up to 93.91%.
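Attention-based fusion of per-modality representations, the general mechanism named in the title, can be sketched as follows. The three modalities, dimensions, and softmax-attention form are illustrative assumptions, not EDNet's actual architecture:

```python
import numpy as np

# Illustrative sketch (not EDNet's architecture) of attention-based fusion:
# per-modality user embeddings (tweets, biography, behaviour) are combined
# with softmax attention weights computed against a query vector.

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(modality_embs, query):
    """Weight each modality embedding by its attention score w.r.t. a query."""
    scores = np.array([emb @ query for emb in modality_embs])
    weights = softmax(scores)
    fused = sum(w * emb for w, emb in zip(weights, modality_embs))
    return fused, weights

tweets, bio, behaviour = rng.normal(size=(3, 32))   # toy modality embeddings
fused, weights = attention_fuse([tweets, bio, behaviour],
                                query=rng.normal(size=32))
print(weights)   # attention distribution over the three modalities
```

The attention weights make the fusion interpretable: they show which modality dominated a given user's final representation.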
Citations: 0
A Reference-Dependent Model for Web Search Evaluation: Understanding and Measuring the Experience of Boundedly Rational Users
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583551
Nuo Chen, Jiqun Liu, Tetsuya Sakai
Previous research demonstrates that users’ actions in search interaction are associated with relative gains and losses to reference points, known as the reference dependence effect. However, this widely confirmed effect is not represented in most user models underpinning existing search evaluation metrics. In this study, we propose a new evaluation metric framework, namely the Reference Dependent Metric (ReDeM), for assessing query-level search by incorporating the effect of reference dependence into the modelling of user search behavior. To test the overall effectiveness of the proposed framework, (1) we evaluate the performance, in terms of correlation with user satisfaction, of ReDeMs built upon different reference points against that of the widely-used metrics on three search datasets; (2) we examine the performance of ReDeMs under different task states, like task difficulty and task urgency; and (3) we analyze the statistical reliability of ReDeMs in terms of discriminative power. Experimental results indicate that: (1) ReDeMs integrated with a proper reference point achieve better correlations with user satisfaction than most of the existing metrics, like Discounted Cumulative Gain (DCG) and Rank-Biased Precision (RBP), even though their parameters have already been well-tuned; (2) ReDeMs reach relatively better performance compared to existing metrics when the task triggers a high-level cognitive load; (3) the discriminative power of ReDeMs is far stronger than Expected Reciprocal Rank (ERR), slightly stronger than Precision and similar to DCG, RBP and INST.
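One way to picture a reference-dependent metric is a DCG-style sum where each result's gain is judged relative to a reference point through an asymmetric value function in which losses loom larger than gains. The value function, loss-aversion coefficient, and rank discount below are illustrative assumptions, not the paper's exact ReDeM formulation:

```python
import math

# Illustrative sketch of a reference-dependent evaluation metric (not the
# paper's exact ReDeM): per-rank relevance gains are evaluated relative to a
# reference point with a prospect-theory-style value function, then
# discounted by rank as in DCG.

LOSS_AVERSION = 2.25   # classic prospect-theory lambda, used for illustration

def value(x):
    """Asymmetric value function: losses are weighted more than gains."""
    return x if x >= 0 else LOSS_AVERSION * x

def redem_score(gains, reference):
    return sum(value(g - reference) / math.log2(rank + 2)
               for rank, g in enumerate(gains))

ranked_gains = [3, 2, 0, 1]        # graded relevance of the ranked results
print(redem_score(ranked_gains, reference=1.0))
```

With a higher reference point, the same ranking scores lower, because mediocre results now register as losses rather than small gains.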
Citations: 1
Automated WebAssembly Function Purpose Identification With Semantics-Aware Analysis
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583235
Alan Romano, Weihang Wang
WebAssembly is a recent web standard built for better performance in web applications. The standard defines a binary code format to use as a compilation target for a variety of languages, such as C, C++, and Rust. The standard also defines a text representation for readability; even so, WebAssembly modules are difficult for human readers to interpret, regardless of their experience level. This makes it difficult to understand and maintain any existing WebAssembly code. As a result, third-party WebAssembly modules need to be implicitly trusted by developers, as verifying the functionality themselves may not be feasible. To this end, we construct WASPur, a tool to automatically identify the purposes of WebAssembly functions. To build this tool, we first construct an extensive collection of WebAssembly samples that represent the state of WebAssembly. Second, we analyze the dataset and identify the diverse use cases of the collected WebAssembly modules. We leverage the dataset of WebAssembly modules to construct semantics-aware intermediate representations (IR) of the functions in the modules. We encode the function IR for use in a machine learning classifier, and we find that this classifier can predict the similarity of a given function against known named functions with an accuracy rate of 88.07%.
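As a rough intuition for purpose identification by similarity to known named functions, consider matching an unnamed function against a library of named ones via a crude bag-of-opcodes vector. WASPur's semantics-aware IR and learned classifier are far more sophisticated; the feature choice and the tiny opcode sequences below are invented for illustration:

```python
import math
from collections import Counter

# Toy sketch (not WASPur itself): each Wasm function is reduced to a
# bag-of-opcodes vector, and an unnamed function is matched to the known
# named function with the highest cosine similarity.

def opcode_vector(opcodes):
    return Counter(opcodes)

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

known = {
    "memcpy": opcode_vector(["local.get", "local.get", "i32.load8_u",
                             "i32.store8", "br_if"]),
    "add":    opcode_vector(["local.get", "local.get", "i32.add"]),
}

unknown = opcode_vector(["local.get", "local.get", "i32.load8_u",
                         "i32.store8", "loop", "br_if"])
best = max(known, key=lambda name: cosine(known[name], unknown))
print(best)   # the memory-copying function is the closest match here
```

This is exactly the kind of surface matching a semantics-aware IR improves upon, since two functions with similar opcode histograms can still do very different things.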
Citations: 0
Learning to Simulate Crowd Trajectories with Graph Networks
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583858
Hongzhi Shi, Quanming Yao, Yong Li
Crowd stampede disasters often occur, such as recent ones in Indonesia and South Korea, and crowd simulation is particularly important to prevent and avoid such disasters. Most traditional models for crowd simulation, such as the social force model, are hand-designed formulas, which use Newtonian forces to model the interactions between pedestrians. However, such formula-based methods may not be flexible enough to capture the complex interaction patterns in diverse crowd scenarios. Recently, due to the development of the Internet, a large amount of pedestrian movement data has been collected, allowing us to study crowd simulation in a data-driven way. Inspired by the recent success of graph network-based simulation (GNS), we propose a novel method under the framework of GNS, which simulates the crowd in a data-driven way. Specifically, we propose to model the interactions among people and the environment using a heterogeneous graph. Then, we design a heterogeneous gated message-passing network to learn the interaction pattern that depends on the visual field. Finally, randomness is introduced by modeling the context’s different influences on pedestrians with a probabilistic emission function. Extensive experiments on synthetic data, controlled-environment data, and real-world data show that our model can generally capture the three main factors that contribute to crowd trajectories while adapting to the data characteristics, going beyond the strong assumptions of formula-based methods. As a result, the proposed method outperforms existing methods by a large margin.
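A single message-passing step on a pedestrian interaction graph, with edges gated by a visual radius, can be sketched as follows. The repulsive message form, radius, and step size are illustrative assumptions; the paper's network is a learned, heterogeneous, gated variant of this idea:

```python
import numpy as np

# Toy single message-passing step on a pedestrian graph (an illustrative
# sketch of the GNS idea, not the paper's learned network): each pedestrian
# aggregates displacement "messages" from neighbours inside its visual
# radius and nudges its velocity away from them.

positions = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
velocities = np.zeros_like(positions)
VISUAL_RADIUS = 2.0
STEP = 0.1

def message_passing_step(pos, vel):
    new_vel = vel.copy()
    for i in range(len(pos)):
        msgs = []
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff)
            if dist < VISUAL_RADIUS:            # edge exists only in view
                msgs.append(diff / dist ** 2)   # simple repulsive message
        if msgs:
            new_vel[i] = vel[i] + STEP * np.mean(msgs, axis=0)
    return new_vel

velocities = message_passing_step(positions, velocities)
print(velocities)   # the two nearby pedestrians push apart; the far one is unaffected
```

In a learned GNS, the hand-written repulsive message is replaced by a neural message function trained on trajectory data, which is what gives the approach its flexibility over formula-based models.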
Citations: 4
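The social force model this paper moves beyond drives each pedestrian with a goal-directed relaxation term plus pairwise Newtonian-style repulsion. A minimal sketch of one simulation step, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, tau=0.5, v0=1.3,
                      A=2.0, B=0.3, radius=0.4):
    """One explicit-Euler step of a minimal social force model.

    pos, vel: (N, 2) arrays; goals: (N, 2) target points.
    tau, v0, A, B, radius are illustrative constants, not paper values.
    """
    n = len(pos)
    # Driving force: relax velocity toward the desired speed v0,
    # pointing from the current position to the goal.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    desired = v0 * to_goal / np.maximum(dist, 1e-9)
    force = (desired - vel) / tau
    # Pairwise repulsion: exponential in the gap between body surfaces.
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            d = np.linalg.norm(diff)
            if d < 1e-9:
                continue
            force[i] += A * np.exp((2 * radius - d) / B) * diff / d
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```

Hand-designed terms like these encode the strong assumptions the paper contrasts with its learned heterogeneous gated message-passing network.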
The More Things Change, the More They Stay the Same: Integrity of Modern JavaScript
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583395
J. So, M. Ferdman, Nick Nikiforakis
The modern web is a collection of remote resources that are identified by their location and composed of interleaving networks of trust. Supply chain attacks compromise the users of a target domain by leveraging its often large set of trusted third parties who provide resources such as JavaScript. The ubiquity of JavaScript, paired with its ability to execute arbitrary code on client machines, makes this particular web resource an ideal vector for supply chain attacks. Currently, there exists no robust method for users browsing the web to verify that the script content they receive from a third party is the expected content. In this paper, we present key insights to inform the design of robust integrity mechanisms, derived from our large-scale analyses of the 6M scripts we collected while crawling 44K domains every day for 77 days. We find that scripts that frequently change should be considered first-class citizens in the modern web ecosystem, and that the ways in which scripts change remain constant over time. Furthermore, we present analyses on the use of strict integrity verification (e.g., Subresource Integrity) at the granularity of the script providers themselves, offering a more complete perspective and demonstrating that the use of strict integrity alone cannot provide satisfactory security guarantees. We conclude that it is infeasible for a client to distinguish benign changes from malicious ones without additional, external knowledge, motivating the need for a new protocol to provide clients the necessary context to assess the potential ramifications of script changes.
{"title":"The More Things Change, the More They Stay the Same: Integrity of Modern JavaScript","authors":"J. So, M. Ferdman, Nick Nikiforakis","doi":"10.1145/3543507.3583395","DOIUrl":"https://doi.org/10.1145/3543507.3583395","url":null,"abstract":"The modern web is a collection of remote resources that are identified by their location and composed of interleaving networks of trust. Supply chain attacks compromise the users of a target domain by leveraging its often large set of trusted third parties who provide resources such as JavaScript. The ubiquity of JavaScript, paired with its ability to execute arbitrary code on client machines, makes this particular web resource an ideal vector for supply chain attacks. Currently, there exists no robust method for users browsing the web to verify that the script content they receive from a third party is the expected content. In this paper, we present key insights to inform the design of robust integrity mechanisms, derived from our large-scale analyses of the 6M scripts we collected while crawling 44K domains every day for 77 days. We find that scripts that frequently change should be considered first-class citizens in the modern web ecosystem, and that the ways in which scripts change remain constant over time. Furthermore, we present analyses on the use of strict integrity verification (e.g., Subresource Integrity) at the granularity of the script providers themselves, offering a more complete perspective and demonstrating that the use of strict integrity alone cannot provide satisfactory security guarantees. 
We conclude that it is infeasible for a client to distinguish benign changes from malicious ones without additional, external knowledge, motivating the need for a new protocol to provide clients the necessary context to assess the potential ramifications of script changes.","PeriodicalId":296351,"journal":{"name":"Proceedings of the ACM Web Conference 2023","volume":"359 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122757614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
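The strict integrity verification the paper analyzes, Subresource Integrity (SRI), has a page pin a cryptographic hash of the exact script bytes it expects, so any change to the fetched content fails validation. A small sketch of how an SRI token is computed; the CDN URL in the comment is a placeholder:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes, algo: str = "sha384") -> str:
    """Return an SRI token of the form '<algo>-<base64 digest>'."""
    digest = hashlib.new(algo, script_bytes).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

script = b"console.log('hello');"
token = sri_hash(script)
# The page author embeds the token in the script tag, e.g.:
#   <script src="https://cdn.example.com/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
# The browser hashes the fetched bytes and refuses to execute the
# script if the digest no longer matches the integrity attribute.
```

This rigidity is exactly the paper's point: any legitimate update to the script also breaks the pinned hash, so strict integrity alone cannot distinguish benign changes from malicious ones.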
Coherent Topic Modeling for Creative Multimodal Data on Social Media
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3587433
Junaid Rashid, Jungeun Kim, Usman Naseem
The creative web is all about combining different types of media to create a unique and engaging online experience. Multimodal data, such as text and images, is a key component of the creative web. Social media posts that incorporate both text descriptions and images offer a wealth of information and context. Text in social media posts typically relates to one topic, while images often convey information about multiple topics due to the richness of visual content. Despite this potential, many existing multimodal topic models do not take these criteria into account, resulting in poor-quality topics. Therefore, we propose Coherent Topic Modeling for Multimodal Data (CTM-MM), which takes into account that text in social media posts typically relates to one topic, while images can contain information about multiple topics. Our experimental results show that CTM-MM outperforms traditional multimodal topic models in terms of classification and topic coherence.
{"title":"Coherent Topic Modeling for Creative Multimodal Data on Social Media","authors":"Junaid Rashid, Jungeun Kim, Usman Naseem","doi":"10.1145/3543507.3587433","DOIUrl":"https://doi.org/10.1145/3543507.3587433","url":null,"abstract":"The creative web is all about combining different types of media to create a unique and engaging online experience. Multimodal data, such as text and images, is a key component in the creative web. Social media posts that incorporate both text descriptions and images offer a wealth of information and context. Text in social media posts typically relates to one topic, while images often convey information about multiple topics due to the richness of visual content. Despite this potential, many existing multimodal topic models do not take these criteria into account, resulting in poor quality topics being generated. Therefore, we proposed a Coherent Topic modeling for Multimodal Data (CTM-MM), which takes into account that text in social media posts typically relates to one topic, while images can contain information about multiple topics. Our experimental results show that CTM-MM outperforms traditional multimodal topic models in terms of classification and topic coherence.","PeriodicalId":296351,"journal":{"name":"Proceedings of the ACM Web Conference 2023","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127731375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
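Topic coherence, the evaluation criterion named in this abstract, is commonly scored with normalized pointwise mutual information (NPMI) over word co-occurrence in documents. A toy sketch of that family of measures, not necessarily the exact metric used in the paper:

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, docs):
    """Average NPMI over word pairs of a topic, using boolean
    document co-occurrence counts. A common coherence measure,
    assumed here for illustration rather than taken from the paper."""
    n = len(docs)

    def p(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in d for w in words) for d in docs) / n

    scores = []
    for wi, wj in combinations(topic_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0:
            scores.append(-1.0)  # never co-occur: minimum NPMI
            continue
        pmi = math.log(p_ij / (p(wi) * p(wj)))
        scores.append(pmi / -math.log(p_ij))
    return sum(scores) / len(scores)

# Toy corpus: each document is its set of words.
docs = [{"dog", "cat", "pet"}, {"dog", "pet"}, {"stock", "market"},
        {"market", "trade"}, {"dog", "cat"}]
```

A topic whose words co-occur often (e.g. "dog", "cat", "pet") scores higher than one mixing unrelated words, which is the sense in which CTM-MM's topics are claimed to be more coherent.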
CaML: Carbon Footprinting of Household Products with Zero-Shot Semantic Text Similarity
Pub Date : 2023-04-30 DOI: 10.1145/3543507.3583882
Bharathan Balaji, Venkata Sai Gargeya Vunnava, G. Guest, Jared Kramer
Products contribute to carbon emissions in each phase of their life cycle, from manufacturing to disposal. Estimating the embodied carbon in products is a key step towards understanding their impact, and undertaking mitigation actions. Precise carbon attribution is challenging at scale, requiring both domain expertise and granular supply chain data. As a first-order approximation, standard reports use Economic Input-Output based Life Cycle Assessment (EIO-LCA), which estimates carbon emissions per dollar at an industry sector level using transactions between different parts of the economy. EIO-LCA models map products to an industry sector, and use the corresponding carbon per dollar estimates to calculate the embodied carbon footprint of a product. An LCA expert needs to map each product to one of upwards of 1000 potential industry sectors. To reduce the annotation burden, the standard practice is to group products by categories, and map categories to their corresponding industry sector. We present CaML, an algorithm to automate EIO-LCA using semantic text similarity matching by leveraging the text descriptions of the product and the industry sector. CaML uses a pre-trained sentence transformer model to rank the top-5 matches, and asks a human to check if any of them are a good match. We annotated 40K products with non-experts. Our results reveal that pre-defined product categories are heterogeneous with respect to EIO-LCA industry sectors, and lead to a large mean absolute percentage error (MAPE) of 51% in kgCO2e/$. CaML outperforms the previous manually intensive method, yielding a MAPE of 22% with no domain labels (zero-shot). We compared annotations of a small sample of 210 products with LCA experts, and found that CaML accuracy is comparable to that of annotations by non-experts.
{"title":"CaML: Carbon Footprinting of Household Products with Zero-Shot Semantic Text Similarity","authors":"Bharathan Balaji, Venkata Sai Gargeya Vunnava, G. Guest, Jared Kramer","doi":"10.1145/3543507.3583882","DOIUrl":"https://doi.org/10.1145/3543507.3583882","url":null,"abstract":"Products contribute to carbon emissions in each phase of their life cycle, from manufacturing to disposal. Estimating the embodied carbon in products is a key step towards understanding their impact, and undertaking mitigation actions. Precise carbon attribution is challenging at scale, requiring both domain expertise and granular supply chain data. As a first-order approximation, standard reports use Economic Input-Output based Life Cycle Assessment (EIO-LCA) which estimates carbon emissions per dollar at an industry sector level using transactions between different parts of the economy. EIO-LCA models map products to an industry sector, and uses the corresponding carbon per dollar estimates to calculate the embodied carbon footprint of a product. An LCA expert needs to map each product to one of upwards of 1000 potential industry sectors. To reduce the annotation burden, the standard practice is to group products by categories, and map categories to their corresponding industry sector. We present CaML, an algorithm to automate EIO-LCA using semantic text similarity matching by leveraging the text descriptions of the product and the industry sector. CaML uses a pre-trained sentence transformer model to rank the top-5 matches, and asks a human to check if any of them are a good match. We annotated 40K products with non-experts. Our results reveal that pre-defined product categories are heterogeneous with respect to EIO-LCA industry sectors, and lead to a large mean absolute percentage error (MAPE) of 51% in kgCO2e/$. CaML outperforms the previous manually intensive method, yielding a MAPE of 22% with no domain labels (zero-shot). 
We compared annotations of a small sample of 210 products with LCA experts, and find that CaML accuracy is comparable to that of annotations by non-experts.","PeriodicalId":296351,"journal":{"name":"Proceedings of the ACM Web Conference 2023","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133692676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
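CaML's matching step ranks industry-sector descriptions against a product description with a pre-trained sentence transformer, then multiplies the matched sector's carbon-per-dollar intensity by the product's price. The sketch below substitutes a bag-of-words cosine for the transformer so it runs without a model; the sector names and the carbon intensity value are invented for illustration, not real EIO-LCA data:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_sectors(product_desc, sectors, k=5):
    """Rank sectors by text similarity to the product description.
    Stand-in for CaML's sentence-transformer ranking."""
    q = Counter(product_desc.lower().split())
    scored = [(cosine(q, Counter(desc.lower().split())), name)
              for name, desc in sectors.items()]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# Hypothetical sector descriptions and carbon intensity (kgCO2e/$).
sectors = {
    "soap and cleaning compounds": "soap detergent cleaning compound manufacturing",
    "plastics products": "plastic bottle container product manufacturing",
    "paper mills": "paper pulp mill manufacturing",
}
intensity = {"soap and cleaning compounds": 0.35}

best = top_k_sectors("liquid dish soap detergent", sectors, k=1)[0]
footprint = 4.99 * intensity[best]  # price in dollars times kgCO2e/$
```

In the paper's workflow a human then checks whether any of the top-5 matches is correct, which is where the reported 22% MAPE comes from.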