
Latest Publications in Proc. VLDB Endow.

Relational Query Synthesis ⋈ Decision Tree Learning
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626306
Aaditya Naik, Aalok Thakkar, Adam Stein, R. Alur, Mayur Naik
We study the problem of synthesizing a core fragment of relational queries called select-project-join (SPJ) queries from input-output examples. Search-based synthesis techniques are suited to synthesizing projections and joins by navigating the network of relational tables but require additional supervision for synthesizing comparison predicates. On the other hand, decision tree learning techniques are suited to synthesizing comparison predicates when the input database can be summarized as a single labelled relational table. In this paper, we adapt and interleave methods from the domains of relational query synthesis and decision tree learning, and present an end-to-end framework for synthesizing relational queries with categorical and numerical comparison predicates. Our technique guarantees the completeness of the synthesis procedure and strongly encourages minimality of the synthesized program. We present Libra, an implementation of this technique and evaluate it on a benchmark suite of 1,475 instances of queries over 159 databases with multiple tables. Libra solves 1,361 of these instances in an average of 59 seconds per instance. It outperforms state-of-the-art program synthesis tools Scythe and PatSQL in terms of both the running time and the quality of the synthesized programs.
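The interleaving described above can be illustrated with a toy sketch: enumerate SPJ programs whose comparison constants are drawn from the data (as a decision-tree split would be), and return the first program consistent with the input-output example. This is a hypothetical illustration, not the Libra implementation; all table and column names are invented.

```python
# Toy sketch of example-driven SPJ synthesis: join two tables, search
# for a comparison-predicate threshold, and accept the first program
# whose output matches the given example (smallest constant first, to
# encourage a minimal program).
from itertools import product

employees = [  # (name, dept_id, salary)
    ("ann", 1, 90), ("bob", 1, 50), ("cy", 2, 70),
]
depts = [(1, "eng"), (2, "ops")]  # (dept_id, dept_name)

expected = {("ann", "eng"), ("cy", "ops")}  # the input-output example

def run_spj(threshold):
    """Join on dept_id, select salary > threshold, project (name, dept)."""
    joined = [(n, d, s, dn) for (n, d, s), (di, dn) in
              product(employees, depts) if d == di]
    return {(n, dn) for (n, d, s, dn) in joined if s > threshold}

# Candidate constants come from the data, as a decision-tree split would.
candidates = sorted({s for (_, _, s) in employees})
solution = next(t for t in candidates if run_spj(t) == expected)
print(solution)  # 50, i.e. SELECT name, dept_name ... WHERE salary > 50
```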
Proc. VLDB Endow., pp. 250-263
Citations: 0
Query Refinement for Diversity Constraint Satisfaction
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626295
Jinyang Li, Y. Moskovitch, Julia Stoyanovich, H. V. Jagadish
Diversity, group representation, and similar needs often apply to query results, which in turn require constraints on the sizes of various subgroups in the result set. Traditional relational queries only specify conditions as part of the query predicate(s), and do not support such restrictions on the output. In this paper, we study the problem of modifying queries to have the result satisfy constraints on the sizes of multiple subgroups in it. This problem, in the worst case, cannot be solved in polynomial time. Yet, with the help of provenance annotation, we are able to develop a query refinement method that works quite efficiently, as we demonstrate through extensive experiments.
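A minimal sketch of the refinement idea, under the simplifying assumption that refinement means relaxing a single numeric cutoff until every subgroup reaches a required size (the paper's provenance-based method handles general query modifications; all names below are illustrative):

```python
# Hypothetical sketch: lower a score cutoff step by step until the
# result contains at least k rows from every group.
rows = [  # (score, group)
    (95, "A"), (90, "A"), (88, "B"), (80, "B"), (75, "B"), (70, "A"),
]
k = 2  # minimum size required for each group in the result

def refine(rows, k):
    """Try cutoffs from strictest to loosest; return the first one
    whose result satisfies all subgroup-size constraints."""
    groups = {g for _, g in rows}
    for cutoff in sorted({s for s, _ in rows}, reverse=True):
        result = [(s, g) for s, g in rows if s >= cutoff]
        counts = {}
        for _, g in result:
            counts[g] = counts.get(g, 0) + 1
        if all(counts.get(g, 0) >= k for g in groups):
            return cutoff, result
    return None, []

cutoff, result = refine(rows, k)
print(cutoff)  # 80: the loosest predicate change that yields 2 As and 2 Bs
```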
Proc. VLDB Endow., pp. 106-118
Citations: 0
VeLP: Vehicle Loading Plan Learning from Human Behavior in Nationwide Logistics System
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626305
Sijing Duan, Feng Lyu, Xin Zhu, Yi Ding, Haotian Wang, Desheng Zhang, Xue Liu, Yaoxue Zhang, Ju Ren
For a nationwide logistics transportation system, it is critical to make vehicle loading plans (i.e., given many packages, deciding vehicle types and numbers) at each sorting and distribution center. In many logistics companies this task is currently completed by dispatchers at each center and imposes a heavy workload on them. Existing works formulate this issue as a cargo loading problem and solve it with combinatorial optimization methods. However, such methods fail in some real-world nationwide applications because accurate cargo volume information is unavailable and it is hard to design effective models under complicated impact factors and temporal correlations. In this paper, we explore a new opportunity to utilize large-scale route and human behavior data (i.e., dispatchers' decision processes when planning vehicles) to generate vehicle loading plans. Specifically, we collect a five-month nationwide operational dataset from JD Logistics in China and comprehensively analyze dispatcher behavior. Based on these data-driven insights, we design a Vehicle Loading Plan learning model, named VeLP, which consists of a pattern mining module and a deep temporal cross neural network that learn human behavior on regular and irregular routes, respectively. Extensive experiments demonstrate the superiority of VeLP, which improves performance by 35.8% and 50% on trunk and branch routes over the baselines, respectively. Besides, we deployed VeLP in JDL and applied it to about 400 routes, reducing plan-creation time by approximately 20%. It saves significant human effort and improves the operational efficiency of the logistics company.
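The pattern-mining intuition for regular routes can be sketched simply: when dispatchers' past plans for a route are highly repetitive, replaying the most frequent historical plan is a strong baseline. This is an illustrative guess at the module's spirit, not VeLP's actual design (irregular routes go to a learned temporal model instead); the route names and plan encodings are invented.

```python
# Hypothetical pattern-mining sketch: per route, pick the historically
# most frequent vehicle plan and its support, and replay it only when
# the route's behavior is regular enough (high support).
from collections import Counter

history = {  # route -> list of past (vehicle_type, vehicle_count) plans
    "beijing-shanghai": [("9.6m", 2), ("9.6m", 2), ("16.5m", 1), ("9.6m", 2)],
    "wuhan-changsha": [("7.6m", 1), ("7.6m", 1)],
}

def mine_plan(route):
    plans = Counter(history[route])
    plan, freq = plans.most_common(1)[0]
    support = freq / len(history[route])  # fraction of days with this plan
    return plan, support

plan, support = mine_plan("beijing-shanghai")
print(plan, support)  # ('9.6m', 2) 0.75
```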
Proc. VLDB Endow., pp. 241-249
Citations: 0
Cryptographically Secure Private Record Linkage Using Locality-Sensitive Hashing
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626293
Ruidi Wei, F. Kerschbaum
Private record linkage (PRL) is the problem of identifying pairs of records that approximately match across datasets in a secure, privacy-preserving manner. Two-party PRL specifically allows each of the parties to obtain records from the other party, but only those that match one of its own. The privacy goal is that no information about the datasets should be released beyond the matching records. A fundamental challenge is not to leak information while at the same time not comparing all pairs of records. In plaintext record linkage this is done using a blocking strategy, e.g., locality-sensitive hashing. One recent approach by He et al. (ACM CCS 2017) uses locality-sensitive hashing and then releases a provably differentially private representation of the hash bins. However, differential privacy still leaks some, albeit provably bounded, information and does not protect against attacks such as property inference attacks. Another recent approach by Khurram and Kerschbaum (IEEE ICDE 2020) uses locality-preserving hashing and provides cryptographic security, i.e., it releases no information except the output. However, locality-preserving hash functions are much harder to construct than locality-sensitive hash functions, so the accuracy of this approach is limited, particularly on larger datasets. In this paper, we address the open problem of providing cryptographic security for PRL while using locality-sensitive hash functions. Using recent results in oblivious algorithms, we design a new cryptographically secure PRL with locality-sensitive hash functions. Our prototypical implementation can match 40,000 records in the British National Library/Toronto Public Library and the North Carolina Voter Registry datasets with 99.3% and 99.9% accuracy, respectively, in less than an hour, which is more than an order of magnitude faster than Khurram and Kerschbaum's work, at higher accuracy.
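The plaintext blocking step the abstract builds on can be sketched as MinHash banding: records whose signatures collide in at least one band become candidate pairs, so only those pairs are compared. This is the insecure building block only; the paper's contribution is running LSH inside a secure protocol. All parameters and names here are illustrative.

```python
# Minimal MinHash-banding LSH sketch for blocking in record linkage:
# similar strings tend to share a band of their MinHash signature and
# therefore land in the same bucket.
import hashlib

def shingles(name, q=2):
    """Character q-grams of a string."""
    return {name[i:i + q] for i in range(len(name) - q + 1)}

def minhash(tokens, n=8):
    """n seeded min-hashes of a token set (deterministic via SHA-1)."""
    return [min(int(hashlib.sha1(f"{seed}:{t}".encode()).hexdigest(), 16)
                for t in tokens)
            for seed in range(n)]

def lsh_buckets(records, bands=4, rows=2):
    """Hash each record's signature band-by-band into buckets."""
    buckets = {}
    for rid, name in records:
        sig = minhash(shingles(name), n=bands * rows)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, set()).add(rid)
    return buckets

party_a = [(1, "jonathan smith"), (2, "maria garcia")]
party_b = [(3, "jonathon smith"), (4, "wei chen")]
buckets = lsh_buckets(party_a + party_b)
# Candidate pairs: any two ids sharing a bucket (likely includes (1, 3)).
pairs = {(x, y) for bucket in buckets.values() if len(bucket) > 1
         for x in bucket for y in bucket if x < y}
print(pairs)
```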
Proc. VLDB Endow., pp. 79-91
Citations: 0
Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626300
Xueyi Wu, Yuanyuan Xu, Wenjie Zhang, Ying Zhang
Bipartite graph embedding (BGE), a fundamental task in bipartite network analysis, maps each node to a compact low-dimensional vector that preserves its intrinsic properties. Existing solutions to BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generate higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to expensive message passing or complex modelling choices. Hence, existing solutions struggle to achieve both embedding quality and model scalability. Motivated by this, we propose a novel graph neural network named AnchorGNN based on a global-local learning framework, which can generate high-quality BGE and scale to billion-scale bipartite graphs. Concretely, AnchorGNN leverages a novel anchor-based message-passing schema for global learning, which enables global knowledge to be incorporated when generating node embeddings. Meanwhile, AnchorGNN offers efficient one-hop local structure modelling using maximum likelihood estimation for bipartite graphs, with rational analysis, avoiding the construction of a large adjacency matrix. Global information and local structure are integrated to generate distinguishable node embeddings. Extensive experiments demonstrate that AnchorGNN outperforms the best competitor by up to 36% in accuracy and achieves up to 28x speed-up against the only metric-based baseline on billion-scale bipartite graphs.
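The general idea behind anchor-based message passing can be sketched without any adjacency matrix: every node soft-assigns itself to a few anchors, anchors summarize their assigned nodes, and nodes pull the summaries back, so the cost scales with (#nodes x #anchors) rather than #edges. This is an illustration of the generic technique, not the AnchorGNN architecture; all numbers are toy values.

```python
# Pure-Python sketch of anchor-based global message passing on tiny
# 2-dimensional node features with 2 anchors.
import math

nodes = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
anchors = [[1.0, 0.0], [0.0, 1.0]]  # stand-ins for learnable anchors
DIM = 2

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Soft-assign every node to every anchor (softmax over dot-product scores).
weights = []
for x in nodes:
    scores = [math.exp(dot(x, a)) for a in anchors]
    z = sum(scores)
    weights.append([s / z for s in scores])

# Global step 1: anchors summarize the nodes assigned to them.
anchor_msgs = [[sum(weights[i][k] * nodes[i][d] for i in range(len(nodes)))
                for d in range(DIM)] for k in range(len(anchors))]
# Global step 2: each node pulls the anchor summaries back.
embeddings = [[sum(weights[i][k] * anchor_msgs[k][d]
                   for k in range(len(anchors)))
               for d in range(DIM)] for i in range(len(nodes))]
print(len(embeddings), len(embeddings[0]))  # 4 2
```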
Proc. VLDB Endow., pp. 175-183
Citations: 0
Utility-aware Payment Channel Network Rebalance
Pub Date : 2023-10-01 DOI: 10.14778/3626292.3626301
Wangze Ni, Pengze Chen, Lei Chen, Peng Cheng, Chen Zhang, Xuemin Lin
The payment channel network (PCN) is a promising solution to increase the throughput of blockchains. However, unidirectional transactions can deplete a user's deposits in a payment channel (PC), reducing the success ratio of transactions (SRoT). To address this depletion issue, rebalance protocols are used to shift tokens from well-deposited PCs to under-deposited PCs. To improve SRoT, it is beneficial to increase the balance of a PC with a lower balance and a higher weight (i.e., more transaction executions rely on the PC). In this paper, we define the utility of a transaction and the utility-aware rebalance (UAR) problem. The utility of a transaction is proportional to the weight of the PC and the amount of the transaction, and inversely proportional to the balance of the receiver. To maximize the effect of improving SRoT, UAR aims to find a set of transactions with maximized utilities, satisfying the budget and conservation constraints. The budget constraint limits the number of tokens shifted in a PC. The conservation constraint requires that the number of tokens each user sends equals the number of tokens received. We prove that UAR is NP-hard and cannot be approximately solved with a constant ratio. Thus, we propose two heuristic algorithms, namely Circuit Greedy and UAR_DC. Extensive experiments show that our approaches outperform the existing approach by at least 3.16 times in terms of utilities.
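The utility model above (proportional to channel weight and amount, inversely proportional to the receiver's balance) can be sketched with a naive greedy selection under the budget constraint. Note this sketch is not the paper's Circuit Greedy or UAR_DC algorithm, and it omits the conservation constraint for brevity; all channel names and numbers are invented.

```python
# Naive greedy sketch: rank candidate rebalance transactions by the
# utility defined in the abstract and take them while the sender-side
# budget allows.
def utility(weight, amount, receiver_balance):
    """Higher weight and amount raise utility; a receiver channel that
    already holds many tokens needs rebalancing less."""
    return weight * amount / receiver_balance

# (sender_pc, receiver_pc, amount, weight_of_receiver_pc, receiver_balance)
txs = [
    ("pc1", "pc2", 10, 5, 2),   # utility 25.0
    ("pc1", "pc3", 10, 3, 10),  # utility 3.0
    ("pc4", "pc2", 5, 5, 2),    # utility 12.5
]
budget = {"pc1": 10, "pc4": 5}  # max tokens each sender PC may shift

chosen, spent = [], {}
for s, r, amt, w, bal in sorted(txs, key=lambda t: -utility(t[3], t[2], t[4])):
    if spent.get(s, 0) + amt <= budget.get(s, 0):
        chosen.append((s, r, amt))
        spent[s] = spent.get(s, 0) + amt
print(chosen)  # [('pc1', 'pc2', 10), ('pc4', 'pc2', 5)]
```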
Proc. VLDB Endow., pp. 184-196
Citations: 0
GraphOS: Towards Oblivious Graph Processing
Pub Date : 2023-09-01 DOI: 10.14778/3625054.3625067
Javad Ghareh Chamani, I. Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou, R. Jalili
We propose GraphOS, a system that allows a client that owns a graph database to outsource it to an untrusted server for storage and querying. It relies on doubly-oblivious primitives and trusted hardware to achieve a very strong privacy and efficiency notion which we call oblivious graph processing: the server learns nothing besides the number of graph vertexes and edges, and for each query its type and response size. At a technical level, GraphOS stores the graph on a doubly-oblivious data structure, so that all vertex/edge accesses are indistinguishable. For this purpose, we propose Omix++, a novel doubly-oblivious map that outperforms the previous state of the art by up to 34×, and may be of independent interest. Moreover, to avoid any leakage from CPU instruction-fetching during query evaluation, we propose algorithms for four fundamental graph queries (BFS/DFS traversal, minimum spanning tree, and single-source shortest paths) that have a fixed execution trace, i.e., the sequence of executed operations is independent of the input. By combining these techniques, we eliminate all information that a hardware adversary observing the memory access pattern within the protected enclave can infer. We benchmarked GraphOS against the best existing solution, based on an oblivious relational DBMS (translating graph queries to relational operators). GraphOS is not only significantly more performant (by up to two orders of magnitude for our tested graphs) but it eliminates leakage related to the graph topology that is practically inherent when a relational DBMS is used unless all operations are "padded" to the worst case.
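A fixed-execution-trace BFS can be sketched by replacing the usual data-dependent frontier queue with a fixed number of full edge scans (Bellman-Ford style), so the sequence of operations depends only on |V| and |E|. This is only an illustration of the trace-fixing idea, not GraphOS's algorithm, and it does not hide memory addresses the way the paper's doubly-oblivious structures do.

```python
# Sketch of a BFS whose operation sequence depends only on the graph's
# size: always n-1 rounds, each scanning every (undirected) edge, with
# the same relax performed for every edge in every round.
def fixed_trace_bfs(n, edges, src):
    INF = n  # placeholder larger than any real hop distance
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):          # round count fixed by n, not by input
        for u, v in edges:          # scan order fixed by the edge list
            du, dv = dist[u], dist[v]
            dist[v] = min(dv, du + 1)   # relax both directions every time
            dist[u] = min(du, dv + 1)
    return dist

edges = [(0, 1), (1, 2), (0, 3)]
print(fixed_trace_bfs(4, edges, 0))  # [0, 1, 2, 1]
```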
Proc. VLDB Endow., pp. 4324-4338
Citations: 0
Doquet: Differentially Oblivious Range and Join Queries with Private Data Structures
Pub Date : 2023-09-01 DOI: 10.14778/3625054.3625055
Lina Qiu, Georgios Kellaris, N. Mamoulis, Kobbi Nissim, G. Kollios
Most cloud service providers offer limited data privacy guarantees, discouraging clients from using them for managing their sensitive data. Cloud providers may use servers with Trusted Execution Environments (TEEs) to protect outsourced data, while supporting remote querying. However, TEEs may leak access patterns and allow communication volume attacks, enabling an honest-but-curious cloud provider to learn sensitive information. Oblivious algorithms can be used to completely hide data access patterns, but their high overhead could render them impractical. To alleviate the latter, the notion of Differential Obliviousness (DO) has been recently proposed. DO applies differential privacy (DP) on access patterns while hiding the communication volume of intermediate and final results; it does so by trading some level of privacy for efficiency. We present Doquet: Differentially Oblivious Range and Join Queries with Private Data Structures, a framework for DO outsourced database systems. Doquet is the first approach that supports private data structures, indices, selection, foreign key join, many-to-many join, and their composition select-join in a realistic TEE setting, even when the accesses to the private memory can be eavesdropped on by the adversary. We prove that the algorithms in Doquet satisfy differential obliviousness. Furthermore, we implemented Doquet and tested it on a machine having a second generation of Intel SGX (TEE); the results show that Doquet offers up to an order of magnitude speedup in comparison with other fully oblivious and differentially oblivious approaches.
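The core DO idea of hiding communication volume can be sketched by padding each intermediate result with a noisy number of dummy rows, so the observed volume is a differentially private function of the true count. The mechanism below (a shifted, truncated Laplace drawn as a difference of exponentials) is a generic illustration with invented parameters, not Doquet's actual padding scheme.

```python
# Sketch of DP volume padding: the adversary sees only the padded row
# count, which is the true count plus a bounded noisy number of dummies.
import random

def dp_padded_volume(true_count, epsilon=1.0, max_shift=20):
    """Return a padded row count in [true_count, true_count + 2*max_shift].
    The noise is Laplace-distributed (difference of two exponentials),
    shifted by max_shift so padding is non-negative, then truncated;
    truncation costs a small delta in the privacy guarantee."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    padded = true_count + max_shift + int(noise)
    return max(true_count, min(padded, true_count + 2 * max_shift))

counts = [dp_padded_volume(100) for _ in range(5)]
print(counts)  # e.g. five values between 100 and 140
```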
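The volume-hiding side of differential obliviousness mentioned in the abstract can be illustrated with a toy padding step: the true result count is padded with clipped one-sided noise, so the communication volume an eavesdropper observes is noised rather than exact. This is only a sketch of the idea with assumed parameter names; it is not Doquet's mechanism, and a production DO scheme needs a carefully calibrated (typically shifted, two-sided) noise distribution:

```python
import random

def dp_padded_size(true_size, epsilon, max_pad, rng):
    """Toy volume-hiding step (NOT Doquet's mechanism): pad the result
    count with one-sided exponential noise, clipped to [0, max_pad],
    so the observed communication volume does not reveal the exact count."""
    pad = min(max_pad, int(rng.expovariate(epsilon)))
    return true_size + pad  # server sends this many (dummy-padded) tuples

rng = random.Random(7)
# Five repeated queries with the same true result size of 100 tuples:
sizes = [dp_padded_size(100, epsilon=1.0, max_pad=20, rng=rng) for _ in range(5)]
print(sizes)  # each value lies in [100, 120]; the exact count 100 is masked
```

Clipping keeps the padding overhead bounded (here at most 20 dummy tuples), which is the privacy-for-efficiency trade the abstract alludes to.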
Proc. VLDB Endow., pages 4160-4173.
Citations: 0
Single Update Sketch with Variable Counter Structure
Pub Date : 2023-09-01 DOI: 10.14778/3625054.3625065
D. Melissourgos, Haibo Wang, Shigang Chen, Chaoyi Ma, Shiping Chen
Per-flow size measurement is key to many streaming applications and management systems, particularly in high-speed networks. Performing such measurement on the data plane of a network device at line rate requires on-chip memory and computing resources that are shared with other key network functions. This leads to the need for very compact and fast data structures, called sketches, which trade off space for accuracy. Such a need also arises in other application contexts involving extremely large data sets. The goal of sketch design is two-fold: to measure flow size as accurately as possible and to do so as efficiently as possible (for low overhead and thus high processing throughput). Existing sketches can be broadly categorized into multi-update sketches and single-update sketches. The former are more accurate but carry larger overhead; the latter incur small overhead but poor accuracy. This paper proposes a Single-update Sketch with a Variable counter Structure (SSVS), a new sketch design which is several times faster than existing multi-update sketches with comparable accuracy, and several times more accurate than existing single-update sketches with comparable overhead. The new sketch design embodies several technical contributions that integrate the enabling properties of both multi-update and single-update sketches in a novel structure that effectively controls the measurement error with minimum processing overhead.
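For context, the multi-update family the abstract contrasts against can be sketched as a textbook count-min sketch, where every packet updates d counters and the estimate is the minimum over those counters. SSVS itself updates only one counter per packet and uses a variable-width counter layout that is not reproduced here; this is only the standard baseline:

```python
import hashlib

class CountMinSketch:
    """Textbook count-min sketch: d rows of w counters. Every update
    touches d counters -- the 'multi-update' cost that single-update
    sketches such as SSVS are designed to avoid."""
    def __init__(self, w=1024, d=4):
        self.w, self.d = w, d
        self.rows = [[0] * w for _ in range(d)]

    def _idx(self, flow, i):
        # One independent hash per row, derived by salting with the row index.
        h = hashlib.blake2b(f"{i}:{flow}".encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.w

    def add(self, flow, count=1):
        for i in range(self.d):  # d counter updates per packet
            self.rows[i][self._idx(flow, i)] += count

    def estimate(self, flow):
        # Min over rows: collisions only inflate counters, never deflate.
        return min(self.rows[i][self._idx(flow, i)] for i in range(self.d))

cms = CountMinSketch()
for _ in range(42):
    cms.add("10.0.0.1->10.0.0.2")
print(cms.estimate("10.0.0.1->10.0.0.2"))  # 42 (count-min only overestimates)
```

Because each row's collisions can only add to a counter, the minimum over rows is a one-sided estimator: it never undercounts a flow, which is the accuracy property bought at the price of d updates per packet.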
Proc. VLDB Endow., pages 4296-4309.
Citations: 0
ShadowAQP: Efficient Approximate Group-by and Join Query via Attribute-oriented Sample Size Allocation and Data Generation
Pub Date : 2023-09-01 DOI: 10.14778/3625054.3625059
Rong Gu, Han Li, Haipeng Dai, Wenjie Huang, Jie Xue, Meng Li, Jiaqi Zheng, Haoran Cai, Yihua Huang, Guihai Chen
Approximate query processing (AQP) is one of the key techniques for coping with big-data querying because it obtains approximate answers efficiently. To address the non-trivial sample selection and heavy sampling cost issues in AQP, we propose ShadowAQP, an efficient and accurate approach based on attribute-oriented sample size allocation and data generation. We select samples according to group-by and join attributes, and determine the sample size for each group of unique value combinations to improve query accuracy. We design a conditional variational autoencoder model with automatic table data encoding and model update strategies. To further improve accuracy and efficiency, we propose a set of extensions, including parallel multi-round sampling aggregation, data outlier-aware sampling, and dimension reduction optimization. Evaluation results on diversified datasets show that, compared with SOTA approaches, ShadowAQP achieves a 5.8× query speed improvement on average (up to 12.8×), while reducing query error by 74% on average (up to 95%).
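The attribute-oriented sample size allocation described above can be illustrated with a plain stratified-sampling sketch: rows are grouped by the group-by attribute, each group receives its own sample budget, and each group's sampled aggregate is scaled back up by group_size / sample_size. The allocation dictionary and data are hypothetical, and ShadowAQP additionally generates samples with a conditional VAE rather than drawing them from the base table:

```python
import random
from collections import defaultdict

def approx_group_sum(rows, key, value, alloc, rng):
    """Stratified AQP sketch: sample alloc[g] rows per group g, then
    scale each group's sampled sum by group_size / sample_size."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r[value])
    result = {}
    for g, vals in groups.items():
        n = min(alloc.get(g, 1), len(vals))  # per-group sample budget
        sample = rng.sample(vals, n)
        result[g] = sum(sample) * len(vals) / n  # Horvitz-Thompson-style scale-up
    return result

rng = random.Random(0)
rows = ([{"dept": "Eng", "sal": 100}] * 50
        + [{"dept": "Sales", "sal": 80}] * 20)
est = approx_group_sum(rows, "dept", "sal", {"Eng": 5, "Sales": 5}, rng)
print(est)  # {'Eng': 5000.0, 'Sales': 1600.0} -- exact here because every
            # value within a group is constant; real data gives an estimate
```

Allocating budgets per unique group value, rather than sampling the table uniformly, is what keeps small groups from being missed entirely, which is the accuracy motivation the abstract gives for attribute-oriented allocation.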
Proc. VLDB Endow., pages 4216-4229.
Citations: 0