Pub Date: 2022-11-29, DOI: 10.46586/tches.v2023.i1.538-556
Anju Alexander, Annapurna Valiveti, S. Vivek
Masking of S-boxes using lookup tables is an effective countermeasure to thwart side-channel attacks on block ciphers implemented in software. At first and second orders, Table-based Masking (TBM) schemes can be very efficient and even faster than circuit-based masking schemes. Ever since customised second-order TBM schemes were proposed, the focus has been on designing and optimising Higher-Order Table-based Masking (HO-TBM) schemes that facilitate masking at arbitrary order. One reason for this trend is that, at large orders, HO-TBM schemes are significantly slower and consume a prohibitive amount of RAM compared to circuit-based masking schemes such as bit-sliced masking, and hence efforts were targeted in this direction. However, a recent work of Valiveti and Vivek (TCHES 2021) demonstrated that the HO-TBM scheme of Coron et al. (TCHES 2018) can feasibly be implemented on memory-constrained devices with pre-processing capability and a competitive online execution time. Yet there are currently no customised third-order TBM designs that are more efficient than instantiating an HO-TBM scheme at third order. In this work, we propose a third-order TBM scheme for arbitrary S-boxes that is secure in the probing model and under composition, i.e., 3-SNI secure. It is very efficient in terms of overall running time compared to third-order instantiations of state-of-the-art HO-TBM schemes, and it also supports pre-processing. For example, the overall running time of a single execution of third-order masked AES-128 on a 32-bit ARM Cortex-M4 microcontroller is reduced by about 80%, without any overhead on the online execution time. This implies that the online execution time of the proposed scheme is approximately eight times faster than a bit-sliced masked implementation at third order, and comparable to the recent scheme of Wang et al. (TCHES 2022) that relies on the reuse of shares.
We also present the implementation results for the third-order masked PRESENT cipher. Our work suggests that there is a significant scope for tuning the performance of HO-TBM schemes at lower orders.
Title: A Faster Third-Order Masking of Lookup Tables (IACR Trans. Cryptogr. Hardw. Embed. Syst., 2023(1), pp. 538-556)
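As a concrete illustration of the table-based masking idea the abstract builds on (here at first order only, a much simpler setting than the paper's third-order construction), a minimal sketch with the PRESENT S-box: the table is re-randomized with input and output masks so the unmasked value never appears in any single intermediate variable.

```python
import secrets

# Toy 4-bit S-box (PRESENT's S-box), indexed by a 4-bit value.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox_lookup(x_masked, r_in, r_out):
    """First-order table-based masking: build a randomized table
    T[a] = S[a XOR r_in] XOR r_out, then look up the masked input.
    The unmasked value x = x_masked XOR r_in never appears."""
    table = [SBOX[a ^ r_in] ^ r_out for a in range(16)]
    return table[x_masked]          # equals S[x] XOR r_out

# Usage: the secret nibble x is held as the pair (x XOR r_in, r_in).
x = 0xA
r_in = secrets.randbelow(16)
r_out = secrets.randbelow(16)
y_masked = masked_sbox_lookup(x ^ r_in, r_in, r_out)
assert y_masked ^ r_out == SBOX[x]  # unmasking recovers S(x)
```

Higher-order variants share each value across more masks and refresh the table between uses; that is where the RAM and running-time costs discussed above come from.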
Pub Date: 2022-11-29, DOI: 10.46586/tches.v2023.i1.438-462
Danyang Zhu, Rong-Xian Zhang, Lun Ou, Jing Tian, Zhongfeng Wang
A verifiable delay function (VDF) is a function whose evaluation requires running a prescribed number of sequential steps over a group, while the result can be efficiently verified. As cryptographic primitives, VDFs have been adopted in rapidly growing applications for decentralized systems. For the security of VDFs in practical applications, it is widely agreed that the fastest implementation of the VDF evaluation, sequential squarings in a group of unknown order, should be publicly available. To this end, we propose a minimum-latency hardware implementation of the squaring in class groups through co-optimization at the algorithmic and architectural levels. First, low-latency architectures for large-number division, multiplication, and addition are devised using redundant representation. Second, we present two hardware-friendly algorithms that avoid the time-consuming divisions involved in calculations related to the extended greatest common divisor (XGCD), and design the corresponding low-latency architectures. In addition, we schedule and reuse these computation modules to achieve good resource utilization using compact instruction control. Finally, we code and synthesize the proposed design under the TSMC 28nm CMOS technology. The experimental results show that our design achieves a speedup of 3.6x over the state-of-the-art implementation of the squaring in the class group. Moreover, compared to an optimized C++ implementation on an advanced CPU, our implementation is 9.1x faster.
Title: Low-Latency Design and Implementation of the Squaring in Class Groups for Verifiable Delay Function Using Redundant Representation (IACR Trans. Cryptogr. Hardw. Embed. Syst., 2023(1), pp. 438-462)
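The evaluation the paper accelerates is plain repeated squaring; a minimal software sketch over an RSA-like modulus (rather than a class group, for simplicity) shows why the computation is inherently sequential:

```python
def vdf_evaluate(x, T, N):
    """Evaluate the VDF y = x^(2^T) mod N by T sequential squarings.
    Each squaring depends on the previous result, so the loop cannot
    be parallelized -- this dependency chain is the enforced delay."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

# With the group order unknown, the T squarings must be done one by
# one. For checking only, we shortcut with Python's modular pow:
assert vdf_evaluate(3, 10, 35) == pow(3, 2**10, 35)
```

The hardware contribution above is precisely about minimizing the latency of one iteration of this loop (one class-group squaring), since total delay is iteration latency times T.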
Pub Date: 2022-11-29, DOI: 10.46586/tches.v2023.i1.32-59
Loïc Masure, Valence Cristiani, Maxime Lecomte, François-Xavier Standaert
Over the past few years, deep-learning-based attacks have emerged as a de facto standard, thanks to their ability to break implementations of cryptographic primitives without pre-processing, even against widely used countermeasures such as hiding and masking. However, the recent works of Bronchain and Standaert at TCHES 2020 questioned the soundness of such tools when used in an uninformed setting to evaluate implementations protected with higher-order masking. Conversely, worst-case evaluations may be seen as possibly far from what a real-world adversary could do, thereby leading to overly conservative security bounds. In this paper, we propose a new threat model, which we name scheme-aware, that benefits from a trade-off between the uninformed and worst-case models. Our scheme-aware model is closer to a real-world adversary in the sense that it does not need access to the random nonces used by masking during the profiling phase, as in a worst-case model, while it does not need to learn the masking scheme, as implicitly done by an uninformed adversary. We show how to combine the power of deep learning with the prior knowledge of scheme-aware modeling. As a result, we show on simulations and on experiments with public datasets how it can sometimes reduce the profiling complexity, i.e., the number of profiling traces needed to satisfactorily train a model, by an order of magnitude compared to a fully uninformed adversary.
Title: Don't Learn What You Already Know: Scheme-Aware Modeling for Profiling Side-Channel Analysis against Masking (IACR Trans. Cryptogr. Hardw. Embed. Syst., 2023(1), pp. 32-59)
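The prior knowledge a scheme-aware evaluator injects can be sketched as the standard share-recombination step for Boolean masking: model each share's leakage separately, then combine the per-share probability vectors under XOR instead of asking the network to learn the XOR structure. The function name is hypothetical and the example assumes first-order masking over 4-bit values.

```python
def xor_convolve(p1, p2):
    """Combine probability vectors over n-bit values for two Boolean
    shares: Pr[x = s] = sum over m of Pr[share1 = m] * Pr[share2 = s XOR m].
    This convolution-under-XOR is the recombination a scheme-aware
    evaluator performs explicitly."""
    n = len(p1)
    out = [0.0] * n
    for m, pm in enumerate(p1):
        for s2, ps2 in enumerate(p2):
            out[m ^ s2] += pm * ps2
    return out

# Two noiseless share distributions: share1 = 5, share2 = 12.
p1 = [1.0 if v == 5 else 0.0 for v in range(16)]
p2 = [1.0 if v == 12 else 0.0 for v in range(16)]
px = xor_convolve(p1, p2)
assert px[5 ^ 12] == 1.0   # the secret 5 XOR 12 = 9 gets all the mass
```

In practice p1 and p2 would be the softmax outputs of per-share deep-learning models; only those per-share models need to be trained, which is the source of the profiling-complexity savings claimed above.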
Pub Date: 2022-11-29, DOI: 10.46586/tches.v2023.i1.238-276
Sonia Belaïd, Matthieu Rivain
Elliptic-curve implementations protected with state-of-the-art countermeasures against side-channel attacks might still be vulnerable to advanced attacks that recover secret information from a single leakage trace. The effectiveness of these attacks is boosted by the emergence of deep learning techniques for side-channel analysis, which relax the control or knowledge an adversary must have over the target implementation. In this paper, we provide generic countermeasures to withstand these attacks for a wide range of regular elliptic-curve implementations. We first introduce a framework to formally model a regular algebraic program, which consists of a sequence of algebraic operations indexed by key-dependent values. We then introduce a generic countermeasure to protect these types of programs against advanced single-trace side-channel attacks. Our scheme achieves provable security in the noisy leakage model under a formal assumption on the leakage of randomized variables. To demonstrate the applicability of our solution, we provide concrete examples on several widely deployed scalar multiplication algorithms and report some benchmarks for a protected implementation on a smart card.
Title: High Order Side-Channel Security for Elliptic-Curve Implementations (IACR Trans. Cryptogr. Hardw. Embed. Syst., 2023(1), pp. 238-276)
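A hedged sketch of a "regular" algorithm in the paper's sense, using Montgomery-ladder exponentiation in a multiplicative group as the simplest stand-in for elliptic-curve scalar multiplication: the sequence of operations is fixed, and only the operand indices depend on the key bits.

```python
def ladder_exp(g, k, N):
    """Montgomery-ladder exponentiation: computes g^k mod N with one
    multiply and one square per key bit, in a fixed order. Only the
    *indices* of the operands depend on the key bit -- the kind of
    regular, key-indexed algebraic program the framework models."""
    r = [1, g % N]                         # invariant: r[1] == r[0] * g
    for i in reversed(range(k.bit_length())):
        b = (k >> i) & 1
        r[1 - b] = (r[0] * r[1]) % N       # operand roles swap with the bit
        r[b] = (r[b] * r[b]) % N
    return r[0]

assert ladder_exp(7, 45, 101) == pow(7, 45, 101)
```

Single-trace attacks target exactly the key-dependent indexing in such ladders; the countermeasure above randomizes those indexed accesses while preserving the regular structure.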
Pub Date: 2022-11-28, DOI: 10.3390/cryptography7010005
Michael Ampatzis, T. Andronikos
Suppose that the renowned spymaster Alice controls a network of spies who all happen to be deployed in different geographical locations. Let us further assume that all spies have managed to get their hands on a small, albeit incomplete by itself, secret, which actually is just a part of a bigger secret. In this work, we consider the following problem: given the above situation, is it possible for the spies to securely transmit all these partial secrets to the spymaster so that they can be combined together in order to reveal the big secret to Alice? We call this problem, which, to the best of our knowledge, is a novel one for the relevant literature, the quantum secret aggregation problem. We propose a protocol, in the form of a quantum game, that addresses this problem in complete generality. Our protocol relies on the use of maximally entangled GHZ tuples, shared among Alice and all her spies. It is the power of entanglement that makes possible the secure transmission of the small partial secrets from the agents to the spymaster. As an additional bonus, entanglement guarantees the security of the protocol, by making it statistically improbable for the notorious eavesdropper Eve to steal the big secret.
Title: Quantum Secret Aggregation Utilizing a Network of Agents (Cryptography, vol. 7, article 5)
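The perfect correlation that GHZ tuples provide can be sampled from a toy statevector simulation; this illustrates only the entanglement structure the protocol relies on, not the protocol itself.

```python
import math
import random

def measure_ghz(n_qubits, shots, rng=random.Random(7)):
    """Sample computational-basis measurements of the n-qubit GHZ state
    (|0...0> + |1...1>)/sqrt(2). The statevector has only two nonzero
    amplitudes, so every shot yields either all-zeros or all-ones --
    the shared correlated randomness the aggregation protocol exploits."""
    dim = 2 ** n_qubits
    amps = [0.0] * dim
    amps[0] = amps[dim - 1] = 1 / math.sqrt(2)
    probs = [a * a for a in amps]
    outcomes = rng.choices(range(dim), weights=probs, k=shots)
    return [format(o, f"0{n_qubits}b") for o in outcomes]

shots = measure_ghz(3, 100)
assert all(s in ("000", "111") for s in shots)
```

Because Alice and every spy hold one qubit of each tuple, their measurement bits agree, giving them one-time-pad-like material an eavesdropper without a qubit cannot predict.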
Pub Date: 2022-11-22, DOI: 10.3390/cryptography6040060
Guangwei Zhao, Kaveh Shamsi
Logic locking is a technique that can help hinder reverse-engineering-based attacks in the IC supply chain from untrusted foundries or end-users. In 2015, the Boolean Satisfiability (SAT) attack was introduced. Although the SAT attack is effective in deobfuscating a wide range of logic locking schemes, its execution time varies widely, from a few seconds to months. Previous research has shown that Graph Convolutional Networks (GCN) may be used to estimate this deobfuscation time for locked circuits with varied key sizes. In this paper, we explore whether GCN models truly capture the structural and functional sources of deobfuscation hardness. To tackle this, we generate different curated training datasets: traditional ISCAS benchmark circuits locked with varying key sizes, as well as an important novel class of synthetic benchmarks: Substitution-Permutation Networks (SPN), the circuit structures used to produce the most secure and efficient keyed functions used today: block ciphers. We then test whether a GCN trained on traditional benchmarks can predict the simple fact that a deeper SPN is superior to a wide SPN of the same size. Surprisingly, we find that the GCN model fails at this task. We overcome this limitation by proposing a set of circuit features motivated by block-cipher design principles. These features can be used stand-alone or combined with GCN models to provide deeper topological cues than what GCNs can access.
Title: Reevaluating Graph-Neural-Network-Based Runtime Prediction of SAT-Based Circuit Deobfuscation (Cryptography, vol. 6, article 60)
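A minimal sketch of the kind of stand-alone structural feature argued for above: two SPNs of equal total size can differ sharply in depth, the cue a size-oriented GCN readout can miss. The gate-count and S-box-depth parameters are illustrative assumptions, not measurements from the paper.

```python
def spn_features(n_sboxes_per_round, n_rounds, sbox_gates=20, sbox_depth=4):
    """Hypothetical circuit features motivated by block-cipher design:
    total gate count (size), logic depth (rounds times per-round S-box
    depth), and width (S-boxes per round)."""
    return {
        "size": n_sboxes_per_round * n_rounds * sbox_gates,
        "depth": n_rounds * sbox_depth,
        "width": n_sboxes_per_round,
    }

deep = spn_features(n_sboxes_per_round=4, n_rounds=16)   # deep, narrow SPN
wide = spn_features(n_sboxes_per_round=16, n_rounds=4)   # shallow, wide SPN
assert deep["size"] == wide["size"]      # same total size...
assert deep["depth"] > wide["depth"]     # ...but very different depth
```

Feeding such depth/width ratios alongside the learned graph embedding is one way to give the predictor the topological signal directly.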
Pub Date: 2022-11-11, DOI: 10.3390/cryptography6040059
J. Plusquellic
Physical unclonable functions (PUFs) are gaining traction as an attractive alternative to generating and storing device keying material over traditional secure non-volatile memory (NVM) technologies. In this paper, we propose an engineered delay-based PUF called the shift-register, reconvergent-fanout (SiRF) PUF, and present an analysis of the statistical quality of its bitstrings using data collected from a set of FPGAs subjected to extended industrial temperature-voltage environmental conditions. The SiRF PUF utilizes the Xilinx shift register primitive and an engineered network of logic gates designed to distribute signal paths over a wide region of the FPGA fabric, using a MUXing scheme similar in principle to the shift-rows permutation function within the Advanced Encryption Standard algorithm. The shift register is utilized in a unique fashion to enable individual paths through a Xilinx 5-input LUT to be selected by the challenge as a source of entropy. The engineered logic gate network utilizes reconvergent fanout as a means of adding entropy, eliminating bias, and increasing uncertainty with respect to which paths are actually being timed and used in post-processing to produce the secret key or authentication bitstring. The SiRF PUF is a strong PUF built on top of a network with tens of millions of possible paths.
Title: Shift Register, Reconvergent-Fanout (SiRF) PUF Implementation on an FPGA (Cryptography, vol. 6, article 59)
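A toy model of the race principle underlying delay-based PUFs in general (not the SiRF construction; the delay distribution and challenge format are invented for illustration):

```python
import random

class ToyDelayPUF:
    """Toy delay-based PUF: manufacturing variation gives each path a
    fixed, device-specific delay; a challenge selects a pair of paths
    and the response bit is which path is faster."""
    def __init__(self, n_paths, seed):
        rng = random.Random(seed)    # seed stands in for process variation
        self.delay = [rng.gauss(10.0, 1.0) for _ in range(n_paths)]

    def response(self, challenge):
        a, b = challenge             # challenge indexes the path pair
        return 1 if self.delay[a] < self.delay[b] else 0

chip1 = ToyDelayPUF(64, seed=1)      # two "chips" = two variation draws
chip2 = ToyDelayPUF(64, seed=2)
c = (3, 40)
assert chip1.response(c) == chip1.response(c)   # reproducible per device
```

A strong PUF makes the challenge space (here, path pairs; in SiRF, routes through an engineered network) too large to enumerate, which is why the tens of millions of paths cited above matter.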
Pub Date: 2022-11-11, DOI: 10.3390/cryptography6040058
Mario Ciampi, Diego Romano, G. Schmid
In this work, we elaborate on the concept of process authenticity, which intuitively corresponds to the validity of all process steps and their proper binding. It represents the most exciting forefront of distributed ledger technology research, addressing the primary challenge of reliably connecting distributed ledger networks to the physical context in which they must operate. In more detail, the paper describes a novel methodological approach to ensure the authenticity of business processes through blockchain and several security mechanisms applied to the digital twins of the actual processes. We illustrate difficulties and opportunities arising from implementing process authenticity in concrete case studies in which we were involved as software designers, belonging to three critical application domains: document dematerialization, e-voting, and healthcare.
Title: Process Authentication through Blockchain: Three Case Studies (Cryptography, vol. 6, article 58)
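The "proper binding" of process steps can be sketched as a hash chain: each digest covers both the step's payload and the previous digest, so reordering, dropping, or editing any step breaks every later digest. Function and field names are hypothetical, and anchoring the final digest on a blockchain is assumed but not shown.

```python
import hashlib
import json

def bind_step(prev_digest, step_data):
    """Bind one process step to its predecessor by hashing the step
    payload together with the previous digest."""
    payload = json.dumps(step_data, sort_keys=True).encode()
    return hashlib.sha256(prev_digest + payload).hexdigest().encode()

digest = b""
for step in [{"step": "scan"}, {"step": "sign"}, {"step": "archive"}]:
    digest = bind_step(digest, step)

tampered = b""
for step in [{"step": "scan"}, {"step": "SIGN?"}, {"step": "archive"}]:
    tampered = bind_step(tampered, step)
assert digest != tampered   # any edited step changes the final digest
```

Publishing only the final digest on-chain then commits to the entire step sequence of the digital twin without exposing the steps themselves.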
Pub Date: 2022-11-10, DOI: 10.3390/cryptography6040057
A. Kudzin, Kentaroh Toyoda, S. Takayama, A. Ishigame
(1) Background: To solve the blockchain scaling issue, sharding has been proposed; however, this approach has its own scaling issue: the cross-shard communication method. To resolve the cross-shard communication scaling issue, rollups have been proposed and are being investigated. However, they also have their own scaling limitations, in particular the degree of compression they can apply to transactions (TXs), which affects how many TXs can be included in one block. (2) Methods: In this paper, we propose a series of novel data structures for compiling cross-shard TXs sent using rollups, for both public and private Ethereum. Our proposal removes redundant fields, consolidates repeated fields, and compresses any remaining fields in the rollup, modifying its data structure to compress the address, gas, and value fields. (3) Results: We have shown that our proposals can accommodate more cross-shard TXs in a block by reducing the TX size by up to 65% and 97.6% compared to the state-of-the-art in public and private Ethereum, respectively. This compression in TX size results in an over 2× increase in transactions per block (TPB) for our proposals targeting both types of Ethereum. (4) Conclusions: Our proposals will mitigate the scaling issue in a sharded blockchain that utilizes rollups for cross-shard communication. In particular, it will enable such sharded Ethereum networks to be deployed for large-scale decentralized systems.
{"title":"Scaling Ethereum 2.0s Cross-Shard Transactions with Refined Data Structures","authors":"A. Kudzin, Kentaroh Toyoda, S. Takayama, A. Ishigame","doi":"10.3390/cryptography6040057","DOIUrl":"https://doi.org/10.3390/cryptography6040057","url":null,"abstract":"(1) Background: To solve the blockchain scaling issue, sharding has been proposed; however, this approach has its own scaling issue: the cross-shard communication method. To resolve the cross-shard communication scaling issue, rollups have been proposed and are being investigated. However, they also have their own scaling limitations, in particular, the degree of compression they can apply to transactions (TXs) affecting how many TXs can be included in one block. (2) Methods: In this paper, we propose a series of novel data structures for the compiling of cross-shard TXs sent using rollups for both public and private Ethereum. Our proposal removes redundant fields, consolidates repeated fields, and compresses any remaining fields in the rollup, modifying its data structure to compress the address, gas, and value fields. (3) Results: We have shown that our proposals can accommodate more cross-shard TXs in a block by reducing the TX size by up to 65% and 97.6% compared to the state-of-the-art in public and private Ethereum, respectively. This compression in TX size results in an over 2× increase in transactions per block (TPB) for our proposals targeting both types of Ethereum. (4) Conclusions: Our proposals will mitigate the scaling issue in a sharded blockchain that utilizes rollups for cross-shard communication. In particular, it will enable such sharded Ethereum networks to be deployed for large-scale decentralized systems.","PeriodicalId":13186,"journal":{"name":"IACR Trans. Cryptogr. Hardw. Embed. 
Syst.","volume":"197 1","pages":"57"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79959042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
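The field-level compression described in the abstract can be illustrated with a small toy encoder. This is a sketch only: the fixed-width baseline layout, the varint choice, and the address-table format below are illustrative assumptions, not the paper's actual rollup data structures. It shows the two main ideas the abstract names — consolidating repeated fields (here, a deduplicated address table) and compressing remaining fields (here, variable-length integers for gas and value):

```python
import struct

def varint(n: int) -> bytes:
    """Minimal LEB128-style variable-length integer encoding."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def naive_encode(txs):
    # Fixed-width baseline: 20-byte address + 8-byte gas + 32-byte value per TX.
    return b"".join(
        to + struct.pack(">Q", gas) + value.to_bytes(32, "big")
        for to, gas, value in txs
    )

def compressed_encode(txs):
    # Consolidate repeated addresses: each distinct address is stored once
    # in a header table, and every TX refers to it by a varint index.
    table = {}
    body = bytearray()
    for to, gas, value in txs:
        idx = table.setdefault(to, len(table))
        body += varint(idx) + varint(gas) + varint(value)
    header = varint(len(table)) + b"".join(table)  # dict preserves insertion order
    return bytes(header) + bytes(body)

# 8 TXs to the same address: heavy field repetition, as in a rollup batch.
txs = [(b"\xaa" * 20, 21_000, 10**15)] * 8
print(len(naive_encode(txs)), "->", len(compressed_encode(txs)), "bytes")
```

On this (deliberately repetitive) batch the encoded size drops from 480 to 117 bytes; the real savings in the paper depend on the actual Ethereum TX layout and batch composition.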
Pub Date : 2022-11-09 DOI: 10.3390/cryptography6040055
C. Adams
The promise of identity-based systems is that they maintain the functionality of public key cryptography while eliminating the need for public key certificates. The first efficient identity-based encryption (IBE) scheme was proposed by Boneh and Franklin in 2001; variations have been proposed by many researchers since then. However, a common drawback is the requirement for a private key generator (PKG) that uses its own master private key to compute private keys for end users. Thus, the PKG can potentially decrypt all ciphertext in the environment (regardless of who the intended recipient is), which can have undesirable privacy implications. This has led to limited adoption and deployment of IBE technology. There have been numerous proposals to address this situation (which are often characterized as methods to reduce trust in the PKG). These typically involve threshold mechanisms or separation-of-duty architectures, but unfortunately often rely on non-collusion assumptions that cannot be guaranteed in real-world settings. This paper proposes a separation architecture that instantiates several intermediate CAs (ICAs), rather than one (as in previous work). We employ digital credentials (containing a specially-designed attribute based on bilinear maps) as the blind tokens issued by the ICAs, which allows a user to easily obtain multiple layers of pseudonymization prior to interacting with the PKG. As a result, our proposed architecture does not rely on unrealistic non-collusion assumptions and allows a user to reduce the probability of a privacy breach to an arbitrarily small value.
Improving User Privacy in Identity-Based Encryption Environments
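The closing claim — that a user can drive the probability of a privacy breach arbitrarily low by layering pseudonyms from several ICAs — follows from a simple independence argument, sketched below. The model and parameter values are illustrative assumptions for intuition, not the paper's formal analysis: if tracing the user requires every ICA on the chain to collude with the PKG, and each of the k ICAs colludes independently with probability p, the breach probability is p^k:

```python
def breach_probability(p: float, k: int) -> float:
    """P(breach) when all k independently colluding ICAs must be compromised."""
    return p ** k

# Each added pseudonymization layer shrinks the breach probability geometrically.
for k in (1, 2, 4, 8):
    print(f"k={k}: {breach_probability(0.1, k):.0e}")
```

Under this toy model, even a pessimistic per-ICA collusion probability of 0.1 yields a breach probability of 10^-8 with eight layers; the user chooses k to meet any desired bound.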