Association attacks aim to manipulate WiFi clients into associating with a malicious access point by exploiting protocol vulnerabilities and usability features implemented in the network managers of modern operating systems. In this paper we classify association attacks based on the network manager features that each attack exploits. To assess whether these attacks remain effective, we implement and test all known association attacks against the network managers of popular operating systems using our Wifiphisher tool. We analyze various strategies that an adversary may employ to increase the success rate of association attacks. Furthermore, we examine the behavior of association attacks against upcoming security protocols and certifications for IEEE 802.11, such as WPA3, Wi-Fi Enhanced Open and Easy Connect. Our results show that even though network managers have hampered the effectiveness of some known attacks (e.g. KARMA), other techniques (e.g. Known Beacons) are still active threats. More importantly, our results show that even the newer security protocols leave room for association attacks. Finally, we describe novel detection and prevention techniques for association attacks, as well as security controls based on user awareness.
{"title":"Exploiting WiFi usability features for association attacks in IEEE 802.11: Attack analysis and mitigation controls","authors":"George Chatzisofroniou, P. Kotzanikolaou","doi":"10.3233/jcs-210036","DOIUrl":"https://doi.org/10.3233/jcs-210036","url":null,"abstract":"Association attacks aim to manipulate WiFi clients into associating with a malicious access point, by exploiting protocol vulnerabilities and usability features implemented on the network managers of modern operating systems. In this paper we classify association attacks based on the network manager features that each attack exploits. To validate their current validity status, we implement and test all known association attacks against the network managers of popular operating systems, by using our Wifiphisher tool. We analyze various strategies that may be implemented by an adversary in order to increase the success rate of association attacks. Furthermore, we examine the behavior of association attacks against upcoming security protocols and certifications for IEEE 802.11, such as WPA3, Wi-Fi Enhanced Open and Easy Connect. Our results show that even though the network managers have hampered the effectiveness of some known attacks (e.g. KARMA), other techniques (e.g. Known Beacons) are still active threats. More importantly, our results show that even the newer security protocols leave room for association attacks. Finally, we describe novel detection and prevention techniques for association attacks, as well as security controls based on user awareness.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129340462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal analysis of security is often focused on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the relevant security properties. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users, and do not really defend the system against possible attacks. Here, we propose a graded notion of security based on the complexity of the user’s strategic behavior. More precisely, we suggest that the level to which a security property φ is satisfied can be defined in terms of: (a) the complexity of the strategy that the user needs to execute to make φ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain φ, the higher the degree of security. We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter voting protocol for coercion-resistant and voter-verifiable elections. Then, we identify “natural” strategies for the voter to obtain voter-verifiability, and measure the effort that they require from the voter. We also consider the dual view of graded security, measured by the complexity of the attacker’s strategy to compromise the relevant properties of the election.
{"title":"How to measure usable security: Natural strategies in voting protocols","authors":"W. Jamroga, Damian Kurpiewski, Vadim Malvone","doi":"10.3233/jcs-210049","DOIUrl":"https://doi.org/10.3233/jcs-210049","url":null,"abstract":"Formal analysis of security is often focused on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the relevant security properties. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users, and do not really defend the system against possible attacks. Here, we propose a graded notion of security based on the complexity of the user’s strategic behavior. More precisely, we suggest that the level to which a security property φ is satisfied can be defined in terms of: (a) the complexity of the strategy that the user needs to execute to make φ true, and (b) the resources that the user must employ on the way. The simpler and cheaper to obtain φ, the higher the degree of security. We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter voting protocol for coercion-resistant and voter-verifiable elections. Then, we identify “natural” strategies for the voter to obtain voter-verifiability, and measure the voter’s effort that they require. We also consider the dual view of graded security, measured by the complexity of the attacker’s strategy to compromise the relevant properties of the election.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122915982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Socio-Technical Systems (STSs) combine the operations of technical systems with the choices and intervention of humans, namely the users of the technical systems. Designing such systems is far from trivial due to the interaction of heterogeneous components, including hardware components and software applications, physical elements such as tickets, user interfaces such as touchscreens and displays, and, notably, humans. While the possible security issues of the technical components are well known and continuously investigated, the focus of this article is on the various levels of threat that human actors may pose, that is, the focus is on security ceremonies. The approach is to formally model human threats systematically and to formally verify whether they can break the security properties of a few running examples: two currently deployed Deposit-Return Systems (DRSs) and a variant that we designed to strengthen them. The two real-world DRSs are found to support the security properties differently, and some relevant properties fail, yet our variant is verified to meet all the properties. Our human threat model is distributed and interacting: it formalises all humans as potentially threatening users because, in addition to being honest, that is, following the prescribed rules of interaction with the technical system, they can execute rules that encode specific threats; additionally, humans may exchange information or objects directly, and hence may in practice favour each other, although no specific form of collusion is prescribed. We start by introducing four different human threat models, and some security properties are found to succumb to the strongest model, the combination of the four. The question then arises as to which meaningful combinations of the four would not break the properties. This leads to the definition of a lattice of human threat models and to a general methodology to traverse it by verifying each node against the properties. The methodology is executed on our running example for the sake of demonstration. Our approach is thus modular and extensible to include additional threats, potentially even borrowed from existing works, and, consequently, allows the corresponding lattice to grow. STSs can easily become very complex, hence we deem modularity and extensibility of the human threat model to be key factors. The current computer-assisted tool support is put to the test but proves to be sufficient.
{"title":"Modelling human threats in security ceremonies","authors":"G. Bella, Rosario Giustolisi, C. Schürmann","doi":"10.3233/jcs-210059","DOIUrl":"https://doi.org/10.3233/jcs-210059","url":null,"abstract":"Socio-Technical Systems (STSs) combine the operations of technical systems with the choices and intervention of humans, namely the users of the technical systems. Designing such systems is far from trivial due to the interaction of heterogeneous components, including hardware components and software applications, physical elements such as tickets, user interfaces, such as touchscreens and displays, and notably, humans. While the possible security issues about the technical components are well known yet continuously investigated, the focus of this article is on the various levels of threat that human actors may pose, namely, the focus is on security ceremonies. The approach is to formally model human threats systematically and to formally verify whether they can break the security properties of a few running examples: two currently deployed Deposit-Return Systems (DRSs) and a variant that we designed to strengthen them. The two real-world DRSs are found to support security properties differently, and some relevant properties fail, yet our variant is verified to meet all the properties. Our human threat model is distributed and interacting: it formalises all humans as potential threatening users because they can execute rules that encode specific threats in addition to being honest, that is, to follow the prescribed rules of interaction with the technical system; additionally, humans may exchange information or objects directly, hence practically favour each other although no specific form of collusion is prescribed. We start by introducing four different human threat models, and some security properties are found to succumb against the strongest model, the addition of the four. The question then arises on what meaningful combinations of the four would not break the properties. This leads to the definition of a lattice of human threat models and to a general methodology to traverse it by verifying each node against the properties. The methodology is executed on our running example for the sake of demonstration. Our approach thus is modular and extensible to include additional threats, potentially even borrowed from existing works, and, consequently, to the growth of the corresponding lattice. STSs can easily become very complex, hence we deem modularity and extensibility of the human threat model as key factors. The current computer-assisted tool support is put to test but proves to be sufficient.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116881560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-interactive zero-knowledge proof or argument (NIZK) systems are widely used in many security-sensitive applications to enhance computation integrity, privacy and scalability. In such systems, a prover wants to convince one or more verifiers that the result of a public function is correctly computed without revealing the (potentially) private input, such as the witness. In this work, we introduce a new notion, called scriptable SNARK, where the prover and verifier(s) can specify the function (or language instance) to be proven via a script. We formalize this notion in the UC framework and provide a generic trusted-hardware-based solution. We then instantiate our solution in both SGX and TrustZone with the Lua script engine. The system can be easily used by typical programmers without any cryptographic background. The benchmark results show that our solution outperforms all known SNARK proof systems with respect to the prover’s running time (1000 times faster), the verifier’s running time, and the proof size. In addition, we also give a lightweight scriptable SNARK protocol for hardware with limited state, e.g., Θ(λ) bits. Finally, we show how the proposed scriptable SNARK can be readily deployed to solve many well-known problems in the blockchain context, e.g. the verifier’s dilemma, fast joining for new players, etc.
{"title":"Scriptable and composable SNARKs in the trusted hardware model","authors":"Zhelei Zhou, Bingsheng Zhang, Yuan Chen, Jiaqi Li, Yajin Zhou, Yibiao Lu, K. Ren, Phuc Thai, Hong-Sheng Zhou","doi":"10.3233/jcs-210167","DOIUrl":"https://doi.org/10.3233/jcs-210167","url":null,"abstract":"Non-interactive zero-knowledge proof or argument (NIZK) systems are widely used in many security sensitive applications to enhance computation integrity, privacy and scalability. In such systems, a prover wants to convince one or more verifiers that the result of a public function is correctly computed without revealing the (potential) private input, such as the witness. In this work, we introduce a new notion, called scriptable SNARK, where the prover and verifier(s) can specify the function (or language instance) to be proven via a script. We formalize this notion in UC framework and provide a generic trusted hardware based solution. We then instantiate our solution in both SGX and Trustzone with Lua script engine. The system can be easily used by typical programmers without any cryptographic background. The benchmark result shows that our solution is better than all the known SNARK proof systems w.r.t. prover’s running time (1000 times faster), verifier’s running time, and the proof size. In addition, we also give a lightweight scriptable SNARK protocol for hardware with limited state, e.g., Θ ( λ ) bits. Finally, we show how the proposed scriptable SNARK can be readily deployed to solve many well-known problems in the blockchain context, e.g. verifier’s dilemma, fast joining for new players, etc.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133838443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radu Ciucanu, P. Lafourcade, Marius Lombard-Platet, Marta Soare
We consider the problem of cumulative reward maximization in multi-armed bandits. We address the security concerns that occur when data and computations are outsourced to an honest-but-curious cloud, i.e., one that executes tasks dutifully but tries to gain as much information as possible. We consider situations where the data used in bandit algorithms is sensitive and has to be protected, e.g., commercial or personal data. We rely on cryptographic schemes and propose UCB-MS, a secure multi-party protocol based on the UCB algorithm. We prove that UCB-MS computes the same cumulative reward as UCB while satisfying desirable security properties. In particular, cloud nodes cannot learn the cumulative reward or the sum of rewards for more than one arm. Moreover, by analyzing messages exchanged among cloud nodes, an external observer cannot learn the cumulative reward or the sum of rewards produced by any arm. We show that the overhead due to cryptographic primitives is linear in the size of the input. Our implementation confirms the linear-time behavior and the practical feasibility of our protocol, on both synthetic and real-world data.
{"title":"Secure protocols for cumulative reward maximization in stochastic multi-armed bandits","authors":"Radu Ciucanu, P. Lafourcade, Marius Lombard-Platet, Marta Soare","doi":"10.3233/jcs-210051","DOIUrl":"https://doi.org/10.3233/jcs-210051","url":null,"abstract":"We consider the problem of cumulative reward maximization in multi-armed bandits. We address the security concerns that occur when data and computations are outsourced to an honest-but-curious cloud i.e., that executes tasks dutifully, but tries to gain as much information as possible. We consider situations where data used in bandit algorithms is sensitive and has to be protected e.g., commercial or personal data. We rely on cryptographic schemes and propose UCB - MS, a secure multi-party protocol based on the UCB algorithm. We prove that UCB - MS computes the same cumulative reward as UCB while satisfying desirable security properties. In particular, cloud nodes cannot learn the cumulative reward or the sum of rewards for more than one arm. Moreover, by analyzing messages exchanged among cloud nodes, an external observer cannot learn the cumulative reward or the sum of rewards produced by some arm. We show that the overhead due to cryptographic primitives is linear in the size of the input. Our implementation confirms the linear-time behavior and the practical feasibility of our protocol, on both synthetic and real-world data.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123277883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ryan Sheatsley, Nicolas Papernot, Mike Weisman, Gunjan Verma, P. Mcdaniel
Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition. In many threat models, the adversary exploits the unconstrained nature of images: the adversary is free to select an arbitrary set of pixels to perturb. However, it is not clear how these attacks translate to domains such as network intrusion detection, which contain domain constraints that limit which features can be modified by the adversary and how. In this paper, we explore whether the constrained nature of networks offers additional robustness against adversarial examples compared to the unconstrained nature of images. We do this by creating two algorithms: (1) the Adaptive-JSMA, an augmented version of the popular JSMA that obeys domain constraints, and (2) Histogram Sketch Generation, which generates adversarial sketches: targeted universal perturbation vectors that encode feature saliency within the envelope of domain constraints. To assess how these algorithms perform, we evaluate them in a constrained network intrusion detection setting and an unconstrained image recognition setting. The results show that our approaches generate misclassification rates in network intrusion detection applications that are comparable to those of image recognition applications (greater than 95%). Our investigation shows that the constrained attack surface exposed by network intrusion detection systems is still sufficiently large to craft successful adversarial examples, and thus network constraints do not appear to add robustness against adversarial examples. Indeed, even if a defender constrains the adversary to as few as five random features, generating adversarial examples is still possible.
{"title":"Adversarial examples for network intrusion detection systems","authors":"Ryan Sheatsley, Nicolas Papernot, Mike Weisman, Gunjan Verma, P. Mcdaniel","doi":"10.3233/jcs-210094","DOIUrl":"https://doi.org/10.3233/jcs-210094","url":null,"abstract":"Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition. In many threat models, the adversary exploits the unconstrained nature of images–the adversary is free to select some arbitrary amount of pixels to perturb. However, it is not clear how these attacks translate to domains such as network intrusion detection as they contain domain constraints, which limit which and how features can be modified by the adversary. In this paper, we explore whether the constrained nature of networks offers additional robustness against adversarial examples versus the unconstrained nature of images. We do this by creating two algorithms: (1) the Adapative-JSMA, an augmented version of the popular JSMA which obeys domain constraints, and (2) the Histogram Sketch Generation which generates adversarial sketches: targeted universal perturbation vectors that encode feature saliency within the envelope of domain constraints. To assess how these algorithms perform, we evaluate them in a constrained network intrusion detection setting and an unconstrained image recognition setting. The results show that our approaches generate misclassification rates in network intrusion detection applications that were comparable to those of image recognition applications (greater than 95%). Our investigation shows that the constrained attack surface exposed by network intrusion detection systems is still sufficiently large to craft successful adversarial examples – and thus, network constraints do not appear to add robustness against adversarial examples. Indeed, even if a defender constrains an adversary to as little as five random features, generating adversarial examples is still possible.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114688173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a solution to data ownership in the surveillance age in the form of an ethically sustainable framework for managing personal and person-derived data. This framework is based on the concept of Datenherrschaft, the mastery that all natural persons should have over data they themselves produce or data derived thereof. We give numerous examples, tie cases to robust ethical analysis, and also discuss technological dimensions.
{"title":"Personal data protection in the age of mass surveillance","authors":"Antti Hakkala, J. Koskinen","doi":"10.3233/jcs-200033","DOIUrl":"https://doi.org/10.3233/jcs-200033","url":null,"abstract":"We present a solution to data ownership in the surveillance age in the form of an ethically sustainable framework for managing personal and person-derived data. This framework is based on the concept of Datenherrschaft – mastery over data that all natural persons should have on data they themselves produce or is derived thereof. We give numerous examples and tie cases to robust ethical analysis, and also discuss technological dimensions.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"28 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123583394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This special issue includes several papers that have been selected from the program of the 12th Conference on Security and Cryptography for Networks. The conference, originally planned in Amalfi (SA), Italy, was held online on Sept. 14–16, 2020, due to Covid-19. The papers appearing in the present issue have been extended from their original conference versions, and have gone through a second rigorous reviewing process. We briefly review the papers included in this issue: Efficient Protocols for Oblivious Linear Function Evaluation from Ring-LWE by Carsten Baum, Daniel Escudero, Alberto Pedrouzo-Ulloa, Peter Scholl and Juan Ramón Troncoso-Pastoriza constructs Oblivious Linear Function Evaluation (OLE) protocols from the Ring-LWE problem. OLE has recently been shown to be very useful in practical multiparty computation, and this work proposes lattice-based OLE protocols and analyzes their standalone efficiency. In Double-Authentication-Preventing Signatures in the Standard Model, Dario Catalano, Georg Fuchsbauer and Azam Soleimanian present efficient DAPS schemes that are secure in the standard model and support large address spaces. DAPS is a special type of signature meant to punish the signer if it signs two messages with the same “address.” For example, this may be desired if the signer issues two different certificates for the same domain. The paper Private Identity Agreement for Private Set Functionalities by Benjamin Terner, Benjamin Kreuter and Sarvar Patel explores an interesting twist on private set intersection. If we want to compute a function of the intersection of our data, we need to first “align” our data so that we hold identical identifiers for any records that match. The situation is even more complicated when identifiers are “fuzzy” as in real-world data. In those cases, one party may hold several records corresponding to the same person, but be unaware of this fact. Only when combined with another data set will this fact be evident (if the other data set contains a record that connects with both). This paper proposes a method for two parties to privately assign identifiers to records in this kind of scenario. The main challenge here is the transitive nature of whether two records match. In Fast Threshold ECDSA with Honest Majority, Ivan Damgård, Thomas P. Jakobsen, Jesper Buus Nielsen, Jakob Illeborg Pagter and Michael Bæksvang Østergaard propose a new faster threshold variant of the ECDSA signature scheme.
{"title":"Special issue: Security and Cryptography for Networks - SCN 2020","authors":"Clemente Galdi, V. Kolesnikov","doi":"10.3233/jcs-219000","DOIUrl":"https://doi.org/10.3233/jcs-219000","url":null,"abstract":"This special issue includes several papers that have been selected from the program of the 12th Conference on Security and Cryptography for Networks. The conference, originally planned in Amalfi (SA), Italy, was held online on Sept. 14–16, 2020, due to Covid-19. The papers appearing in the present issue have been extended from their original conference versions, and have gone through a second rigorous reviewing process. We briefly review the papers included in this issue: Efficient Protocols for Oblivious Linear Function Evaluation from Ring-LWE by Carsten Baum, Daniel Escudero, Alberto Pedrouzo-Ulloa, Peter Scholl and Juan Ramón Troncoso-Pastoriza constructs Oblivious Linear Function Evaluation (OLE) protocols from the Ring-LWE problem. OLE has recently been shown to be very useful in practical multiparty computation, and this work proposes lattice-based OLE protocols and analyzes their standalone efficiency. In Double-Authentication-Preventing Signatures in the Standard Model, Dario Catalano, Georg Fuchsbauer and Azam Soleimanian present efficient DAPS schemes that are secure in the standard model and support large address spaces. DAPS is a special type of signature meant to punish the signer if it signs two messages with the same “address.” For example, this may be desired if the signer issues two different certificates for the same domain. The paper Private Identity Agreement for Private Set Functionalities by Benjamin Terner, Benjamin Kreuter and Sarvar Patel explores an interesting twist on private set intersection. If we want to compute a function of the intersection of our data, we need to first “align” our data so that we hold identical identifiers for any records that match. The situation is even more complicated when identifiers are “fuzzy” as in real-world data. In those cases, one party may hold several records corresponding to the same person, but be unaware of this fact. Only when combined with another data set will this fact be evident (if the other data set contains a record that connects with both). This paper proposes a method for two parties to privately assign identifiers to records in this kind of scenario. The main challenge here is the transitive nature of whether two records match. In Fast Threshold ECDSA with Honest Majority, Ivan Damgård, Thomas P. Jakobsen, Jesper Buus Nielsen, Jakob Illeborg Pagter and Michael Bæksvang Østergaard propose a new faster threshold variant of the ECDSA signature scheme.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133214006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, online services such as e-commerce and streaming platforms provide a personalized user experience through recommender systems. Recommender systems are built upon a vast amount of data about users/items acquired by the services. Such knowledge represents an invaluable resource. However, part of this knowledge is commonly public and can be easily accessed via the Internet. Unfortunately, that same knowledge can be leveraged by competitors or malicious users. The literature offers a large number of works concerning attacks on recommender systems, but most of them assume that the attacker can easily access the full rating matrix. In practice, this is never the case. The only way to access the rating matrix is by gathering the ratings (e.g., reviews) by crawling the service’s website. Crawling a website has a cost in terms of time and resources. What is more, the targeted website can employ defensive measures to detect automatic scraping. In this paper, we assess the impact of a series of attacks on recommender systems. Our analysis aims to set up the most realistic scenarios, considering both the possibilities and the limitations of a potential attacker. In particular, we assess the impact of different crawling approaches when attacking a recommendation service. From the collected information, we mount various profile injection attacks. We measure the value of the collected knowledge through the identification of the most similar user/item. Our empirical results show that while crawling can indeed bring knowledge to the attacker (up to 65% of neighborhood reconstruction on a mid-size dataset and up to 90% on a small-size dataset), this will not be enough to mount a successful shilling attack in practice.
{"title":"On the feasibility of crawling-based attacks against recommender systems","authors":"F. Aiolli, M. Conti, S. Picek, Mirko Polato","doi":"10.3233/jcs-210041","DOIUrl":"https://doi.org/10.3233/jcs-210041","url":null,"abstract":"Nowadays, online services, like e-commerce or streaming services, provide a personalized user experience through recommender systems. Recommender systems are built upon a vast amount of data about users/items acquired by the services. Such knowledge represents an invaluable resource. However, commonly, part of this knowledge is public and can be easily accessed via the Internet. Unfortunately, that same knowledge can be leveraged by competitors or malicious users. The literature offers a large number of works concerning attacks on recommender systems, but most of them assume that the attacker can easily access the full rating matrix. In practice, this is never the case. The only way to access the rating matrix is by gathering the ratings (e.g., reviews) by crawling the service’s website. Crawling a website has a cost in terms of time and resources. What is more, the targeted website can employ defensive measures to detect automatic scraping. In this paper, we assess the impact of a series of attacks on recommender systems. Our analysis aims to set up the most realistic scenarios considering both the possibilities and the potential attacker’s limitations. In particular, we assess the impact of different crawling approaches when attacking a recommendation service. From the collected information, we mount various profile injection attacks. We measure the value of the collected knowledge through the identification of the most similar user/item. Our empirical results show that while crawling can indeed bring knowledge to the attacker (up to 65% of neighborhood reconstruction on a mid-size dataset and up to 90% on a small-size dataset), this will not be enough to mount a successful shilling attack in practice.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115793728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}