Approximate Computing
Christian Plessl, M. Platzner, P. Schreier
Pub Date: 2022-05-05 | DOI: 10.1007/978-3-030-98347-5
{"title":"Approximate Computing","authors":"Christian Plessl, M. Platzner, P. Schreier","doi":"10.1007/978-3-030-98347-5","DOIUrl":"https://doi.org/10.1007/978-3-030-98347-5","url":null,"abstract":"","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42591358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and error analysis of accuracy-configurable sequential multipliers via segmented carry chains
Jorge Echavarria, S. Wildermann, Oliver Keszocze, Faramarz Khosravi, A. Becher, Jürgen Teich
Pub Date: 2022-04-06 | DOI: 10.1515/itit-2021-0040
Abstract We present the design and a closed-form error analysis of accuracy-configurable multipliers via segmented carry chains. We model the approximate partial-product accumulations as a sequential process. Depending on a given splitting point of the carry chains, the technique discussed herein allows varying the quality of the accumulations and, consequently, of the overall product. Because the resulting critical paths are shorter, such approximate multipliers can trade off accuracy for increased performance whilst exploiting the inherent area savings of sequential over combinational approaches. We implemented multiple architectures targeting FPGAs and ASICs with different bit-widths and accuracy configurations to 1) estimate resources, power consumption, and delay, and 2) evaluate error metrics whose exact computation belongs to the so-called #P-complete class.
{"title":"Design and error analysis of accuracy-configurable sequential multipliers via segmented carry chains","authors":"Jorge Echavarria, S. Wildermann, Oliver Keszocze, Faramarz Khosravi, A. Becher, Jürgen Teich","doi":"10.1515/itit-2021-0040","DOIUrl":"https://doi.org/10.1515/itit-2021-0040","url":null,"abstract":"Abstract We present the design and a closed-form error analysis of accuracy-configurable multipliers via segmented carry chains. To address this problem, we model the approximate partial-product accumulations as a sequential process. According to a given splitting point of the carry chains, the technique herein discussed allows varying the quality of the accumulations and, consequently, the overall product. Due to these shorter critical paths, such kinds of approximate multipliers can trade-off accuracy for an increased performance whilst exploiting the inherent area savings of sequential over combinatorial approaches. We implemented multiple architectures targeting FPGAs and ASICs with different bit-widths and accuracy configurations to 1) estimate resources, power consumption, and delay, as well as to 2) evaluate those error metrics that belong to the so-called #P-complete class.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42424060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey of approximate arithmetic circuits and blocks
Ke Chen, Peipei Yin, Weiqiang Liu, Fabrizio Lombardi
Pub Date: 2022-04-05 | DOI: 10.1515/itit-2021-0055
Abstract Approximate computing has become an emerging research topic for the energy-efficient design of circuits and systems. Many approximate arithmetic circuits have been proposed, so it is important to summarize the available approximation techniques that improve performance and energy efficiency at an acceptable loss of accuracy. This paper presents a detailed review of circuit-level approximation techniques for the arithmetic data path, focusing on critical techniques that apply to computational units and blocks. Approximate adders, multipliers, dividers, and squarers are introduced and classified according to their approximation methods. The FFT and MAC are discussed as computational blocks that employ approximate algorithms in their implementation.
{"title":"A survey of approximate arithmetic circuits and blocks","authors":"Ke Chen, Peipei Yin, Weiqiang Liu, Fabrizio Lombardi","doi":"10.1515/itit-2021-0055","DOIUrl":"https://doi.org/10.1515/itit-2021-0055","url":null,"abstract":"Abstract Approximate computing has become an emerging research topic for energy-efficient design of circuits and systems. Many approximate arithmetic circuits have been proposed, therefore it is critical to summarize the available approximation techniques to improve performance and energy efficiency at a acceptable accuracy loss. This paper presents an overview of circuit-level techniques used for approximate arithmetic. This paper provides a detailed review of circuit-level approximation techniques for the arithmetic data path. Its focus is on identifying critical circuit-level approximation techniques that apply to computational units and blocks. Approximate adders, multipliers, dividers, and squarer are introduced and classified according to their approximation methods. FFT and MAC are discussed as computational blocks that employ an approximate algorithm for implementation.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47588057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards practical privacy-preserving protocols
Daniel Demmler
Pub Date: 2022-04-01 | DOI: 10.1515/itit-2022-0005
Abstract Protecting users’ privacy in digital systems becomes more complex and challenging over time, as the amount of stored and exchanged data grows steadily and systems become increasingly intricate and interconnected. Two techniques that approach this issue are the privacy-preserving protocols secure multi-party computation (MPC) and private information retrieval (PIR), which aim to enable practical computation while simultaneously keeping sensitive data private. In the dissertation [Daniel Demmler. “Towards Practical Privacy-Preserving Protocols”. Diss. Darmstadt: Technische Universität, 2018. url: http://tuprints.ulb.tu-darmstadt.de/8605/], summarized in this article, we present results showing how real-world applications can be executed in a privacy-preserving way. This is not only desired by the users of such applications; since 2018 it also rests on a strong legal foundation in the European Union’s GDPR, which mandates privacy protection of user data by design.
{"title":"Towards practical privacy-preserving protocols","authors":"Daniel Demmler","doi":"10.1515/itit-2022-0005","DOIUrl":"https://doi.org/10.1515/itit-2022-0005","url":null,"abstract":"Abstract Protecting users’ privacy in digital systems becomes more complex and challenging over time, as the amount of stored and exchanged data grows steadily and systems become increasingly involved and connected. Two techniques that try to approach this issue are the privacy-preserving protocols secure multi-party computation (MPC) and private information retrieval (PIR), which aim to enable practical computation while simultaneously keeping sensitive data private. In the dissertation [Daniel Demmler. “Towards Practical Privacy-Preserving Protocols”. Diss. Darmstadt: Technische Universität, 2018. url: http://tuprints.ulb.tu-darmstadt.de/8605/], summarized in this article, we present results showing how real-world applications can be executed in a privacy-preserving way. This is not only desired by users of such applications, but since 2018 also based on a strong legal foundation with the GDPR in the European Union, that enforces privacy protection of user data by design.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45935385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guest editorial: Information security methodology and replication studies
S. Wendzel, L. Caviglione, A. Mileva, Jean-François Lalande, W. Mazurczyk
Pub Date: 2022-04-01 | DOI: 10.1515/itit-2022-0016
Abstract This special issue presents five articles that address the topic of replicability and scientific methodology in information security research, featuring two extended articles from the 2021 International Workshop on Information Security Methodology and Replication Studies (IWSMR); it also comprises two distinguished dissertations.
{"title":"Guest editorial: Information security methodology and replication studies","authors":"S. Wendzel, L. Caviglione, A. Mileva, Jean-François Lalande, W. Mazurczyk","doi":"10.1515/itit-2022-0016","DOIUrl":"https://doi.org/10.1515/itit-2022-0016","url":null,"abstract":"Abstract This special issue presents five articles that address the topic of replicability and scientific methodology in information security research, featuring two extended articles from the 2021 International Workshop on Information Security Methodology and Replication Studies (IWSMR). This special issue also comprises two distinguished dissertations.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47210367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring syntactical features for anomaly detection in application logs
R. Copstein, Egil Karlsen, Jeff Schwartzentruber, N. Zincir-Heywood, M. Heywood
Pub Date: 2022-03-23 | DOI: 10.1515/itit-2021-0064
Abstract In this research, we analyze the effect of lightweight syntactical feature extraction techniques from the field of information retrieval on log abstraction in information security. To this end, we evaluate three feature extraction techniques and three clustering algorithms on four different security datasets for anomaly detection. The results demonstrate that extracting syntactic features has a role to play in log abstraction: it improves the identification of anomalous minority classes, particularly in homogeneous security datasets.
{"title":"Exploring syntactical features for anomaly detection in application logs","authors":"R. Copstein, Egil Karlsen, Jeff Schwartzentruber, N. Zincir-Heywood, M. Heywood","doi":"10.1515/itit-2021-0064","DOIUrl":"https://doi.org/10.1515/itit-2021-0064","url":null,"abstract":"Abstract In this research, we analyze the effect of lightweight syntactical feature extraction techniques from the field of information retrieval for log abstraction in information security. To this end, we evaluate three feature extraction techniques and three clustering algorithms on four different security datasets for anomaly detection. Results demonstrate that these techniques have a role to play for log abstraction in the form of extracting syntactic features which improves the identification of anomalous minority classes, specifically in homogeneous security datasets.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43217183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-preserving Web single sign-on: Formal security analysis and design
G. Schmitz
Pub Date: 2022-02-19 | DOI: 10.1515/itit-2022-0003
Abstract Single sign-on (SSO) systems, such as OpenID and OAuth, allow Web sites to delegate user authentication to third parties, such as Facebook or Google. These systems provide a convenient mechanism for users to log in and ease the burden of user authentication for Web sites. At the same time, by integrating such SSO systems, Web sites make them a crucial part of the security of the modern Web. So far, it has been hard to prove whether Web standards and protocols actually meet their security goals. SSO systems, in particular, need to satisfy strong security and privacy properties. In this thesis, we develop a new systematic approach to rigorously and formally analyze and verify such strong properties with the Web Infrastructure Model (WIM), the most comprehensive model of the Web infrastructure to date. Our analyses reveal severe vulnerabilities in SSO systems that lead to critical attacks against their security and privacy. We propose fixes and formally verify that our proposals are sufficient to establish security. Our analyses, however, also show that even Mozilla’s proposal for a privacy-preserving SSO system does not meet its unique privacy goal. To fill this gap, we use our novel approach to develop a new SSO system, SPRESSO, and formally prove that our system indeed enjoys strong security and privacy properties.
{"title":"Privacy-preserving Web single sign-on: Formal security analysis and design","authors":"G. Schmitz","doi":"10.1515/itit-2022-0003","DOIUrl":"https://doi.org/10.1515/itit-2022-0003","url":null,"abstract":"Abstract Single sign-on (SSO) systems, such as OpenID and OAuth, allow Web sites to delegate user authentication to third parties, such as Facebook or Google. These systems provide a convenient mechanism for users to log in and ease the burden of user authentication for Web sites. Conversely, by integrating such SSO systems, they become a crucial part of the security of the modern Web. So far, it has been hard to prove if Web standards and protocols actually meet their security goals. SSO systems, in particular, need to satisfy strong security and privacy properties. In this thesis, we develop a new systematic approach to rigorously and formally analyze and verify such strong properties with the Web Infrastructure Model (WIM), the most comprehensive model of the Web infrastructure to date. Our analyses reveal severe vulnerabilities in SSO systems that lead to critical attacks against their security and privacy. We propose fixes and formally verify that our proposals are sufficient to establish security. Our analyses, however, also show that even Mozilla’s proposal for a privacy-preserving SSO system does not meet its unique privacy goal. To fill this gap, we use our novel approach to develop a new SSO system, SPRESSO, and formally prove that our system indeed enjoys strong security and privacy properties.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41626010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling data-centric AI through data quality management and data literacy
Ziawasch Abedjan
Pub Date: 2022-02-18 | DOI: 10.1515/itit-2021-0048
Abstract Data is being produced at an intractable pace. At the same time, there is an insatiable interest in using such data for use cases that span all imaginable domains, including health, climate, business, and gaming. Beyond the novel socio-technical challenges that surround data-driven innovations, there are still open data-processing challenges that impede the usability of data-driven techniques. It is commonly acknowledged that overcoming the heterogeneity of data with regard to syntax and semantics, so that various sources can be combined for a common goal, is a major bottleneck. Furthermore, the quality of such data is always in question, as today’s data science pipelines are highly ad hoc and built without the necessary care for provenance. Finally, quality criteria that go beyond the syntactic and semantic correctness of individual values and also incorporate population-level constraints, such as parity and equal opportunity with regard to protected groups, play an increasingly important role in this process. Traditional research on data integration focused on the post-merger integration of companies, where customer or product databases had to be integrated. While this is often hard enough, today the challenges are aggravated by the fact that ever more stakeholders use data analytics tools to derive domain-specific insights. I call this phenomenon the democratization of data science, a process that is both challenging and necessary. Novel systems need to be user-friendly enough that not only trained database administrators but also less computer-science-savvy stakeholders can handle them. Thus, our research focuses on scalable example-driven techniques for data preparation and curation. Furthermore, we believe that it is important to educate the breadth of society on the implications of a data-driven world and to actively promote the concept of data literacy as a fundamental competence.
{"title":"Enabling data-centric AI through data quality management and data literacy","authors":"Ziawasch Abedjan","doi":"10.1515/itit-2021-0048","DOIUrl":"https://doi.org/10.1515/itit-2021-0048","url":null,"abstract":"Abstract Data is being produced at an intractable pace. At the same time, there is an insatiable interest in using such data for use cases that span all imaginable domains, including health, climate, business, and gaming. Beyond the novel socio-technical challenges that surround data-driven innovations, there are still open data processing challenges that impede the usability of data-driven techniques. It is commonly acknowledged that overcoming heterogeneity of data with regard to syntax and semantics to combine various sources for a common goal is a major bottleneck. Furthermore, the quality of such data is always under question as the data science pipelines today are highly ad-hoc and without the necessary care for provenance. Finally, quality criteria that go beyond the syntactical and semantic correctness of individual values but also incorporate population-level constraints, such as equal parity and opportunity with regard to protected groups, play a more and more important role in this process. Traditional research on data integration was focused on post-merger integration of companies, where customer or product databases had to be integrated. While this is often hard enough, today the challenges aggravate because of the fact that more stakeholders are using data analytics tools to derive domain-specific insights. I call this phenomenon the democratization of data science, a process, which is both challenging and necessary. Novel systems need to be user-friendly in a way that not only trained database admins can handle them but also less computer science savvy stakeholders. Thus, our research focuses on scalable example-driven techniques for data preparation and curation. Furthermore, we believe that it is important to educate the breadth of society on implications of a data-driven world and actively promote the concept of data literacy as a fundamental competence.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42557398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extracting network based attack narratives through use of the cyber kill chain: A replication study
Aaron Weathersby, M. Washington
Pub Date: 2022-02-18 | DOI: 10.1515/itit-2021-0059
Abstract The defense of a computer network requires defenders both to understand when an attack is taking place and to understand the larger strategic goals of their attackers. In this paper we explore this topic through the replication of a prior study, “Extracting Attack Narratives from Traffic Datasets” by Mireles et al. [Athanasiades, N., et al., Intrusion detection testing and benchmarking methodologies, in First IEEE International Workshop on Information Assurance. 2003, IEEE: Darmstadt, Germany]. In their original research, Mireles et al. proposed a framework linking a particular cyber-attack model (the Mandiant Life Cycle Model) and the identification of individual attack signatures into a process that provides higher-level insight into an attacker, in what they termed attack narratives. In our study we both replicate the original authors’ work and move the research forward by integrating many of the suggestions Mireles et al. provided that would have improved their study. Through our analysis, we confirm the concept that attack narratives can provide additional insight beyond the review of individual cyber-attacks. We also build upon one of their suggested areas by exploring their framework through the lens of the Lockheed Martin Cyber Kill Chain. While we found the concept to be novel and potentially useful, we had difficulty replicating the clarity Mireles et al. described. We identify the need for further research into describing additional components of an attack narrative, including the nonlinear nature of cyber-attacks and issues of identity and attribution.
{"title":"Extracting network based attack narratives through use of the cyber kill chain: A replication study","authors":"Aaron Weathersby, M. Washington","doi":"10.1515/itit-2021-0059","DOIUrl":"https://doi.org/10.1515/itit-2021-0059","url":null,"abstract":"Abstract The defense of a computer network requires defenders to both understand when an attack is taking place and understand the larger strategic goals of their attackers. In this paper we explore this topic through the replication of a prior study “Extracting Attack Narratives from Traffic Datasets” by Mireles et al. [Athanasiades, N., et al., Intrusion detection testing and benchmarking methodologies, in First IEEE International Workshop on Information Assurance. 2003, IEEE: Darmstadt, Germany]. In their original research Mireles et al. proposed a framework linking a particular cyber-attack model (the Mandiant Life Cycle Model) and identification of individual attack signatures into a process as to provide a higher-level insight of an attacker in what they termed as attack narratives. In our study we both replicate the original authors work while also moving the research forward by integrating many of the suggestions Mireles et al. provided that would have improved their study. Through our analysis, we confirm the concept that attack narratives can provide additional insight beyond the review of individual cyber-attacks. We also built upon one of their suggested areas by exploring their framework through the lens of Lockheed Martin Cyber Kill Chain. While we found the concept to be novel and potentially useful, we found challenges replicating the clarity Mireles et al. described. In our research we identify the need for additional research into describing additional components of an attack narrative including the nonlinear nature of cyber-attacks and issues of identity and attribution.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46091062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlocking approximation for in-memory computing with Cartesian genetic programming and computer algebra for arithmetic circuits
Saman Froehlich, R. Drechsler
Pub Date: 2022-02-16 | DOI: 10.1515/itit-2021-0042
Abstract ReRAM is a non-volatile memory technology that features low power consumption and high scalability and allows for in-memory computing, making it a promising candidate for future computer architectures. Approximate computing is a design paradigm that aims at reducing the complexity of hardware by trading off accuracy for area and/or delay. In this article, we introduce approximate computing techniques to in-memory computing. We extend existing compilation techniques for the Programmable Logic in-Memory (PLiM) computer architecture by adapting state-of-the-art approximate computing techniques for arithmetic circuits. We use Cartesian Genetic Programming to generate approximate circuits and evaluate them with respect to error metrics using a Symbolic Computer Algebra-based technique. In our experiments, we show that we can outperform state-of-the-art handcrafted approximate adder designs.
{"title":"Unlocking approximation for in-memory computing with Cartesian genetic programming and computer algebra for arithmetic circuits","authors":"Saman Froehlich, R. Drechsler","doi":"10.1515/itit-2021-0042","DOIUrl":"https://doi.org/10.1515/itit-2021-0042","url":null,"abstract":"Abstract With ReRAM being a non-volative memory technology, which features low power consumption, high scalability and allows for in-memory computing, it is a promising candidate for future computer architectures. Approximate computing is a design paradigm, which aims at reducing the complexity of hardware by trading off accuracy for area and/or delay. In this article, we introduce approximate computing techniques to in-memory computing. We extend existing compilation techniques for the Programmable Logic in-Memory (PLiM) computer architecture, by adapting state-of-the-art approximate computing techniques for arithmetic circuits. We use Cartesian Genetic Programming for the generation of approximate circuits and evaluate them using a Symbolic Computer Algebra-based technique with respect to error-metrics. In our experiments, we show that we can outperform state-of-the-art handcrafted approximate adder designs.","PeriodicalId":43953,"journal":{"name":"IT-Information Technology","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46374296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}