Competence measure in social networks
Pub Date: 2017-12-07 | DOI: 10.1109/CCST.2017.8167845
S. Albladi, G. Weir
The current research aims to gain insight into user competence in detecting security threats in the context of online social networks (OSNs) and investigates the multidimensional space that determines this competence level. The role of user competence and its dimensions in facilitating the detection of online threats remains a controversial topic in the information security field. The dimensions used to measure the concept are self-efficacy, security awareness, privacy awareness, and cybercrime experience. The scales used to measure these factors can determine the level of user competence in evaluating risks associated with social network usage. The measurement scales employed here have been validated using an item-categorization approach that, to our knowledge, has never before been used in information security research. The results of this study provide evidence for the suitability and validity of the user competence dimensions and their associated measurement scales.
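As a concrete illustration of how the four dimensions could be combined into a single competence level, the following sketch averages per-dimension subscale scores. It is a hypothetical scoring scheme, assuming Likert-style items and equal dimension weights; the paper validates the scales themselves rather than prescribing this aggregation.

```python
# A minimal sketch, assuming Likert-style item responses per dimension;
# the paper's item-categorization validation procedure is not shown.
import numpy as np

DIMENSIONS = ("self_efficacy", "security_awareness",
              "privacy_awareness", "cybercrime_experience")

def competence_score(responses: dict[str, np.ndarray]) -> float:
    """Average the per-dimension subscale means into one competence level.

    responses maps each dimension to one respondent's item scores,
    e.g. 5-point Likert answers; equal weighting is an assumption.
    """
    subscale_means = [responses[d].mean() for d in DIMENSIONS]
    return float(np.mean(subscale_means))
```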
{"title":"Competence measure in social networks","authors":"S. Albladi, G. Weir","doi":"10.1109/CCST.2017.8167845","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167845","url":null,"abstract":"The current research aims to gain insight on user competence in detecting security threats in the context of online social networks (OSNs) and investigates the multidimensional space that determines this user competence level. The role of user competence and its dimensions in facilitating the detection of online threats is still a controversial topic in the information security field. The dimensions used to measure the concept are self-efficacy, security awareness, privacy awareness, and cybercrime experience. The scales used to measure those factors can determine the level of user competence in evaluating risks associated with social network usage. The measurement scales employed here have been validated using an item-categorization approach that, to our knowledge, has never before been used in information security research. The result of this study provides evidence for the suitability and validity of the user competence dimensions and associated measurement scales.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133824631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stability of a dynamic biometric signature created on various devices
Pub Date: 2017-10-26 | DOI: 10.1109/CCST.2017.8167814
V. Smejkal, J. Kodl, L. Sieger, Frantisek Hortai, P. Tesar
The paper directly follows on from prior research on the dynamic biometric signature (DBS): its properties, security, resistance to forgery, and stability. In our experiments, we used all the available pads produced by Signotec, which differ from each other in design, size of the signature field, resolution, sampling rate, and even the scanning method used: a regular pen or a special pen based on ERT (Electromagnetic Resonance Technology). A less heterogeneous sample was used than in the previous cases, as the objective of the experiments was to demonstrate a potential change in the DBS connected with the use of a different device; nevertheless, the sample is large enough to be statistically representative. The results showed that, irrespective of the device used, the stability of the scanned dynamic biometric signature was high for each person. Signature variability did not differ significantly between devices for individual people. It was again confirmed that using the first signature as a “trial”, not included in the results, reduces signature variability for each participant.
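To make the notion of signature variability concrete, the sketch below computes the mean pairwise dynamic-time-warping (DTW) distance among one person's signatures captured on one pad. DTW is a common choice for comparing pen trajectories, but the paper's own comparison metric is not specified here; the trajectory shape (x, y, pressure) and the skip-first-trial option are illustrative assumptions mirroring the abstract.

```python
# A minimal sketch (not the authors' method) of how per-person signature
# variability on different pads could be quantified. Signatures are assumed
# to be sampled pen trajectories of shape (T, 3): x, y and pressure.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two variable-length signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def variability(signatures: list[np.ndarray], skip_first: bool = True) -> float:
    """Mean pairwise DTW distance over one person's signatures on one device.

    skip_first drops the "trial" signature, which the paper reports
    reduces variability for each participant.
    """
    sigs = signatures[1:] if skip_first else signatures
    dists = [dtw_distance(sigs[i], sigs[j])
             for i in range(len(sigs)) for j in range(i + 1, len(sigs))]
    return float(np.mean(dists))
```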
{"title":"Stability of a dynamic biometric signature created on various devices","authors":"V. Smejkal, J. Kodl, L. Sieger, Frantisek Hortai, P. Tesar","doi":"10.1109/CCST.2017.8167814","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167814","url":null,"abstract":"The paper directly follows on from the prior research on the dynamic biometric signature (DBS), its properties, security, its resistance to forgery, and its stability. In our experiments, we used all the available pads produced by Signotec, which differ from each other in terms of their design, the size of the signature field, resolution, sampling rate, and even the scanning method used — a regular pen or a special pen using the ERT (Electromagnetic Resonance Technology). A less heterogenous sample was used than in the previous cases, as the objective of the experiments was to demonstrate a potential change in the DBS connected with the use of a different device, nevertheless the size of the sample means it is sufficiently statistically representative. The results showed that irrespective of the device used, the stability of scanning of the dynamic biometric signature was high for each person. The signature variability did not significantly differ between the devices for individual people. Once again it was confirmed that the use of the first signature as a “trial”, not included in the results, reduces the signature variability for each participant.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124753925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECG biosignals in biometric recognition
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167817
Jorge Sanchez-Casanova, Antonio Miranda-Escalada, R. Sánchez-Reillo, Pablo Bartolome-Molina
In this paper, the authors survey the current state of the art, noting the achievements and gaps in published work. Most of the gaps relate to the testing data used, and therefore to the reliability of the reported results. With this in mind, the paper not only reviews the literature but also describes the authors' efforts in developing a solution that could demonstrate the real potential of this biometric modality.
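For background on the modality itself, the sketch below shows the common heartbeat-template pipeline (R-peak detection, beat segmentation, template averaging) that much of the surveyed literature builds on. The sampling rate, window lengths, and correlation matcher are illustrative assumptions, not the authors' solution.

```python
# A minimal sketch, assuming a typical ECG-biometrics pipeline (not
# necessarily the authors'): detect R-peaks, cut the signal into
# fixed-length heartbeats around each peak, and average them into a
# per-subject template that a matcher can compare against.
import numpy as np
from scipy.signal import find_peaks

def heartbeat_template(ecg: np.ndarray, fs: int = 250) -> np.ndarray:
    """Average heartbeat centred on detected R-peaks; fs is the sampling rate in Hz."""
    # R-peaks are the dominant local maxima; enforce a refractory period
    # of 0.4 s so at most one peak is picked per beat.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 90))
    half = int(0.3 * fs)  # take 300 ms either side of each R-peak
    beats = [ecg[p - half:p + half] for p in peaks
             if p - half >= 0 and p + half <= len(ecg)]
    return np.mean(beats, axis=0)

def match_score(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Simple correlation-based similarity between two heartbeat templates."""
    a = (template_a - template_a.mean()) / template_a.std()
    b = (template_b - template_b.mean()) / template_b.std()
    return float(np.mean(a * b))
```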
{"title":"ECG biosignals in biometric recognition","authors":"Jorge Sanchez-Casanova, Antonio Miranda-Escalada, R. Sánchez-Reillo, Pablo Bartolome-Molina","doi":"10.1109/CCST.2017.8167817","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167817","url":null,"abstract":"In this paper authors have studied the current state of the art, noting the achievements and gaps existing in published works. Most of the gaps are related to the testing data used, and therefore the reliability of the results obtained. With this in mind, the paper not only covers such review of the literature, but also the efforts of the authors in developing a solution that could demonstrate the real potential of this biometric modality.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116931737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-size invariant anomaly detection in video-surveillance
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167826
Juan C. Sanmiguel, J. Sanchez, Luis Caro Campos
Nowadays, there is a growing demand for automated video-based surveillance systems due to increased security concerns. Anomaly detection is a popular application in this area, where anomalous events of interest are defined as observed behavior that stands out from its context in space and time. In this paper, we present an approach for the detection of anomalous motion based on the extraction of object-size features, making detection independent of object size and video resolution. The proposed approach relies on a variable spatial window based on object size, which has shown robustness in scenarios with motion of objects of different sizes. We propose a system composed of four building blocks: background subtraction, feature extraction, event modeling, and outlier detection. The approach is evaluated on publicly available datasets containing instances of abandoned objects of different sizes (considered as anomalies). The experiments demonstrate that our approach outperforms the related state of the art on the selected datasets. The proposal can identify anomalies associated with objects of different sizes and motion without increasing the number of false positives.
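The four building blocks can be sketched as follows, using OpenCV's MOG2 background subtractor, connected components for per-object features, a Gaussian event model, and a z-score outlier test. The size normalization shown is only a crude stand-in for the paper's variable spatial window; thresholds and the feature set are illustrative assumptions.

```python
# A minimal sketch of the four building blocks named above, assuming
# OpenCV and a simple z-score outlier test; the paper's actual event
# model and feature set are richer than this illustration.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def frame_features(frame: np.ndarray) -> list[tuple[float, float, float]]:
    """Background subtraction + per-object features normalised by object size."""
    mask = subtractor.apply(frame)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    feats = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < 20:  # drop noise blobs
            continue
        cx, cy = centroids[i]
        # dividing by sqrt(area) scales the position feature with object
        # size: a crude stand-in for the variable spatial window
        feats.append((cx / np.sqrt(area), cy / np.sqrt(area), float(area)))
    return feats

def outlier_scores(history: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Event modelling as a Gaussian over past features; z-score as anomaly score."""
    mu, sigma = history.mean(axis=0), history.std(axis=0) + 1e-6
    return np.abs((current - mu) / sigma).max(axis=1)
```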
{"title":"Object-size invariant anomaly detection in video-surveillance","authors":"Juan C. Sanmiguel, J. Sanchez, Luis Caro Campos","doi":"10.1109/CCST.2017.8167826","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167826","url":null,"abstract":"Nowadays, there is a growing demand for automated video-based surveillance systems due to increase security concerns. Anomaly detection is a popular application in this area where anomalous events of interest are defined as observed behavior that stands out from its context in space and time. In this paper, we present an approach for the detection of anomalous motion based on the extraction of object-size features that is independent of object size and video resolution. The proposed approach relies on a variable spatial window based on object size that has shown robustness in scenarios that present motion of objects of different sizes. We propose a system composed of four building blocks: background subtraction, feature extraction, event modeling and outlier detection. The proposed approach is evaluated on publicly available datasets which contain instances of abandoned objects of different sizes (considered as anomalies). The experiments carried out demonstrate that our approach outperforms the related state-of-the-art in the selected datasets. The proposal can identify anomalies associated to objects with different sizes and motion without increasing the number of false positives.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125053614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Malware family identification with BIRCH clustering
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167802
Gregorio Pitolli, Leonardo Aniello, Giuseppe Laurenza, Leonardo Querzoni, R. Baldoni
Identifying families of malware is today considered a fundamental problem in computer security. Correctly mapping a malicious sample to a known family simplifies its analysis and allows experts to focus their efforts only on samples presenting unknown characteristics or behaviours, thus improving the efficiency of the malware analysis process. Grouping malware into families can be performed using widely different approaches, but currently lacks a globally accepted ground truth for comparison. This problem stems from the absence of a formal definition of what a malware family is. As a consequence, researchers have in recent years proposed different methodologies for grouping a dataset of malicious samples into families. Notable examples include solutions that combine the labels of commercial anti-malware software, resolving disagreements by majority voting (e.g., AVclass), and dedicated solutions based on machine learning algorithms (e.g., Malheur). In this paper we first present an evaluation of the quality of two distinct malware family ground-truth datasets. Both include the same set of malware, but one is labelled by AVclass while the other is based on the clusters identified by Malheur. We then propose a novel solution for identifying families of similar samples starting from an unlabelled malware dataset. We leverage features extracted through both static and dynamic analysis, and cluster samples using the BIRCH clustering algorithm. An experimental evaluation shows that BIRCH fits well in the context of malware family identification: it can be tuned to obtain accuracy higher than or comparable to standard clustering algorithms against the ground truths based on AVclass and Malheur. Furthermore, a performance comparison shows that BIRCH stands out for its low clustering time.
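The clustering step maps naturally onto scikit-learn. The sketch below, with illustrative hyperparameters, clusters pre-extracted feature vectors with BIRCH and scores the result against AVclass labels using the adjusted Rand index; the paper's exact features, tuning, and evaluation metric may differ.

```python
# A minimal sketch of the clustering step, assuming feature vectors have
# already been extracted by static/dynamic analysis and AVclass labels
# are available for evaluation; hyperparameter values are illustrative.
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

def cluster_malware(features: np.ndarray, avclass_labels: np.ndarray) -> float:
    """Cluster samples with BIRCH and score agreement with a ground truth."""
    X = StandardScaler().fit_transform(features)
    # threshold and branching_factor control the CF-tree; n_clusters=None
    # keeps the subclusters produced by the tree instead of a global step
    birch = Birch(threshold=0.5, branching_factor=50, n_clusters=None)
    predicted = birch.fit_predict(X)
    # adjusted Rand index measures agreement with the AVclass ground truth
    return adjusted_rand_score(avclass_labels, predicted)
```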
{"title":"Malware family identification with BIRCH clustering","authors":"Gregorio Pitolli, Leonardo Aniello, Giuseppe Laurenza, Leonardo Querzoni, R. Baldoni","doi":"10.1109/CCST.2017.8167802","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167802","url":null,"abstract":"Identifying families of malware is today considered a fundamental problem in the context of computer security. The correct mapping of a malicious sample to a known family simplifies its analysis and allows experts to focus their efforts only on those samples presenting unknown characteristics or behaviours, thus improving the efficiency of the malware analysis process. Grouping malware in families is an activity that can be performed using widely different approaches, but that currently lacks a globally accepted ground truth to be used for comparison. This problem stems from the absence of a formal definition of what a malware family is. As a consequence, in the last few years researchers proposed different methodologies to group a dataset of malicious samples in families. Notable examples include solutions combining labels of commercial anti-malware software, where possible disagreements are solved by majority voting (e.g., AVclass), and dedicated solutions based on machine learning algorithms (e.g., Malheur). In this paper we first present an evaluation to assess the quality of two distinct malware family ground truth datasets. Both include the same set of malware, but one has labels produced by AVclass while the other is based on the clusters identified by Malheur. Then we propose a novel solution for identifying families of similar samples starting from an unlabelled dataset of malware. We leverage features extracted through both static and dynamic analysis, and cluster samples using the BIRCH clustering algorithm. The paper includes an experimental evaluation which shows that BIRCH fits well in the context of malware family identification. Indeed, we prove that BIRCH can be tuned to obtain an accuracy higher than or comparable to standard clustering algorithms, using the ground truths based on AVclass and Malheur. Furthermore, we provide a performance comparison where BIRCH stands out for the low clustering time it provides.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125101751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trustworthy design architecture: Cyber-physical system
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167827
Sung Choi, A. Chavez, Marcos Torres, Cheolhyeon Kwon, Inseok Hwang
Conventional cyber defenses require continual maintenance: virus, firmware, and software updates; costly functional impact tests; and dedicated staff within a security operations center. They also require access to external sources for the latest updates. A whitelisted system, however, is ideally a system that can sustain itself without external inputs. Cyber-Physical Systems (CPSs) have two unique traits: digital commands are physically observable and verifiable, and the possible combinations of commands are limited and finite. These traits, combined with a trust anchor that secures an unclonable digital identity (i.e., a digitally unclonable function [DUF], Patent Application #15/183,454; CodeLock), offer an excellent opportunity to explore defenses built on a whitelisting approach, called the “Trustworthy Design Architecture (TDA).” Significant research challenges remain in defining the physically verifiable whitelists as well as the criteria for cyber-physical traits that can serve as the unclonable identity. One goal of the project is to identify a set of physical and/or digital characteristics that can uniquely identify an endpoint. The measurements must be reliable, reproducible, and trustworthy. Given that adversaries naturally evolve with any defense, the adversary will aim to disrupt or spoof this process. To protect against such disruptions, we provide a unique systems engineering technique that, when applied to CPSs (e.g., nuclear processing facilities, critical infrastructures), sustains a secure operational state without ever needing external information or active inputs from cybersecurity subject-matter experts (i.e., virus updates, IDS scans, patch management, vulnerability updates). We do this by eliminating system dependencies on external sources for protection. Instead, all internal communication is actively sealed and protected with integrity, authenticity, and assurance checks that only cyber identities bound to the physical component can deliver. As CPSs continue to advance (i.e., IoT, drones, ICSs), resilient, maintenance-free solutions are needed to neutralize or reduce cyber risks. TDA is a conceptual systems engineering framework specifically designed for cyber-physical systems that can potentially be maintained and operated without the persistent need for vulnerability or security patch updates.
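The two ingredients the abstract combines, a finite command whitelist and integrity/authenticity checks bound to a device identity, can be illustrated in miniature as follows. This is not the TDA design itself: the HMAC key and the command names below merely stand in for the unclonable identity a DUF would provide and for a plant's real command vocabulary.

```python
# A minimal sketch (not the TDA design) of whitelisted, identity-sealed
# commands: a finite whitelist plus an HMAC check keyed to a per-device
# secret that stands in for the unclonable identity a DUF would provide.
import hashlib
import hmac

COMMAND_WHITELIST = {"valve_open", "valve_close", "pump_start", "pump_stop"}

def seal(command: str, device_key: bytes) -> bytes:
    """Seal a command with an HMAC bound to the device's key."""
    return hmac.new(device_key, command.encode(), hashlib.sha256).digest()

def accept(command: str, tag: bytes, device_key: bytes) -> bool:
    """Accept only whitelisted commands whose seal verifies."""
    if command not in COMMAND_WHITELIST:
        return False  # unknown commands are rejected outright
    expected = hmac.new(device_key, command.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```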
{"title":"Trustworthy design architecture: Cyber-physical system","authors":"Sung Choi, A. Chavez, Marcos Torres, Cheolhyeon Kwon, Inseok Hwang","doi":"10.1109/CCST.2017.8167827","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167827","url":null,"abstract":"Conventional cyber defenses require continual maintenance: virus, firmware, and software updates; costly functional impact tests; and dedicated staff within a security operations center. The conventional defenses require access to external sources for the latest updates. The whitelisted system, however, is ideally a system that can sustain itself freed from external inputs. Cyber-Physical Systems (CPS), have the following unique traits: digital commands are physically observable and verifiable; possible combinations of commands are limited and finite. These CPS traits, combined with a trust anchor to secure an unclonable digital identity (i.e., digitally unclonable function [DUF] — Patent Application #15/183,454; CodeLock), offers an excellent opportunity to explore defenses built on whitelisting approach called “Trustworthy Design Architecture (TDA).” There exist significant research challenges in defining what are the physically verifiable whitelists as well as the criteria for cyber-physical traits that can be used as the unclonable identity. One goal of the project is to identify a set of physical and/or digital characteristics that can uniquely identify an endpoint. The measurements must have the properties of being reliable, reproducible, and trustworthy. Given that adversaries naturally evolve with any defense, the adversary will have the goal of disrupting or spoofing this process. To protect against such disruptions, we provide a unique system engineering technique, when applied to CPSs (e.g., nuclear processing facilities, critical infrastructures), that will sustain a secure operational state without ever needing external information or active inputs from cybersecurity subject-matter experts (i.e., virus updates, IDS scans, patch management, vulnerability updates). We do this by eliminating system dependencies on external sources for protection. Instead, all internal communication is actively sealed and protected with integrity, authenticity and assurance checks that only cyber identities bound to the physical component can deliver. As CPSs continue to advance (i.e., IoTs, drones, ICSs), resilient-maintenance free solutions are needed to neutralize/reduce cyber risks. TDA is a conceptual system engineering framework specifically designed to address cyber-physical systems that can potentially be maintained and operated without the persistent need or demand for vulnerability or security patch updates.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"52 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130521451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CUDA-SSL: SSL/TLS accelerated by GPU
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167848
W. Lee, Xian-Fu Wong, B. Goi, R. Phan
With the advent of cloud computing and IoT, secure communication has become an important aspect of protecting users and service providers from malicious attack. However, the adoption of SSL/TLS is still not widespread, due to the heavy computational requirements of implementing it on the server side. Current solutions often rely on installing costly hardware accelerators to compute the cryptographic algorithms in order to offer a responsive experience to users (e.g., online payment and cloud storage). In this paper, we propose to utilize the GPU as an accelerator for the cryptographic algorithms, which is more cost-effective than a dedicated hardware accelerator. Firstly, we present several techniques that utilize the massively parallel architecture of the GPU to compute block ciphers (AES, Camellia, CAST5, and SEED) and public-key cryptography (RSA). Secondly, we present a novel idea that utilizes the warp shuffle instruction to speed up the implementation of SHA-3. Thirdly, we evaluate the performance of our implementation on a state-of-the-art GPU (Pascal architecture). Through extensive experiments, we show that CUDA-SSL is capable of high-speed cryptographic computation comparable to hardware accelerators, at only a fraction of their cost.
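The reason block ciphers map so well onto GPUs is that parallelisable modes such as CTR make every keystream block an independent computation. The sketch below shows that structure on the CPU with PyCryptodome; the paper's implementation is CUDA kernels, not Python, and the counter layout here is an assumption.

```python
# A minimal CPU-side sketch of CTR mode written as independent per-block
# tasks. Each loop iteration depends only on (key, nonce, block index),
# so a GPU can assign one thread per block: this is the parallelism
# CUDA-SSL exploits for AES, Camellia, CAST5 and SEED.
from Crypto.Cipher import AES  # PyCryptodome

def ctr_encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    """Encrypt plaintext in CTR mode; nonce is assumed to fit in 64 bits."""
    ecb = AES.new(key, AES.MODE_ECB)  # raw block function used per counter
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        # counter block = 64-bit nonce || 64-bit block index
        counter_block = ((nonce << 64) | (i // 16)).to_bytes(16, "big")
        keystream = ecb.encrypt(counter_block)
        chunk = plaintext[i:i + 16]
        out.extend(p ^ k for p, k in zip(chunk, keystream))
    return bytes(out)
```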
{"title":"CUDA-SSL: SSL/TLS accelerated by GPU","authors":"W. Lee, Xian-Fu Wong, B. Goi, R. Phan","doi":"10.1109/CCST.2017.8167848","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167848","url":null,"abstract":"With the advent of Cloud Computing and IoT, secure communication has becoming an important aspect to protect the users and service providers from malicious attack. However, the adoption SSL/TLS is still not popular, due to the heavy computational requirements to implement them on the server side. Current solutions often rely on installing costly hardware accelerator to compute the cryptographic algorithms in order to offer responsive experience to the users (e.g. online payment and cloud storage). In this paper, we proposed to utilize GPU as an accelerator to compute the cryptographic algorithms, which is more cost effective compare to dedicated hardware accelerator. Firstly, we present several techniques to utilize the massively parallel architecture in GPU compute block ciphers (AES, Camelia, CAST5 and SEED) and public key cryptography (RSA). Secondly, we present a novel idea that utilizes warp shuffle instruction to speed up the implementation of SHA-3. Thirdly, we evaluated the performance of our implementation with state of the art GPU (Pascal architecture). Through extensive experiments, we show that CUDA-SSL is capable of achieving high-speed cryptography computation comparable to hardware accelerators, with only a fraction of their cost.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121194860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyber-physical risk management for PV photovoltaic plants
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167813
Alexander W. Miranda, S. Goldsmith
This paper presents a risk assessment method for evaluating grid-connected commercial photovoltaic (PV) plants. Commercial PV plants are heavily dependent on technical information from control systems, some of which are dated relative to modern processors and communications. Through an initial case study of an existing PV plant, this paper explores the cybersecurity posture of a PV plant, examines the vulnerabilities and attack vectors against it, and identifies issues that are unique to its Industrial Control System (ICS) architecture. Finally, the paper presents an initial risk management framework that addresses the cybersecurity findings and best practices.
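A risk management framework of this kind typically ranks findings by likelihood and impact. The sketch below shows such a ranking; the 1-to-5 scoring scale and the example findings are illustrative assumptions, not results from the paper's case study.

```python
# A minimal sketch of a likelihood x impact risk ranking such as the
# framework might produce; categories and scores are illustrative,
# not taken from the paper.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# hypothetical ICS findings of the kind an assessment might surface
findings = [
    Finding("Unauthenticated control commands to inverters", 4, 5),
    Finding("Outdated firmware on the plant data logger", 3, 4),
    Finding("Flat network between ICS and corporate LAN", 3, 5),
]

for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"{f.risk:2d}  {f.name}")
```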
{"title":"Cyber-physical risk management for PV photovoltaic plants","authors":"Alexander W. Miranda, S. Goldsmith","doi":"10.1109/CCST.2017.8167813","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167813","url":null,"abstract":"This paper presents a risk assessment method for evaluating a grid-connected Commercial Photovoltaic (PV) plants. Commercial PV plants are heavily dependent on technical information from control systems, some of which are dated relative to modern processors and communications. Through an initial case study of an existing PV plant, this paper explores the cybersecurity posture of a PV plant, examines the vulnerabilities and attack vectors against a PV plant, and identifies some issues that are unique to its Industrial Control System (ICS) architecture. Finally, the paper presents an initial risk management framework that addresses cybersecurity finding and best practices.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114705254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minutia-based enhancement of fingerprint samples
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167824
Patrick Schuch, Simon-Daniel Schulz, C. Busch
Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. It can be essential, especially for images of low quality. An ideal fingerprint image enhancement should aim to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a Deep Learning model for the task of image enhancement. This work's main contribution is a dedicated cost function, optimized during training, that takes the biometric feature extraction into account. Our approach aims to improve the accuracy and reliability of the biometric feature extraction process: no feature should be missed, and all features should be extracted as precisely as possible. In this way, the loss function forces the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectiveness of the cost function is demonstrated for two different biometric feature extraction algorithms.
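The idea of a feature-aware cost function can be sketched as a weighted sum of a pixel-fidelity term and a minutiae-fidelity term. The formulation below is an assumption for illustration; the paper's actual loss and its integration with the feature extractor are not reproduced here.

```python
# A minimal sketch of a minutia-aware training loss, assuming the
# enhancement network's output, a target image, and predicted/target
# minutiae maps are available; the paper's actual cost function is
# not reproduced here.
import numpy as np

def combined_loss(enhanced: np.ndarray,
                  target_image: np.ndarray,
                  predicted_minutiae: np.ndarray,
                  target_minutiae: np.ndarray,
                  weight: float = 10.0) -> float:
    """Pixel fidelity plus a term penalising missed or imprecise minutiae."""
    pixel_term = float(np.mean((enhanced - target_image) ** 2))
    # weighting the minutiae term pushes the enhancement to preserve
    # exactly the structures the feature extractor depends on
    minutiae_term = float(np.mean((predicted_minutiae - target_minutiae) ** 2))
    return pixel_term + weight * minutiae_term
```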
{"title":"Minutia-based enhancement of fingerprint samples","authors":"Patrick Schuch, Simon-Daniel Schulz, C. Busch","doi":"10.1109/CCST.2017.8167824","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167824","url":null,"abstract":"Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. This can be essential especially for images of low image quality. An ideal fingerprint image enhancement should intend to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a model from Deep Learning for the task of image enhancement. This work's main contribution is a dedicated cost function which is optimized during training The cost function takes into account the biometric feature extraction. Our approach intends to improve the accuracy and reliability of the biometric feature extraction process: No feature should be missed and all features should be extracted as precise as possible. By doing so, the loss function forced the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectivity of the cost function was demonstrated for two different biometric feature extraction algorithms.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134506678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Encrypted computing: Speed, security and provable obfuscation against insiders
Pub Date: 2017-10-01 | DOI: 10.1109/CCST.2017.8167847
Peter T. Breuer, Jonathan P. Bowen, Esther Palomar, Zhiming Liu
Over the past few years we have articulated a theory of ‘encrypted computing’, in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor showing that near-conventional speeds are attainable via this approach. Benchmark performance with the US AES-128 flagship encryption and a 1 GHz clock is now equivalent to a 433 MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how a system based on the processor protects user data from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation at remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) the processor that runs encrypted; (ii) a slightly modified conventional machine-code instruction set architecture with which security is achievable; and (iii) an ‘obfuscating’ compiler that takes advantage of its possibilities. Together these form a three-point system that provably provides cryptographic ‘semantic security’ for user data against the operator and system insiders.
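A toy example conveys the flavour of ‘modified arithmetic’ on encrypted words, though it is emphatically not the authors' cipher (their scheme achieves semantic security; a fixed additive offset does not): encryption adds a secret offset modulo 2^32, and the adder is modified to correct for it, so sums are computed without ever decrypting the operands.

```python
# A toy illustration (not the authors' cipher) of arithmetic modified to
# operate directly on encrypted 32-bit words: E(x) = x + key mod 2^32,
# and the "encrypted adder" corrects for the doubled offset so that
# enc_add(E(x), E(y)) = E(x + y) with no intermediate decryption.
MASK = (1 << 32) - 1

def encrypt(x: int, key: int) -> int:
    return (x + key) & MASK

def decrypt(c: int, key: int) -> int:
    return (c - key) & MASK

def enc_add(ca: int, cb: int, key: int) -> int:
    """Modified addition unit: consumes two ciphertexts, emits a ciphertext."""
    # the real processor embeds the key material inside its arithmetic units
    return (ca + cb - key) & MASK

key = 0xDEADBEEF
a, b = encrypt(7, key), encrypt(35, key)
assert decrypt(enc_add(a, b, key), key) == 42  # the sum stayed encrypted
```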
{"title":"Encrypted computing: Speed, security and provable obfuscation against insiders","authors":"Peter T. Breuer, Jonathan P. Bowen, Esther Palomar, Zhiming Liu","doi":"10.1109/CCST.2017.8167847","DOIUrl":"https://doi.org/10.1109/CCST.2017.8167847","url":null,"abstract":"Over the past few years we have articulated theory that describes ‘encrypted computing’, in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor that shows that near-conventional speeds are attainable via this approach. Benchmark performance with the US AES-128 flagship encryption and a 1GHz clock is now equivalent to a 433MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how user data is protected by a system based on the processor from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation in remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) the processor that runs encrypted; (ii) a slightly modified conventional machine code instruction set architecture with which security is achievable; (iii) an ‘obfuscating’ compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic ‘semantic security’ for user data against the operator and system insiders.","PeriodicalId":371622,"journal":{"name":"2017 International Carnahan Conference on Security Technology (ICCST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115125192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}