
2021 International Carnahan Conference on Security Technology (ICCST): Latest Publications

Investigating Fairness of Ocular Biometrics Among Young, Middle-Aged, and Older Adults
Pub Date: 2021-10-04 DOI: 10.1109/ICCST49569.2021.9717383
Anoop Krishnan, Ali Almadan, A. Rattani
A number of studies suggest bias in face biometrics, i.e., face recognition and soft-biometric estimation methods, across gender, race, and age groups. There is a recent push to investigate the bias of different biometric modalities toward the deployment of fair and trustworthy biometric solutions. Ocular biometrics has received increased attention from academia and industry due to its high accuracy, security, privacy, and ease of use on mobile devices. A 2020 study also suggested the fairness of ocular-based user recognition across males and females. This paper evaluates the fairness of ocular biometrics in the visible spectrum among age groups: young, middle-aged, and older adults. The study is facilitated by the recent large-scale 2020 UFPR ocular biometric dataset, with subjects acquired in the age range of 18–79 years. Experimental results suggest overall equivalent performance of ocular biometrics across gender and age groups in user verification and gender classification. Performance differences were noted for older adults at lower false match rates in user verification and for young adults in age classification. These differences could be attributed to inherent characteristics of the biometric data from these age groups affecting specific applications, which suggests a need for advances in sensor technology and software solutions.
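As a companion to the evaluation protocol described in this abstract, the snippet below is a minimal, hypothetical sketch (not the authors' code) of per-age-group verification metrics: it computes the Equal Error Rate (EER) and the false non-match rate at a fixed false match rate from genuine/impostor similarity scores. The group names, synthetic scores, and the 0.1% FMR operating point are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): per-age-group verification metrics
# computed from genuine/impostor similarity scores. Scores and groups are
# illustrative placeholders, not data from the UFPR dataset.
import numpy as np
from sklearn.metrics import roc_curve

def eer_and_fnmr_at_fmr(genuine, impostor, target_fmr=1e-3):
    """Return EER and FNMR at a fixed FMR, given match scores."""
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
    fmr, tmr, _ = roc_curve(labels, scores)  # fpr acts as the false match rate
    fnmr = 1.0 - tmr
    eer_idx = np.nanargmin(np.abs(fmr - fnmr))
    eer = (fmr[eer_idx] + fnmr[eer_idx]) / 2.0
    # Last operating point whose FMR does not exceed the target.
    fnmr_at_target = fnmr[np.searchsorted(fmr, target_fmr, side="right") - 1]
    return eer, fnmr_at_target

# Illustrative random scores per age group (replace with real comparisons).
rng = np.random.default_rng(0)
groups = {
    "young":  (rng.normal(0.80, 0.10, 5000), rng.normal(0.30, 0.10, 5000)),
    "middle": (rng.normal(0.80, 0.10, 5000), rng.normal(0.30, 0.10, 5000)),
    "older":  (rng.normal(0.75, 0.12, 5000), rng.normal(0.30, 0.10, 5000)),
}
for name, (gen, imp) in groups.items():
    eer, fnmr = eer_and_fnmr_at_fmr(gen, imp, target_fmr=1e-3)
    print(f"{name:>6}: EER={eer:.3%}  FNMR@FMR=0.1%={fnmr:.3%}")
```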
Citations: 4
An Experimental Evaluation on Deepfake Detection using Deep Face Recognition
Pub Date: 2021-10-04 DOI: 10.1109/ICCST49569.2021.9717407
Sreeraj Ramachandran, Aakash Varma Nadimpalli, A. Rattani
Significant advances in deep learning have achieved hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, also known as deepfakes, posing a threat to privacy, democracy, and national security. Most current deepfake detection methods treat the task as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods rely on detecting visual artifacts and temporal or color inconsistencies produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics++ deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. Reported results show a maximum Area Under the Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is 16.6% lower than the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further, on the FaceForensics++ dataset, an AUC of 0.99 and an EER of 2.04% were obtained. Using biometric facial recognition has the advantage of bypassing the need for a large amount of fake data for model training and generalizing better to evolving deepfake creation techniques.
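To make the score-level evaluation concrete, here is a minimal sketch (not the paper's pipeline) of how deepfake detection via face recognition can be scored: cosine similarity between a reference embedding of the claimed subject and a probe embedding serves as the decision score, and AUC/EER are computed over labeled real/fake probes. The 512-dimensional synthetic embeddings stand in for outputs of a pretrained face-recognition network and are purely illustrative.

```python
# Minimal sketch (not the paper's pipeline): score real vs. fake videos via
# similarity to a face-recognition reference embedding, then report AUC / EER.
# Embeddings below are synthetic placeholders for a pretrained network's output.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_probe(reference_embedding, probe_embedding):
    # Higher similarity -> more likely the probe shows the genuine subject (real).
    return cosine_similarity(reference_embedding, probe_embedding)

def evaluate(scores, labels):
    """labels: 1 = real video, 0 = deepfake; scores: similarity to reference."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    eer_idx = np.nanargmin(np.abs(fpr - fnr))
    eer = (fpr[eer_idx] + fnr[eer_idx]) / 2.0
    return auc, eer

# Illustrative synthetic embeddings (replace with outputs of a real model).
rng = np.random.default_rng(1)
ref = rng.normal(size=512)
real_scores = [score_probe(ref, ref + rng.normal(scale=0.3, size=512)) for _ in range(200)]
fake_scores = [score_probe(ref, rng.normal(size=512)) for _ in range(200)]
scores = np.array(real_scores + fake_scores)
labels = np.array([1] * 200 + [0] * 200)
auc, eer = evaluate(scores, labels)
print(f"AUC={auc:.3f}  EER={eer:.2%}")
```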
Citations: 14
Resilience-Based Performance Measures for Next-Generation Systems Security Engineering
Pub Date: 2021-10-01 DOI: 10.1109/ICCST49569.2021.9717388
Adam D. Williams, Thomas Adams, Jamie Wingo, G. Birch, S. Caskey, Elizabeth S. Fleming, T. Gunda
Performance measures commonly used in systems security engineering tend to be static and linear, and they have limited utility in addressing challenges to security performance from increasingly complex risk environments, adversary innovation, and disruptive technologies. Leveraging key concepts from resilience science offers an opportunity to advance next-generation systems security engineering to better describe the complexities, dynamism, and nonlinearity observed in security performance, particularly in response to these challenges. This article introduces a multilayer network model and a modified Continuous Time Markov Chain model that explicitly capture interdependencies in systems security engineering. The results and insights from a multilayer network model of security for a hypothetical nuclear power plant show how network-based metrics can incorporate resilience concepts into performance metrics for next-generation systems security engineering.
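As a rough illustration of the Markov-chain ingredient mentioned in the abstract (not the paper's actual model), the sketch below sets up a small continuous-time Markov chain over made-up security states and solves for its stationary distribution; the states and transition rates are placeholder assumptions.

```python
# Minimal sketch (not the paper's model): steady-state analysis of a small
# continuous-time Markov chain over illustrative security states. The states
# and transition rates are made-up placeholders.
import numpy as np

states = ["secure", "degraded", "compromised", "recovering"]
# Generator matrix Q: off-diagonal entries are transition rates (per unit time);
# each diagonal entry makes its row sum to zero.
Q = np.array([
    [-0.30,  0.20,  0.10,  0.00],   # secure -> degraded / compromised
    [ 0.00, -0.50,  0.30,  0.20],   # degraded -> compromised / recovering
    [ 0.00,  0.00, -0.40,  0.40],   # compromised -> recovering
    [ 0.60,  0.00,  0.00, -0.60],   # recovering -> secure
])

# Stationary distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

for s, p in zip(states, pi):
    print(f"{s:>12}: {p:.3f}")
```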
Citations: 0