
Latest publications from the 2012 IEEE International Carnahan Conference on Security Technology (ICCST)

Department of Defense Instruction 8500.2 “Information Assurance (IA) Implementation:” A retrospective
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393557
P. Campbell
From the time of its publication on February 6, 2003, the Department of Defense Instruction 8500.2 “Information Assurance (IA) Implementation” (DoDI 8500.2) has provided the definitions and controls that form the basis for IA across the DoD. This is the document to which compliance has been mandatory. For over 9 years, as the world of computer security has swirled through revision after revision and upgrade after upgrade, moving, for example, from DITSCAP to DIACAP, this instruction has remained unrevised, in its original form. As this venerable instruction now nears end of life it is appropriate that we step back and consider what we have learned from it and what its place is in context. In this paper we first review the peculiar structure of DoDI 8500.2, including its attachments, its “Subject Areas,” its “baseline IA levels,” its implicit use of type, signatures (full, half, left, and right), and signature patterns, along with span, and class. To provide context and contrast we briefly present three other control sets, namely (1) the DITSCAP checklists that preceded DoDI 8500.2, (2) the up and coming NIST 800-53 that it appears will follow DoDI 8500.2, and (3) Cobit from the commercial world. We then compare the scope of DoDI 8500.2 with those three control sets. The paper concludes with observations concerning DoDI 8500.2 and control sets in general.
Citations: 10
Tracking formants in spectrograms and its application in speaker verification
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393541
J. Leu, Liang-tsair Geeng, C. Pu, Jyh-Bin Shiau
Formants are the most visible features in spectrograms, and they also hold the most valuable speech information. Traditionally, formant tracks are found by first finding formant points in individual frames; the formant points in neighboring frames are then joined together to form tracks. In this paper we present a formant tracking approach based on image processing techniques. Our approach is to first find the running directions of the formants in a spectrogram. We then smooth the spectrogram along the directions of the formants to produce formants that are more continuous and stable. Next we perform ridge detection to find formant track candidates in the spectrogram. After removing tracks that are too short or too weak, we fit the remaining tracks with 2nd-degree polynomial curves to extract formants that are both smooth and continuous. Besides extracting thin formant tracks, we also extract formant tracks with width. These thick formants indicate not only the locations of the formants but also their width. Using the voices of 70 people, we conducted experiments to test the effectiveness of the thin formants and the thick formants when used in speaker verification. Using only one sentence (6 to 10 words, 3 seconds in length) for comparison, the thin formants and the thick formants achieve 88.3% and 93.8% accuracy in speaker verification, respectively. When the number of sentences for comparison increased to seven, the accuracy rates improved to 93.8% and 98.7%, respectively.
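The final step the abstract describes — fitting the surviving track candidates with 2nd-degree polynomial curves — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, the synthetic track, and the `min_len` cutoff are assumptions.

```python
import numpy as np

def fit_formant_track(times, freqs, min_len=5):
    """Fit a 2nd-degree polynomial to a candidate formant track.

    Returns polynomial coefficients (highest degree first), or None
    when the track is too short to keep (short/weak tracks are dropped).
    """
    times = np.asarray(times, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    if times.size < min_len:
        return None
    return np.polyfit(times, freqs, deg=2)

# Noisy candidate points along a curved track: f(t) = 500 + 40t - 2t^2
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
f = 500 + 40 * t - 2 * t**2 + rng.normal(0, 5, t.size)

coeffs = fit_formant_track(t, f)
smooth_track = np.polyval(coeffs, t)  # smooth, continuous formant track
```

Evaluating the fitted polynomial at every frame yields a track that is smooth and continuous by construction, which is the property the paper needs before computing accuracy.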
Citations: 0
Evaluation of physical protection system effectiveness
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393532
Z. Vintr, M. Vintr, J. Malach
The ability of a physical protection system (PPS) to withstand a possible attack and prevent an adversary from achieving his objectives is generally characterized as PPS effectiveness. The evaluation of effectiveness then serves as a basis for assessing whether a given PPS meets its intended aim. The article presents selected results of an extensive analysis of the present state of PPS effectiveness evaluation. The study determined the current state of development of PPS effectiveness evaluation theory, as well as the possibilities for practical application of known algorithms, analyses, models, software products, tests, and exercises that might be used for evaluating PPS effectiveness. Based on the results of the analysis, the authors propose several ways in which these methods can be developed further.
Citations: 13
Super-resolution processing of the partial pictorial image of the single pictorial image which eliminated artificiality
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393582
Yuichiro Yamada, Daisuke Sasagawa
Recently, the number of security cameras (CCTV) installed for crime prevention has increased rapidly in Japan with the aim of establishing a safe society. As a result, opportunities to use CCTV footage in criminal investigations have increased. Last year, at the 45th ICCST, we proposed a super-resolution technique using a manual sub-pixel shifting process. However, if manually processed images are used as evidence in a criminal trial, doubts about artificiality in those images may arise. So that the images can be treated more properly, this paper proposes a new super-resolution technique usable in criminal investigation. The key point of the proposed technique is that the target images are processed automatically, without any artificiality. The new super-resolution technique uses a newly designed algorithm based on the Bilateral Filter. We have developed new application software using this algorithm. We also compare images processed by the proposed technique with ones processed by the manual sub-pixel shifting process in order to evaluate its effectiveness. In conclusion, any person can obtain the same result by using this technique. Doubts will not arise if images processed by the proposed method are used as evidential images in criminal trials.
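The bilateral filter at the heart of the proposed algorithm averages neighbouring pixels with a spatial Gaussian weighted by an intensity Gaussian, so noise is smoothed while edges survive. A minimal pure-NumPy sketch of the filter itself (the function, parameters, and toy step-edge image are illustrative, not from the paper):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Minimal bilateral filter: weight each neighbour by a spatial
    Gaussian times an intensity-difference Gaussian, then normalize."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: the filter flattens the noise but keeps the edge sharp,
# which is why it suits evidence images better than plain Gaussian blur.
rng = np.random.default_rng(1)
step = np.hstack([np.zeros((8, 8)), 200 * np.ones((8, 8))])
noisy = step + rng.normal(0, 5, step.shape)
filtered = bilateral_filter(noisy)
```

Because pixels across the edge differ by far more than `sigma_r`, their weights vanish and the edge is preserved — the edge-preserving behaviour that makes the filter attractive for forensic super-resolution.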
Citations: 0
CPA performance comparison based on Wavelet Transform
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393559
Aesun Park, Dong‐Guk Han, J. Ryoo
Correlation Power Analysis (CPA) is a very effective attack method for finding secret keys using the statistical features of power consumption signals from cryptosystems. However, the power consumption signal of the encryption device is greatly affected or distorted by noise arising from peripheral devices. When a side channel attack is carried out, this distorted signal, which is affected by noise and time inconsistency, is the major factor that reduces the attack performance. A signal processing method based on the Wavelet Transform (WT) has been proposed to enhance the attack performance. Selecting the decomposition level and the wavelet basis is very important because the CPA performance based on the WT depends on these two factors. In this paper, the CPA performance, in terms of noise reduction and the transform domain, is compared and analyzed from the viewpoint of attack time and the minimum number of signals required to find the secret key. In addition, methods for selecting the decomposition level and the wavelet basis using the features of power consumption are proposed, and validated through experiments.
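The core of CPA as the abstract describes it — correlating the hypothetical leakage for each key guess against the measured power traces — can be sketched on simulated data. This toy uses a Hamming-weight leakage model on `p XOR k` and omits the wavelet preprocessing the paper studies; the key value, noise level, and trace count are invented for illustration.

```python
import numpy as np

# Hamming-weight lookup table for all byte values (a common leakage model).
HW = np.array([bin(x).count("1") for x in range(256)])

rng = np.random.default_rng(42)
true_key = 0x3C
n = 1500
plaintexts = rng.integers(0, 256, n)
# Simulated power traces: leakage of (plaintext XOR key) plus Gaussian noise.
traces = HW[plaintexts ^ true_key] + rng.normal(0, 1.0, n)

def cpa_recover_key(plaintexts, traces):
    """Return the key guess whose hypothetical leakage has the highest
    absolute Pearson correlation with the measured traces."""
    best_g, best_r = None, -1.0
    for g in range(256):
        hyp = HW[plaintexts ^ g]
        r = abs(np.corrcoef(hyp, traces)[0, 1])
        if r > best_r:
            best_g, best_r = g, r
    return best_g

recovered = cpa_recover_key(plaintexts, traces)  # equals true_key
```

Stronger noise widens the gap the paper targets: denoising the traces (e.g. with a wavelet transform) raises the correct guess's correlation peak and reduces the minimum number of traces needed.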
Citations: 5
CCTV Operator Performance Benchmarking
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393580
S. Rankin, N. Cohen, K. MacLennan-Brown, K. Sage
A range of automated video analytics systems (VAS) is increasingly being used to improve efficiency and effectiveness within CCTV control rooms. Their role is to manage the large volumes of surveillance data that are constantly generated and to sift the data in real time to detect incidents or activities of specific interest. The CCTV Operator Performance Benchmarking project investigated, through commissioning operator trials, the performance of human operators on detection and tracking tasks and the factors that affect the performance of human operators. Human performance can then be compared against that obtained from automated systems. Through the development of publicly available guidance documents, CAST hopes to improve the benefits brought by VAS to the control room environment, and to ensure they are deployed in an appropriate manner. The paper will discuss the trials and explore the results obtained. It will discuss the factors that can have both a positive and detrimental effect on operators' performance and the appropriate environments where the introduction of VAS can bring great benefits.
Citations: 12
Normalization and feature extraction on ear images
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393543
E. González, L. Álvarez, L. Mazorra
Ear image analysis is an emerging biometric application. A method is proposed for normalizing ear images and extracting from them a set of measurable features (a feature vector) that can be used to identify their owner. Identification is made by comparing the feature vector of the input image with the feature vectors of all images in the database we work with. The feature vector is based on the ear contours. One important goal of this paper is to identify the most significant areas of the ear contour for identification purposes. Another important contribution of the paper is the combination of active contour techniques with ovoid-model ear fitting (used to normalize ear features) and a highly accurate invariant approach to internal and external ear contours. Ear geometry is characterized using a set of distances to external and internal contour points. This set of distances, along with six ovoid parameters, is taken as the feature vector of the image. To test the method, a new ear image database has been created. The proposed method identifies front-parallel views well, even when the distance of the individual to the camera or the camera lens varies.
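Once a feature vector (contour distances plus six ovoid parameters) has been extracted, identification reduces to comparing it against every enrolled vector. A minimal nearest-neighbour sketch — the names, dimensions, and values below are made up for illustration and are not from the paper:

```python
import numpy as np

def identify(query_vec, database):
    """Nearest-neighbour identification: return the enrolled id whose
    feature vector lies closest (Euclidean distance) to the query."""
    best_id, best_d = None, np.inf
    for person_id, vec in database.items():
        d = np.linalg.norm(query_vec - vec)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id

# Toy enrolled vectors: contour distances followed by ovoid parameters.
db = {
    "alice": np.array([4.1, 5.0, 6.2, 1.8, 0.9, 2.4]),
    "bob":   np.array([3.2, 4.4, 7.0, 2.1, 1.1, 2.0]),
}
query = np.array([4.0, 5.1, 6.1, 1.9, 0.9, 2.5])  # a noisy view of "alice"
```

Because the ovoid fitting normalizes scale and pose first, a plain distance in feature space is a plausible matching rule for such vectors.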
Citations: 15
Communication among incident responders — A study
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393555
Brett C. Tjaden, Robert Floodeen
Responding to some future incident might require significant cooperation by multiple teams or organizations within an incident response community. To study the effectiveness of that cooperation, the Carnegie Mellon® Software Engineering Institute (SEI) conducted a study using a group of volunteer, autonomous incident response organizations. These organizations completed special SEI-designed tasks that required them to work together. The study identified three factors as likely to help or hinder the cooperation of incident responders: being prepared, being organized, and following incident response best practices. This technical note describes those factors and offers recommendations for implementing each one.
Citations: 1
Video based system for railroad collision warning
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393573
J. A. Uribe, Luis Fonseca, J. Vargas
Autonomous systems can assist humans in the important task of safe driving. Such systems can warn people about possible risks, take action to avoid accidents, or guide the vehicle without human supervision. In railway scenarios, a camera in front of the train can help drivers identify obstacles or strange objects that may pose a danger to the route. Image processing in these applications is not easy to perform: the changing conditions create scenes where the background is hard to detect, lighting varies, and processing must be fast. This article describes a first approximation to a solution in which two complementary approaches are followed for detecting and tracking obstacles in videos captured from the train driver's perspective. The first strategy is a simple frame-based approach in which every video frame is analyzed using the Hough transform to detect the rails. Along every rail a systematic search is done to detect obstacles that can be dangerous for the train's course. The second approach uses consecutive frames to detect the trajectories of moving objects. By analyzing the sparse optical flow, candidate objects are tracked and their trajectories computed in order to determine their possible route to collision. To test the system we used videos in which preselected fixed and moving obstacles were superimposed using the chroma-key effect. The system has shown real-time performance in detecting and tracking objects. Future work includes testing the system in real scenarios and validation under changing weather conditions.
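The first strategy — detecting rails with the Hough transform — can be illustrated with a minimal accumulator-based implementation on a synthetic edge image. The image, bin sizes, and function name are invented for the sketch; a real system would use an optimized library routine on actual edge maps.

```python
import numpy as np

def strongest_line(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel votes into (rho, theta)
    bins; return (rho, theta in degrees) of the strongest straight line."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # bound on |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Line parameterization: rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, float(np.rad2deg(thetas[t]))

# Synthetic edge image with one vertical "rail" at x = 12.
edges = np.zeros((40, 30), dtype=int)
edges[:, 12] = 1
rho, theta_deg = strongest_line(edges)  # rho = 12, theta = 0 degrees
```

All 40 pixels of the vertical rail vote into the same (rho = 12, theta = 0) bin, so the rail dominates the accumulator; in the paper's setting, the obstacle search is then restricted to the detected rail lines.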
Citations: 21
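The frame-based strategy in the abstract above (a Hough transform to find the rails, then a systematic search for obstacles along each detected line) can be illustrated with a toy in pure Python. This is a sketch on synthetic edge points, not the authors' implementation: a real pipeline would run an edge detector and a library routine such as OpenCV's `HoughLinesP` on each frame, and all function names below are hypothetical.

```python
import math
from collections import defaultdict

def hough_strongest_line(points, theta_steps=180):
    """Vote in (rho, theta) space and return the best-supported line.

    Each edge point (x, y) votes for every line rho = x*cos(t) + y*sin(t)
    passing through it; the accumulator cell with the most votes is taken
    to be the rail.
    """
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / theta_steps, votes

def obstacles_near_line(candidates, rho, theta, tol=1.5):
    """Systematic search: keep candidates within distance tol of the rail."""
    return [(x, y) for x, y in candidates
            if abs(x * math.cos(theta) + y * math.sin(theta) - rho) <= tol]

# A vertical "rail" at x = 5 and two candidate blobs, one sitting on the rail.
rail_points = [(5, y) for y in range(10)]
rho, theta, votes = hough_strongest_line(rail_points)
on_track = obstacles_near_line([(5, 4), (20, 4)], rho, theta)
```

Only the blob at (5, 4) survives the distance test, which is the per-rail obstacle search the abstract describes in miniature.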
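The second, motion-based strategy tracks candidate objects over consecutive frames and extrapolates their trajectories toward the track. A minimal sketch of that extrapolation step follows; the sparse optical flow itself (e.g. Lucas–Kanade) is assumed to have already produced the per-frame centroids, and the constant-velocity model and parameter names are assumptions for illustration only.

```python
def mean_velocity(track):
    """Average per-frame displacement of a tracked centroid [(x, y), ...]."""
    n = len(track)
    return ((track[-1][0] - track[0][0]) / (n - 1),
            (track[-1][1] - track[0][1]) / (n - 1))

def collision_course(track, rail_x, horizon=10, tol=2.0):
    """Linearly extrapolate the track and flag it if it reaches the rail.

    rail_x is the (assumed constant) horizontal image position of the rail;
    horizon is how many future frames to look ahead.
    """
    vx, vy = mean_velocity(track)
    x, y = track[-1]
    return any(abs(x + vx * k - rail_x) <= tol for k in range(1, horizon + 1))

approaching = [(0.0, 5.0), (2.0, 5.0), (4.0, 5.0)]   # drifting toward x = 10
receding = [(0.0, 5.0), (-2.0, 5.0), (-4.0, 5.0)]    # moving away from it
```

An object drifting toward the rail is flagged within the look-ahead horizon, while one moving away is not; a real system would of course use a richer motion model and camera geometry.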
State space blow-up in the verification of secure smartcard interoperability
Pub Date : 2012-12-31 DOI: 10.1109/CCST.2012.6393546
M. Talamo, M. Galinium, C. Schunck, F. Arcieri
Smartcards are used in a wide range of applications including electronic (e-) driving licenses, e-identity cards, e-payments, e-health cards, and digital signatures. Nevertheless secure smartcard interoperability has remained a significant challenge. Currently the secure operation of smartcards is certified (e.g. through the Common Criteria) for a specific and closed environment that does not comprise the presence of other smartcards and their corresponding applications. To enable secure smartcard interoperability one must, however, explicitly consider settings in which different smartcards interact with their corresponding applications, i.e. not in isolation. Consequently the interoperability problem is only insufficiently addressed in security verification processes. In an ideal scenario one should be able to certify that introducing a new type of smartcard into an environment in which several smartcards safely interoperate will have no detrimental side-effects for the security and interoperability of the existing system as well as for the new smartcard and its associated applications. In this work, strong experimental evidence is presented demonstrating that such certification cannot be provided through common model checking approaches for security verification due to state space blow-up. Furthermore it is shown how the state space blow-up can be prevented by employing a verification protocol which, by taking the results of the Common Criteria certification into account, avoids checking any transitions that occur after an illegal transition has been detected.
Smartcards are used in a wide range of applications, including electronic (e-) driving licenses, e-identity cards, e-payments, e-health cards, and digital signatures. Nevertheless, secure smartcard interoperability has remained a significant challenge. Currently, the secure operation of smartcards is certified (e.g. through the Common Criteria) for a specific and closed environment that does not comprise the presence of other smartcards and their corresponding applications. To enable secure smartcard interoperability, however, one must explicitly consider settings in which different smartcards interact with their corresponding applications, i.e. not in isolation. Consequently, the interoperability problem is only insufficiently addressed in security verification processes. In an ideal scenario, one should be able to certify that introducing a new type of smartcard into an environment in which several smartcards safely interoperate will have no detrimental side effects for the security and interoperability of the existing system, or for the new smartcard and its associated applications. In this work, strong experimental evidence is presented demonstrating that such certification cannot be provided through common model-checking approaches for security verification, due to state space blow-up. Furthermore, it is shown how the blow-up can be prevented by employing a verification protocol which, by taking the results of the Common Criteria certification into account, avoids checking any transitions that occur after an illegal transition has been detected.
{"title":"State space blow-up in the verification of secure smartcard interoperability","authors":"M. Talamo, M. Galinium, C. Schunck, F. Arcieri","doi":"10.1109/CCST.2012.6393546","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393546","url":null,"abstract":"Smartcards are used in a wide range of applications including electronic (e-) driving licenses, e-identity cards, e-payments, e-health cards, and digital signatures. Nevertheless secure smartcard interoperability has remained a significant challenge. Currently the secure operation of smartcards is certified (e.g. through the Common Criteria) for a specific and closed environment that does not comprise the presence of other smartcards and their corresponding applications. To enable secure smartcard interoperability one must, however, explicitly consider settings in which different smartcards interact with their corresponding applications, i.e. not in isolation. Consequently the interoperability problem is only insufficiently addressed in security verification processes. In an ideal scenario one should be able to certify that introducing a new type of smartcard into an environment in which several smartcards safely interoperate will have no detrimental side-effects for the security and interoperability of the existing system as well as for the new smartcard and its associated applications. In this work, strong experimental evidence is presented demonstrating that such certification cannot be provided through common model checking approaches for security verification due to state space blow-up. 
Furthermore it is shown how the state space blow-up can be prevented by employing a verification protocol which, by taking the results of the Common Criteria certification into account, avoids checking any transitions that occur after an illegal transition has been detected.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126033072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
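The countermeasure this abstract describes, stopping exploration once an illegal transition has been detected instead of exhaustively model-checking everything reachable beyond it, can be illustrated with a toy labelled transition system. This is a schematic breadth-first exploration under assumed state and label names, not the authors' verification protocol.

```python
from collections import deque

def explore(transitions, start, illegal_label, prune=True):
    """BFS over a labelled transition system {state: [(label, next_state)]}.

    With prune=True, a successor reached via an illegal transition is
    recorded as a violation but never expanded, so the (possibly huge)
    state space behind it is not visited.
    """
    seen, violations = {start}, []
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for label, nxt in transitions.get(state, []):
            if label == illegal_label:
                violations.append((state, label, nxt))
                if prune:
                    continue  # do not explore beyond the violation
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen, violations

# Tiny system: one illegal step from 's0' opens a long chain of extra states.
lts = {'s0': [('select_applet', 's1'), ('illegal', 'x0')]}
lts.update({f'x{i}': [('step', f'x{i+1}')] for i in range(100)})

pruned, viol = explore(lts, 's0', 'illegal', prune=True)
full, _ = explore(lts, 's0', 'illegal', prune=False)
```

The pruned run visits two states while still reporting the violation; the unpruned run walks the entire chain behind the illegal step, which is the blow-up the paper measures at realistic scale.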
Journal
2012 IEEE International Carnahan Conference on Security Technology (ICCST)