Department of Defense Instruction 8500.2 “Information Assurance (IA) Implementation:” A retrospective
P. Campbell
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393557
From the time of its publication on February 6, 2003, Department of Defense Instruction 8500.2, “Information Assurance (IA) Implementation” (DoDI 8500.2), has provided the definitions and controls that form the basis for IA across the DoD. It is the document with which compliance has been mandatory. For over nine years, as the world of computer security has swirled through revision after revision and upgrade after upgrade, moving, for example, from DITSCAP to DIACAP, this instruction has remained unrevised, in its original form. As this venerable instruction now nears its end of life, it is appropriate to step back and consider what we have learned from it and what its place is in context. In this paper we first review the peculiar structure of DoDI 8500.2, including its attachments, its “Subject Areas,” its “baseline IA levels,” its implicit use of type, signatures (full, half, left, and right), and signature patterns, along with span and class. To provide context and contrast we briefly present three other control sets: (1) the DITSCAP checklists that preceded DoDI 8500.2, (2) the upcoming NIST SP 800-53, which appears set to follow DoDI 8500.2, and (3) COBIT from the commercial world. We then compare the scope of DoDI 8500.2 with those three control sets. The paper concludes with observations concerning DoDI 8500.2 and control sets in general.
{"title":"Department of Defense Instruction 8500.2 “Information Assurance (IA) Implementation:” A retrospective","authors":"P. Campbell","doi":"10.1109/CCST.2012.6393557","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393557","url":null,"abstract":"From the time of its publication on February 6, 2003, the Department of Defense Instruction 8500.2 “Information Assurance (IA) Implementation” (DoDI 8500.2) has provided the definitions and controls that form the basis for IA across the DoD. This is the document to which compliance has been mandatory. For over 9 years, as the world of computer security has swirled through revision after revision and upgrade after upgrade, moving, for example, from DITSCAP to DIACAP, this instruction has remained unrevised, in its original form. As this venerable instruction now nears end of life it is appropriate that we step back and consider what we have learned from it and what its place is in context. In this paper we first review the peculiar structure of DoDI 8500.2, including its attachments, its “Subject Areas,” its “baseline IA levels,” its implicit use of type, signatures (full, half, left, and right), and signature patterns, along with span, and class. To provide context and contrast we briefly present three other control sets, namely (1) the DITSCAP checklists that preceded DoDI 8500.2, (2) the up and coming NIST 800-53 that it appears will follow DoDI 8500.2, and (3) Cobit from the commercial world. We then compare the scope of DoDI 8500.2 with those three control sets. The paper concludes with observations concerning DoDI 8500.2 and control sets in general.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114270202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking formants in spectrograms and its application in speaker verification
J. Leu, Liang-tsair Geeng, C. Pu, Jyh-Bin Shiau
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393541
Formants are the most visible features in spectrograms, and they also carry the most valuable speech information. Traditionally, formant tracks are found by first locating formant points in individual frames; the formant points in neighboring frames are then joined together to form tracks. In this paper we present a formant tracking approach based on image processing techniques. Our approach first finds the running directions of the formants in a spectrogram. We then smooth the spectrogram along the directions of the formants to produce formants that are more continuous and stable, and perform ridge detection to find formant track candidates. After removing tracks that are too short or too weak, we fit the remaining tracks with 2nd-degree polynomial curves to extract formants that are both smooth and continuous. Besides extracting thin formant tracks, we also extract formant tracks with width. These thick formants indicate not only the locations of the formants but also their widths. Using the voices of 70 people, we conducted experiments to test the effectiveness of the thin and thick formants when used in speaker verification. Using only one sentence (6 to 10 words, 3 seconds in length) for comparison, the thin formants and the thick formants achieve 88.3% and 93.8% accuracy in speaker verification, respectively. When the number of sentences for comparison is increased to seven, the accuracy rates improve to 93.8% and 98.7%, respectively.
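As a rough illustration of the pipeline described above, the following Python sketch smooths a spectrogram, detects ridge points in each frame, links them across frames, and fits 2nd-degree polynomials. It is a minimal reconstruction under stated assumptions, not the authors' implementation: time-axis Gaussian smoothing stands in for the paper's directional smoothing, and the linking tolerance and track-length threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import argrelmax

def extract_formant_tracks(spec, max_jump=3, min_len=15):
    """spec: 2-D array (freq bins x frames) of spectrogram magnitudes."""
    # Smooth more strongly along time so ridges become continuous and stable
    # (an approximation of the paper's smoothing along formant directions).
    smooth = gaussian_filter(spec, sigma=(1.0, 2.5))
    active, finished = [], []
    for t in range(smooth.shape[1]):
        peaks = argrelmax(smooth[:, t])[0]   # ridge candidates in frame t
        still_active, used = [], set()
        for trk in active:
            f_prev = trk[-1][0]
            cand = [p for p in peaks
                    if abs(p - f_prev) <= max_jump and p not in used]
            if cand:                          # extend track with nearest peak
                p = min(cand, key=lambda p: abs(p - f_prev))
                trk.append((p, t)); used.add(p); still_active.append(trk)
            else:
                finished.append(trk)
        active = still_active + [[(p, t)] for p in peaks if p not in used]
    finished += active
    # Drop tracks that are too short (a strength filter could be added here),
    # then fit each remaining track with a 2nd-degree polynomial.
    fits = []
    for trk in finished:
        if len(trk) >= min_len:
            freqs, times = np.array(trk, dtype=float).T
            fits.append(np.polyfit(times, freqs, deg=2))
    return fits
```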
{"title":"Tracking formants in spectrograms and its application in speaker verification","authors":"J. Leu, Liang-tsair Geeng, C. Pu, Jyh-Bin Shiau","doi":"10.1109/CCST.2012.6393541","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393541","url":null,"abstract":"Formants are the most visible features in spectrograms and they also hold the most valuable speech information. Traditionally, formant tracks are found by first finding formant points in individual frames, then the formants points in neighboring frames are joined together to form tracks. In this paper we present a formant tracking approach based on image processing techniques. Our approach is to first find the running directions of the formants in a spectrogram. Then we perform smoothing on the spectrogram along the directions of the formants to produce formants that are more continuous and stable. Then we perform ridge detection to find formant track candidates in the spectrogram. After removing tracks that are too short or too weak, we fit the remaining tracks with 2nd degree polynomial curves to extract formants that are both smooth and continuous. Besides extracting thin formant tracks, we also extracted formant tracks with width. These thick formants are able to indication not only the locations of the formants but also the width of the formants. Using the voices of 70 people, we conducted experiments to test the effectiveness of the thin formants and the thick formants when they are used in speaker verification. Using only one sentence (6 to 10 words, 3 seconds in length) for comparison, the thin formants and the thick formants are able to achieve 88.3% and 93.8% of accuracy in speaker verification, respectively. When the number of sentences for comparison increased to seven, the accuracy rate improved to 93.8% and 98.7%, respectively.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122275031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of physical protection system effectiveness
Z. Vintr, M. Vintr, J. Malach
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393532
The ability of a physical protection system (PPS) to withstand a possible attack and prevent an adversary from achieving his objectives is generally characterized as PPS effectiveness. This evaluation then serves as a basis for assessing whether a given PPS meets its intended aim. The article presents selected results of an extensive analysis of the present state of PPS effectiveness evaluation. The analysis determines the state of development of PPS effectiveness evaluation theory, as well as the practical applicability of known algorithms, analyses, models, software products, tests, and exercises that might be used to evaluate PPS effectiveness. Based on the results of this analysis, the authors propose several ways in which these methods can be developed further.
{"title":"Evaluation of physical protection system effectiveness","authors":"Z. Vintr, M. Vintr, J. Malach","doi":"10.1109/CCST.2012.6393532","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393532","url":null,"abstract":"The ability of physical protection system (PPS) to withstand a possible attack and prevent an adversary from achieving his objectives is generally characterized as PPS effectiveness. The evaluation of effectiveness serves then as a basis for assessing whether a relevant PPS meets an intended aim. The article presents the selected results of an extensive analysis tackling the present state of PPS system effectiveness evaluation. There has been determined the achieved state of the development of a PPS efficiency evaluation theory, and also the possibilities of practical application of known algorithms, analyses, models, program products, tests and exercises which might be used for evaluating the PPS effectiveness. The authors of the paper have proposed that a couple of ways can be selected to develop these methods more when following the results of the performed analysis.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116626432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Super-resolution processing of the partial pictorial image of the single pictorial image which eliminated artificiality
Yuichiro Yamada, Daisuke Sasagawa
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393582
Recently, the number of security cameras (CCTV) for crime prevention has increased rapidly in Japan for the purpose of establishing a safe society. As a result, CCTV footage is increasingly used in criminal investigations. Last year, at the 45th ICCST, we proposed a super-resolution technique using a manual sub-pixel shifting process. However, if manually processed images are used as evidence in a criminal trial, doubts about artificiality in those images might arise. To treat such images more properly, this paper proposes a new super-resolution technique usable in criminal investigation. The key point of the proposed technique is that the target images are processed automatically, without any artificiality. The technique utilizes a newly designed algorithm based on a bilateral filter, and we have developed new application software using the algorithm. We also compare images processed by the proposed technique with ones processed by the manual sub-pixel shifting process, in order to evaluate the effectiveness of the proposal. In conclusion, anyone can obtain the same result by using this technique, so doubts should not arise when images processed by the proposal are used as evidential images in criminal trials.
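The paper's actual algorithm is not public, so the Python sketch below shows only the general idea under loud assumptions: a deterministic, fully automatic upscale followed by an edge-preserving bilateral filter. The function name, scale factor, and filter parameters are illustrative, not the authors' values.

```python
import cv2

def automatic_super_resolution(frame, scale=4):
    """Upscale a cropped CCTV region, then apply a bilateral filter.

    Every step is deterministic, so repeated runs on the same input yield
    bit-identical output -- the reproducibility property the authors need
    for evidential use.
    """
    h, w = frame.shape[:2]
    up = cv2.resize(frame, (w * scale, h * scale),
                    interpolation=cv2.INTER_CUBIC)
    # d is the pixel neighborhood diameter; sigmaColor/sigmaSpace control
    # how strongly edges are preserved while noise is smoothed away.
    return cv2.bilateralFilter(up, d=9, sigmaColor=50, sigmaSpace=50)

# Hypothetical usage on a cropped frame saved from a CCTV recording.
region = cv2.imread("cctv_crop.png")
cv2.imwrite("cctv_crop_sr.png", automatic_super_resolution(region))
```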
{"title":"Super-resolution processing of the partial pictorial image of the single pictorial image which eliminated artificiality","authors":"Yuichiro Yamada, Daisuke Sasagawa","doi":"10.1109/CCST.2012.6393582","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393582","url":null,"abstract":"Recently, the number of security cameras (CCTV) for crime prevention has increased rapidly in Japan for the purpose of establishing safe society. As a result, the number of opportunity of utilizing movies of CCTVs has increased in criminal investigation. Last year, we proposed the super-resolution technique using manual sub-pixel shifting process in the 45th ICCST. However, if manually processed images are used as evidence in a criminal trial, a doubt about artificiality in those images might come out. For the images to be treated more properly, a new utilizable super-resolution technique in criminal investigation is proposed in this paper. The biggest point of the proposed technique is that the target images are processed automatically without any artificiality. The new super-resolution technique utilizes a newly designed algorithm with Bilateral Filter. We have developed the new application software using the algorithm. We also compare images processed by the proposed technique with ones processed by the manual sub-pixel shifting process, in order to evaluate the effectiveness of the proposed technique. In conclusion, any person can get the same result by using this technique. Doubts will not arise if images processed by the proposal are used as evidential images in criminal trials.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114403583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CPA performance comparison based on Wavelet Transform
Aesun Park, Dong‐Guk Han, J. Ryoo
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393559
Correlation Power Analysis (CPA) is a very effective attack method for finding secret keys using the statistical features of power consumption signals from cryptosystems. However, the power consumption signal of an encryption device is strongly affected and distorted by noise arising from peripheral devices. When a side channel attack is carried out, this distorted signal, affected by noise and time inconsistency, is the major factor that reduces attack performance. A signal processing method based on the Wavelet Transform (WT) has been proposed to enhance attack performance. Selecting the decomposition level and the wavelet basis is very important because the performance of WT-based CPA depends on these two factors. In this paper, CPA performance, in terms of noise reduction and the transform domain, is compared and analyzed from the viewpoint of attack time and the minimum number of signals required to find the secret key. In addition, methods for selecting the decomposition level and the wavelet basis using the features of power consumption are proposed and validated through experiments.
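To make the two moving parts concrete, here is a minimal Python sketch of wavelet denoising followed by CPA, assuming Hamming-weight leakage of a first-round AES S-box output. The `wavelet` and `level` arguments are exactly the two selection choices the paper studies, shown with illustrative defaults; a placeholder identity S-box keeps the snippet self-contained (substitute the real AES S-box table for an actual attack).

```python
import numpy as np
import pywt

SBOX = list(range(256))  # placeholder; use the real AES S-box in practice

def hamming_weight(x):
    return bin(x).count("1")

def denoise(trace, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients, then reconstruct the trace."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(trace)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(trace)]

def cpa_best_guess(traces, plaintexts, byte=0):
    """Return the key-byte guess whose model best correlates with the traces."""
    T = np.array([denoise(t) for t in traces])          # (n_traces, n_samples)
    T = T - T.mean(axis=0)
    best_corr, best_key = 0.0, None
    for k in range(256):
        model = np.array([hamming_weight(SBOX[p[byte] ^ k])
                          for p in plaintexts], dtype=float)
        m = model - model.mean()
        # Pearson correlation of the model with every time sample; keep peak.
        corr = (m @ T) / (np.linalg.norm(m) * np.linalg.norm(T, axis=0) + 1e-12)
        peak = np.max(np.abs(corr))
        if peak > best_corr:
            best_corr, best_key = peak, k
    return best_key, best_corr
```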
{"title":"CPA performance comparison based on Wavelet Transform","authors":"Aesun Park, Dong‐Guk Han, J. Ryoo","doi":"10.1109/CCST.2012.6393559","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393559","url":null,"abstract":"Correlation Power Analysis (CPA) is a very effective attack method for finding secret keys using the statistical features of power consumption signals from cryptosystems. However, the power consumption signal of the encryption device is greatly affected or distorted by noise arising from peripheral devices. When a side channel attack is carried out, this distorted signal, which is affected by noise and time inconsistency, is the major factor that reduces the attack performance. A signal processing method based on the Wavelet Transform (WT) has been proposed to enhance the attack performance. Selecting the decomposition level and the wavelet basis is very important because the CPA performance based on the WT depends on these two factors. In this paper, the CPA performance, in terms of noise reduction and the transform domain, is compared and analyzed from the viewpoint of attack time and the minimum number of signals required to find the secret key. In addition, methods for selecting the decomposition level and the wavelet basis using the features of power consumption are proposed, and validated through experiments.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122881221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CCTV Operator Performance Benchmarking
S. Rankin, N. Cohen, K. MacLennan-Brown, K. Sage
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393580
A range of automated video analytics systems (VAS) is increasingly being used to improve efficiency and effectiveness within CCTV control rooms. Their role is to manage the large volumes of surveillance data that are constantly generated and to sift the data in real time to detect incidents or activities of specific interest. The CCTV Operator Performance Benchmarking project investigated, through commissioned operator trials, the performance of human operators on detection and tracking tasks and the factors that affect that performance. Human performance can then be compared against that obtained from automated systems. Through the development of publicly available guidance documents, CAST hopes to improve the benefits that VAS bring to the control room environment and to ensure they are deployed in an appropriate manner. The paper discusses the trials and explores the results obtained. It examines the factors that can have a positive or a detrimental effect on operators' performance, and the environments where the introduction of VAS can bring the greatest benefits.
{"title":"CCTV Operator Performance Benchmarking","authors":"S. Rankin, N. Cohen, K. MacLennan-Brown, K. Sage","doi":"10.1109/CCST.2012.6393580","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393580","url":null,"abstract":"A range of automated video analytics systems (VAS) is increasingly being used to improve efficiency and effectiveness within CCTV control rooms. Their role is to manage the large volumes of surveillance data that are constantly generated and to sift the data in real time to detect incidents or activities of specific interest. The CCTV Operator Performance Benchmarking project investigated, through commissioning operator trials, the performance of human operators on detection and tracking tasks and the factors that affect the performance of human operators. Human performance can then be compared against that obtained from automated systems. Through the development of publicly available guidance documents, CAST hopes to improve the benefits brought by VAS to the control room environment, and to ensure they are deployed in an appropriate manner. The paper will discuss the trials and explore the results obtained. It will discuss the factors that can have both a positive and detrimental effect on operators' performance and the appropriate environments where the introduction of VAS can bring great benefits.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115225136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Normalization and feature extraction on ear images
E. González, L. Álvarez, L. Mazorra
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393543
Ear image analysis is an emerging biometric application. We propose a method for normalizing ear images and extracting from them a set of measurable features (a feature vector) that can be used to identify their owner. Identification is based on comparing the feature vector of the input image against the feature vectors of all images in the database. The feature vector is based on the ear contours. One important goal of this paper is to identify the most significant areas in the ear contour for identification purposes. Another important contribution is the combination of active contour techniques with ovoid-model ear fitting (used to normalize ear features), yielding a highly accurate, invariant description of the internal and external ear contours. Ear geometry is characterized using a set of distances to external and internal contour points. This set of distances, along with six ovoid parameters, constitutes the feature vector of the image. To test the method, a new ear image database has been created. The proposed method identifies front-parallel views well, even when the distance from the individual to the camera or the camera lens varies.
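The identification step itself reduces to nearest-neighbour matching of feature vectors. The Python sketch below shows that step only; the vector layout (contour distances plus six ovoid parameters) follows the abstract, while the Euclidean metric and all names are assumptions for illustration.

```python
import numpy as np

def identify(query_vec, gallery):
    """gallery: dict mapping subject id -> feature vector of equal length."""
    best_id, best_d = None, np.inf
    for subject, vec in gallery.items():
        d = np.linalg.norm(query_vec - vec)   # Euclidean distance (assumed)
        if d < best_d:
            best_id, best_d = subject, d
    return best_id, best_d

# Toy usage: 30 contour distances + 6 ovoid parameters per ear (hypothetical).
rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.normal(size=36) for i in range(10)}
probe = gallery["subject_3"] + rng.normal(scale=0.01, size=36)
print(identify(probe, gallery))               # -> ('subject_3', small distance)
```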
{"title":"Normalization and feature extraction on ear images","authors":"E. González, L. Álvarez, L. Mazorra","doi":"10.1109/CCST.2012.6393543","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393543","url":null,"abstract":"Ear image analysis is an emerging biometrie application. A method for normalizing ear images and extracting from them a set of measurable features (feature vector) that can be used to identify its owner is proposed. The identification would be made based on the comparison between the feature vector of the input image and all feature vectors of the images in the database we work with. The feature vector is based on the ear contours. One important goal of this paper is to identify the most significant areas in the ear contour for human being identification purpose. Another important contribution of the paper is the combination of active contours techniques and ovoid model ear fitting (used to normalize ear features) and a high accurate invariant approach of internal and external ear contours. Ear geometry is characterized using a set of distances to external and internal contours points. This set of distances, along with six ovoid parameters is considered as the feature vector of the image. To test the method a new ear images database has been created. The proposed method identifies front-parallel views pretty good, even when varying the distance of the individual to the camera or the camera lens.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114980833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Communication among incident responders — A study
Brett C. Tjaden, Robert Floodeen
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393555
Responding to some future incident might require significant cooperation by multiple teams or organizations within an incident response community. To study the effectiveness of that cooperation, the Carnegie Mellon® Software Engineering Institute (SEI) conducted a study using a group of volunteer, autonomous incident response organizations. These organizations completed special SEI-designed tasks that required them to work together. The study identified three factors as likely to help or hinder the cooperation of incident responders: being prepared, being organized, and following incident response best practices. This technical note describes those factors and offers recommendations for implementing each one.
Video based system for railroad collision warning
J. A. Uribe, Luis Fonseca, J. Vargas
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393573
Autonomous systems can assist humans in the important task of safe driving. Such systems can warn people about possible risks, take actions to avoid accidents, or guide the vehicle without human supervision. In railway scenarios, a camera at the front of the train can help drivers identify obstacles or strange objects that can endanger the route. Image processing in these applications is not easy to perform: the changing conditions create scenes where the background is hard to model, lighting varies, and processing must be fast. This article describes a first approximation to a solution in which two complementary approaches are used to detect and track obstacles in videos captured from the train driver's perspective. The first strategy is a single-frame approach in which every video frame is analyzed using the Hough transform to detect the rails. Along every rail, a systematic search is performed to detect obstacles that could endanger the train's course. The second approach uses consecutive frames to detect the trajectories of moving objects: by analyzing sparse optical flow, candidate objects are tracked and their trajectories computed to determine their possible collision courses. To test the system, we used videos in which preselected fixed and moving obstacles were superimposed using the chroma key effect. The system has shown real-time performance in detecting and tracking objects. Future work includes testing the system in real scenarios and validating it under changing weather conditions.
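A minimal Python/OpenCV sketch of the two complementary stages follows: Hough-based rail detection on single frames and sparse Lucas-Kanade optical flow across consecutive frames. The edge and angle thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def detect_rails(frame):
    """Return steep line segments as rail candidates (single-frame stage)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    rails = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 60 < angle < 120:   # keep near-vertical segments as seen from the cab
            rails.append((x1, y1, x2, y2))
    return rails

def track_motion(prev_gray, gray):
    """Sparse optical flow: track corner features between consecutive frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    # Matched point pairs; displacement vectors give candidate trajectories.
    return pts[status == 1], nxt[status == 1]
```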
{"title":"Video based system for railroad collision warning","authors":"J. A. Uribe, Luis Fonseca, J. Vargas","doi":"10.1109/CCST.2012.6393573","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393573","url":null,"abstract":"Autonomous systems can assist humans in the important task of safe driving. Such systems can warn people about possible risks, take actions to avoid accidents or guide the vehicle without human supervision. In railway scenarios a camera in front of the train can aid drivers with the identification of obstacles or strange objects that can pose danger to the route. Image processing in these applications is not easy of performing. The changing conditions create scenes where background is hard to detect, lighting varies and process speed must be fast. This article describes a first approximation to the solution of the problem where two complementary approaches are followed for detecting and tracking obstacles on videos captured from a train driver perspective. The first strategy is a simple-frame-based approach where every video frame is analyzed using the Hough transform for detecting the rails. On every rail a systematic search is done detecting obstacles that can be dangerous for the train course. The second approach uses consecutive frames for detecting the trajectory of moving objects. Analyzing the sparse optical flow the candidate objects are tracked and their trajectories computed in order to determine their possible route to collision. For testing the system we have used videos where preselected fixed and moving obstacles have been superimposed using the Chroma key effect. The system had shown a real time performance in detecting and tracking the objects. Future work includes the test of the system on real scenarios and the validation over changing weather conditions.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130296651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
State space blow-up in the verification of secure smartcard interoperability
M. Talamo, M. Galinium, C. Schunck, F. Arcieri
Pub Date: 2012-12-31 | DOI: 10.1109/CCST.2012.6393546
Smartcards are used in a wide range of applications including electronic (e-) driving licenses, e-identity cards, e-payments, e-health cards, and digital signatures. Nevertheless, secure smartcard interoperability has remained a significant challenge. Currently, the secure operation of smartcards is certified (e.g., through the Common Criteria) for a specific, closed environment that does not include the presence of other smartcards and their corresponding applications. To enable secure smartcard interoperability, however, one must explicitly consider settings in which different smartcards interact with their corresponding applications, i.e., not in isolation. Consequently, the interoperability problem is addressed only insufficiently in security verification processes. In an ideal scenario, one should be able to certify that introducing a new type of smartcard into an environment in which several smartcards safely interoperate will have no detrimental side effects for the security and interoperability of the existing system or for the new smartcard and its associated applications. In this work, strong experimental evidence is presented demonstrating that such certification cannot be provided through common model checking approaches for security verification, due to state space blow-up. Furthermore, it is shown how the state space blow-up can be prevented by employing a verification protocol which, by taking the results of Common Criteria certification into account, avoids checking any transitions that occur after an illegal transition has been detected.
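A schematic Python sketch of the pruning idea closes the section: explore the interoperability state space breadth-first, but never expand successors of a state reached through an illegal transition. The state/transition model and function names are assumptions chosen for illustration, not the authors' protocol.

```python
from collections import deque

def explore(initial, successors, is_legal):
    """successors(state) -> iterable of (transition, next_state) pairs;
    is_legal(transition) -> bool, e.g. derived from Common Criteria
    certification results."""
    seen, frontier, violations = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        for transition, nxt in successors(state):
            if not is_legal(transition):
                violations.append((state, transition))
                continue          # prune: check nothing beyond an illegal step
            if nxt not in seen:   # expand only unvisited, legally reached states
                seen.add(nxt)
                frontier.append(nxt)
    return violations
```

By cutting every branch at its first illegal transition, the reachable portion of the combined state space stays bounded by the legal behavior already certified, which is what keeps the product of several smartcard models from blowing up.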
{"title":"State space blow-up in the verification of secure smartcard interoperability","authors":"M. Talamo, M. Galinium, C. Schunck, F. Arcieri","doi":"10.1109/CCST.2012.6393546","DOIUrl":"https://doi.org/10.1109/CCST.2012.6393546","url":null,"abstract":"Smartcards are used in a wide range of applications including electronic (e-) driving licenses, e-identity cards, e-payments, e-health cards, and digital signatures. Nevertheless secure smartcard interoperability has remained a significant challenge. Currently the secure operation of smartcards is certified (e.g. through the Common Criteria) for a specific and closed environment that does not comprise the presence of other smartcards and their corresponding applications. To enable secure smartcard interoperability one must, however, explicitly consider settings in which different smartcards interact with their corresponding applications, i.e. not in isolation. Consequently the interoperability problem is only insufficiently addressed in security verification processes. In an ideal scenario one should be able to certify that introducing a new type of smartcard into an environment in which several smartcards safely interoperate will have no detrimental side-effects for the security and interoperability of the existing system as well as for the new smartcard and its associated applications. In this work, strong experimental evidence is presented demonstrating that such certification cannot be provided through common model checking approaches for security verification due to state space blow-up. Furthermore it is shown how the state space blow-up can be prevented by employing a verification protocol which, by taking the results of the Common Criteria certification into account, avoids checking any transitions that occur after an illegal transition has been detected.","PeriodicalId":405531,"journal":{"name":"2012 IEEE International Carnahan Conference on Security Technology (ICCST)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126033072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}