Abnormal signal detection is a hot research topic in the electronic test field. The digital storage oscilloscope detects abnormal signals through signal acquisition, abnormal-signal identification, storage, and display. Although the signal-acquisition ability of existing oscilloscopes has been improved technically by increasing the sampling rate, the waveform capture rate, and other indicators, the identification, storage, and display of abnormal signals still show deficiencies, which results in low abnormal-signal detection efficiency and limited use of the detection function. The purpose of this paper is to comprehensively improve the abnormal-signal detection ability of the digital storage oscilloscope: we design an FPGA-based abnormal-signal detection system, study methods to improve abnormal-signal identification, storage, and display, realize real-time, accurate identification and storage of abnormal signals over long periods without manual monitoring, realize accurate offline positioning of abnormal signals in multiple display modes, and thereby greatly improve the abnormal-signal detection efficiency and practicality of the oscilloscope.
{"title":"A Study on Improving the Abnormal Signal Detection Ability of Digital Storage Oscilloscope","authors":"Jiang Jun, Ye Peng","doi":"10.1109/DASC.2013.70","DOIUrl":"https://doi.org/10.1109/DASC.2013.70","url":null,"abstract":"Abnormal signal detection is a hot research in electronic test field. The digital storage oscilloscope detects abnormal signals by signal acquisition, abnormal signal identification, storage and display. Though the existing oscilloscope signal acquisition ability is technically improved by increasing the sampling rate, the waveform capture rate and other indicators, the identification, storage and display of abnormal signal show some deficiencies, which results in low abnormal signal detection efficiency and less application of detection function. The purpose of this paper is to improve the digital storage oscilloscope abnormal signal detection ability totally, design the abnormal signal detection system based on FPGA, study the method to improve the abnormal signal identification, storage and display, realize real-time, accurate identification and storage of abnormal signal in long period of time without manual monitoring, realize accurate offline positioning for abnormal signal in multiple display modes, and greatly improve the abnormal signal detection efficiency and practicability of the oscilloscope.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127125750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
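The identification stage the abstract describes can be pictured in software as flagging waveform anomalies against a nominal shape. The following is a minimal sketch under invented assumptions (the threshold, nominal width, and tolerance are illustrative, not the paper's FPGA design): it flags pulses whose width deviates from the nominal width.

```python
# Software sketch of abnormal-signal identification: flag pulses whose width
# deviates from the nominal width by more than a tolerance. All names and
# thresholds here are illustrative assumptions, not the paper's design.

def pulse_widths(samples, threshold):
    """Return the widths (in sample counts) of pulses above `threshold`."""
    widths, run = [], 0
    for s in samples:
        if s > threshold:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    return widths

def abnormal_pulses(samples, threshold, nominal, tol):
    """Indices of pulses whose width deviates from `nominal` by more than `tol`."""
    return [i for i, w in enumerate(pulse_widths(samples, threshold))
            if abs(w - nominal) > tol]

# A clean train of 4-sample pulses with one runt (2-sample) pulse injected.
wave = ([0, 0, 1, 1, 1, 1, 0, 0] * 3) + [0, 1, 1, 0] + ([0, 0, 1, 1, 1, 1, 0, 0] * 2)
print(abnormal_pulses(wave, 0.5, nominal=4, tol=1))  # → [3], the runt pulse
```

In hardware, this comparison would run per acquisition segment so that anomalies are caught without manual monitoring; the offline positioning the paper mentions then amounts to mapping those indices back to acquisition timestamps.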
As part of public transportation, taxis play a very important role in passengers' travel from the airport to downtown. In order to explore an efficient taxi dispatch mechanism and reduce passenger waiting times, we develop an adaptive airport taxi dispatch system based on a principal component analysis wavelet neural network (PCA-WNN). A series of new online short-term forecasting techniques is used to capture the relationship between taxi supply and demand. We then propose an adaptive feedback-based taxi dispatch algorithm that responds effectively to the non-stationarity of taxi service in a changing environment. Using twenty weeks of real data from Beijing Capital International Airport, our experiments demonstrate that the algorithm predicts the taxi service data accurately and greatly improves the efficiency of taxi management.
{"title":"Adaptive Airport Taxi Dispatch Algorithm Based on PCA-WNN","authors":"Ke Zhang, Ke Zhang, S. Leng, Shuo Xu","doi":"10.1109/DASC.2013.86","DOIUrl":"https://doi.org/10.1109/DASC.2013.86","url":null,"abstract":"As a part of public transportation, taxi plays a very important role of passengers' travel from the airport to downtown. In order to exploring an efficient taxi dispatch mechanism and saving passenger waiting times, we develop an adaptive airport taxi dispatch system based on principal components analysis wavelet neural network (PCA-WNN). A series of new online short term time forecasting techniques are used to capture the relationship between taxi supply and demand. Then we proposed an adaptive feedback-based taxi dispatch algorithm for the effective response to the non-stationarity of taxi service under a changing environment. By using the real data of Beijing Capital International Airport within twenty weeks, our experiment demonstrated that this algorithm can predict accurately of the taxi service data and greatly improve the efficiency of taxi management.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
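A toy analogue of the forecasting pipeline can make the wavelet idea concrete. This sketch assumes nothing about the paper's actual PCA-WNN: a one-level Haar wavelet split separates trend from detail, and a least-squares line fitted to the trend gives a one-step-ahead demand forecast (the demand numbers are invented).

```python
# Toy stand-in for wavelet-based short-term demand forecasting: Haar split
# plus linear extrapolation. Not the paper's PCA-WNN; purely illustrative.

def haar_split(series):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series) - 1, 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series) - 1, 2)]
    return approx, detail

def forecast_next(series):
    """Fit a least-squares line to the Haar approximation and extrapolate."""
    approx, _ = haar_split(series)
    n = len(approx)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(approx) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, approx))
             / sum((x - mx) ** 2 for x in xs))
    return my + slope * (n - mx)  # extrapolate one approximation step ahead

demand = [10, 12, 14, 16, 18, 20, 22, 24]  # taxi pickups per interval (made up)
print(forecast_next(demand))  # → 27.0
```

In the paper's system, a trained wavelet neural network replaces the linear fit, and PCA first reduces the feature set fed to it; the feedback loop then adjusts dispatch using the forecast error.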
In LTE/LTE-A systems, a low frequency reuse factor is used to improve spectral efficiency. To mitigate inter-cell interference (ICI), inter-cell interference coordination (ICIC) is introduced, which divides the bandwidth into a 1-reuse sub-band for cell-center users and higher-reuse sub-bands for cell-edge users. ICIC has been considered a promising technique for alleviating the degradation caused by ICI and improving the throughput of cell-edge users. Coordinated multipoint transmission/reception (CoMP) has been proposed in 3GPP Release 11 to avoid ICI and enhance both system-average and cell-edge throughput. In this paper, we combine ICIC with a joint-reception receiver in the uplink CoMP system to further remove multiuser and inter-cell interference. According to the requirement for exchanging received-signal information among cooperating base stations (BSs), two detection methods are proposed for the uplink CoMP system. Simulation results illustrate the performance of the proposed methods compared with that obtained through the ICIC and CoMP techniques.
{"title":"Performance of LTE-A Uplink with Joint Reception and Inter-cell Interference Coordination","authors":"Yong Li, Zhangqin Huang","doi":"10.1109/DASC.2013.113","DOIUrl":"https://doi.org/10.1109/DASC.2013.113","url":null,"abstract":"In LTE/LTE-A system, low frequency reuse factor is used to improve the spectral efficiency. In order to mitigate the inter-cell interference (ICI), inter-cell interference coordination (ICIC) is introduced which divides the bandwidth into a 1-reuse sub-bands for cell center users and -reuse sub-bands for cell edge users. ICIC technique has been considered as a promising technology for alleviating the degradation caused by ICI and improving throughput of cell edge users. Coordinated multipoint transmission/reception (CoMP) technique has been proposed in 3GPP release 11 to avoid ICI and enhance both system average and cell edge throughput. In this paper, we combine the ICIC with joint reception receiver together in uplink CoMP system to futher remove the multiuser and inter-cell interference. According to the requirement for exchanging received signals' information among cooperation base stations (BSs), two detection methods are proposed for uplink CoMP system. Simulation results illustate the performance of the proposed methods compared to that obained through ICIC and CoMP techniques.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130555617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
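The bandwidth split that ICIC performs can be sketched in a few lines. This is a minimal illustration under assumed parameters (the band names, the 3-cell edge reuse pattern, and the distance threshold are all inventions for the sketch, not the paper's configuration): center users of every cell share one sub-band, while each cell's edge users get a sub-band orthogonal to that of neighbouring cells.

```python
# Minimal sketch of an ICIC bandwidth partition. Band names, the 3-cell
# reuse cluster, and the edge radius are illustrative assumptions.

CENTER_BAND = "B0"                # 1-reuse: identical in every cell
EDGE_BANDS = ["B1", "B2", "B3"]   # orthogonal across a 3-cell cluster

def assign_subband(cell_id, dist_to_bs, edge_radius):
    """Center users reuse B0; edge users use their cell's dedicated band."""
    if dist_to_bs < edge_radius:
        return CENTER_BAND
    return EDGE_BANDS[cell_id % len(EDGE_BANDS)]

# Edge users of neighbouring cells land on different bands, so their strongest
# mutual interference is avoided -- which is exactly the ICIC goal.
print(assign_subband(0, 900, edge_radius=700),   # edge user, cell 0 → B1
      assign_subband(1, 950, edge_radius=700),   # edge user, cell 1 → B2
      assign_subband(2, 100, edge_radius=700))   # center user → B0
```

The paper's contribution layers joint reception on top of such a partition: cooperating BSs additionally exchange received-signal information so the residual interference on each sub-band can be cancelled at detection time.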
Modern science is increasingly data-driven and collaborative in nature. Compared with ordinary data processing, big data mixed with a great deal of missing data must be processed rapidly. Rough set theory was developed to deal with such large data. In this paper, we propose an improved algorithm for dynamic cognitive extraction that handles adaptive fuzzy attribute values and fuzzy attribute reduction, aimed at uncertain data (such as data with diverse or missing characteristics) encountered in big data, using fuzzy rough set theory. For information decision, according to the real-time input information, the algorithm analyzes in depth the dynamic information entropy of the data itself and chooses the direction of largest predicted information entropy for the cognitive rules, achieving rapid recognition of the data and quick decisions on complete information. Because the algorithm predicts the best direction of information entropy, the recognition effect is also improved. At the end of the paper, we analyze the superiority of the dynamic cognitive algorithm using breast cancer data as the foundation.
{"title":"An Improved Algorithm for Dynamic Cognitive Extraction Based on Fuzzy Rough Set","authors":"Haitao Jia, Mei Xie, Qian Tang, Wei Zhang","doi":"10.1109/DASC.2013.106","DOIUrl":"https://doi.org/10.1109/DASC.2013.106","url":null,"abstract":"Modern science is increasingly data-driven and collaborative in nature. Comparing to ordinary data processing, big data processing that is mixed with great missing date must be processed rapidly. The Rough Set was generated to deal with the large data.In this paper, we proposed animproved algorithm for dynamic Cognitive extractionwhich deals with adaptive fuzzy attribute values and the fuzzy attribute reduction aiming at uncertainty datasuch asdata with diversity or missing character faced by the big data with using Fuzzy Rough Set Theory.At the aspect of information decision, according to the Real-time input information, it deep analyzes the dynamic information entropy of the data itself and chooses the biggest prediction information entropy direction for the cognitive rules to achieve rapid recognitive of data, complete information of quick decision.Because the algorithm is adopted to predict the best direction of information entropy, so the recognitive effect is also improved. At the end of the paper, we have analyzed superiority of the dynamic cognitive algorithm by using breast cancer data as the foundation.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"24 23","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113955466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
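The entropy-driven rule selection the abstract describes can be illustrated with plain Shannon entropy standing in for the paper's fuzzy rough-set measures: among candidate attributes, pick the one whose split yields the largest information gain. The tiny dataset below is invented.

```python
# Hedged sketch of entropy-guided attribute selection: Shannon information
# gain stands in for the paper's fuzzy rough-set entropy; the data is made up.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr):
    """Entropy reduction achieved by splitting `rows` on `attr`."""
    labels = [y for _, y in rows]
    gain = entropy(labels)
    for v in {x[attr] for x, _ in rows}:
        subset = [y for x, y in rows if x[attr] == v]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

rows = [({"A": 0, "B": 0}, 0), ({"A": 0, "B": 1}, 0),
        ({"A": 1, "B": 0}, 1), ({"A": 1, "B": 1}, 1)]
best = max(["A", "B"], key=lambda a: info_gain(rows, a))
print(best)  # → "A": it perfectly separates the labels, B carries no information
```

Choosing the highest-gain direction first is what lets a dynamic extraction procedure converge on a decision quickly, since each step removes as much uncertainty as possible.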
Information such as system and application logs, as well as the output of deployed security measures (e.g., IDS alerts, firewall logs, scanning reports), is important for administrators or security operators to be aware of the running state of the system as early as possible and take action if necessary. In this context, high-performance security analytics is proposed to address the challenge of rapidly gathering, managing, processing, and analyzing the large amount of real-time information generated by a large-scale enterprise IT infrastructure while it is operating. As an example of a next-generation Security Information and Event Management (SIEM) platform, the Security Analytics Lab (SAL) has been designed and implemented based on the newly emerged in-memory data management technique, which makes it possible to efficiently organize and access different types of event information through a consistent central storage and interface. Correlating the information from different sources and identifying the meaningful information is another challenging task, one that matters greatly for quickly judging the current situation and making decisions. In this paper, the multi-core processing technique is introduced into the SAL platform. Various correlation algorithms, e.g., k-means-based algorithms and the ROCK and QROCK clustering algorithms, have been implemented and integrated in the multi-core-supported SAL architecture. Practical experiments are conducted and analyzed to prove that the performance of analytics can be significantly improved by applying the multi-core processing technique in SAL.
{"title":"Multi-core Supported High Performance Security Analytics","authors":"Feng Cheng, Amir Azodi, David Jaeger, C. Meinel","doi":"10.1109/DASC.2013.136","DOIUrl":"https://doi.org/10.1109/DASC.2013.136","url":null,"abstract":"Such information as system and application logs as well as the output from the deployed security measures, e.g., IDS alerts, firewall logs, scanning reports, etc., is important for the administrators or security operators to be aware at first time of the running state of the system and take efforts if necessary. In this context, high performance security analytics is proposed to address the challenges to rapidly gather, manage, process, and analyze the large amount of real-time information generated from the large scale of enterprise IT-Infrastructure while it is being operated. As an example of next generation Security Information and Event Management (SIEM) platform, Security Analytics Lab (SAL) has been designed and implemented based on the newly emerged In-Memory data management technique, which makes it possible to efficiently organize and access different types of event information through a consistent central storage and interface. To correlate the information from different sources and identify the meaningful information is another challenging task, which makes great sense for quickly judging the current situation and making the decision. In this paper, the multi-core processing technique is introduced in the SAL platform. Various correlation algorithms, e.g., k-means based algorithms, ROCK and QROCK clustering algorithms, have been implemented and integrated in the multi-core supported SAL architecture. Practical experiments are conducted and analyzed to proof that the performance of analytics can be significantly improved by applying multi-core processing technique in SAL.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132983880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
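The parallel-correlation idea can be sketched as a k-means pass whose assignment step is fanned out over worker threads. This is an illustration only: threads stand in here for SAL's multi-core workers, and the one-dimensional event "features" (say, timestamps of log bursts) are invented.

```python
# Sketch of parallelized event clustering: the per-point nearest-center
# computation of k-means is independent, so it is mapped over a worker pool.
# Threads stand in for multi-core workers; the event features are invented.
from concurrent.futures import ThreadPoolExecutor

def nearest(center_list, p):
    """Index of the center closest to point p."""
    return min(range(len(center_list)), key=lambda i: abs(center_list[i] - p))

def kmeans_1d(points, centers, rounds=10, workers=4):
    for _ in range(rounds):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # parallel assignment step: each point is handled independently
            assign = list(pool.map(lambda p: nearest(centers, p), points))
        # sequential update step: recompute each center as its cluster mean
        centers = [
            sum(p for p, a in zip(points, assign) if a == i) /
            max(1, sum(1 for a in assign if a == i))
            for i in range(len(centers))
        ]
    return centers

events = [1, 2, 3, 50, 51, 52]                 # two obvious bursts of log activity
print(kmeans_1d(events, centers=[1.0, 50.0]))  # → [2.0, 51.0]
```

For CPU-bound distance computations a process pool would be the faster choice in CPython; the structure (parallel assignment, sequential reduce) is the same, which is what makes correlation algorithms like k-means, ROCK, and QROCK amenable to multi-core speedup.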
Data centers host diverse applications with stringent QoS requirements. A key issue is eliminating network congestion, which severely degrades application performance. One effective solution is to balance the traffic load over the data center's regular topologies. Many previous strategies have focused on optimized flow routing, but these solutions can hardly achieve ideal load balance while guaranteeing the QoS of different traffic flows, due to practical limitations. In this paper, we discuss packet-level routing and analyze its merit for fine-grained load balance in data centers. Though packet-level routing interacts poorly with TCP in traditional network settings, we show that it can be adapted to the data center environment. Motivated by the work of Dixit [4] [5], we assert that packet-level routing is the right choice for data centers. Our simulation results demonstrate that packet-level routing better fulfills data center requirements.
{"title":"Analyzing Packet-Level Routing in Data Centers","authors":"Ruoyan Liu, Huaxi Gu, Yawen Chen, Haibo Zhang","doi":"10.1109/DASC.2013.141","DOIUrl":"https://doi.org/10.1109/DASC.2013.141","url":null,"abstract":"Data centers host diverse applications with stringent QoS requirements. The key issue is to eliminate network congestions which severely degrade application performance. One effective solution is to balance the traffic load in the datacenter regular topologies. Many previous strategies focused on optimized flow routing, and these solutions can hardly achieve ideal load balance while guaranteeing QoS of different traffic flows due to the limitations in practical. In this paper, we discuss packet-level routing and analyze its merit for fine-grained load balance in data centers. Though packet-level routing interacts poorly with TCP in traditional network settings, we prove that it can be adapted to datacenter environment. Motived by the work done by Dixit [4] [5], we assert that packet-level routing is the right choice for data centers. Our simulation results demonstrate that packet-level routing better fulfills datacenter requirements.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133386018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
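Why per-packet spraying balances better than per-flow placement can be shown in a few lines. In this sketch (the two-path topology, the flow IDs, and the flow sizes are invented), per-flow placement pins each whole flow to one path by its ID, while packet-level routing sprays individual packets round-robin.

```python
# Invented two-path example contrasting flow-level and packet-level load
# balance: three flows whose IDs all map to the same path under per-flow
# placement, versus round-robin packet spraying.

def per_flow_load(flows, paths=2):
    """Pin each whole flow to the path given by its ID (ECMP-style placement)."""
    load = [0] * paths
    for flow_id, pkts in flows:
        load[flow_id % paths] += pkts
    return load

def per_packet_load(flows, paths=2):
    """Spray individual packets round-robin across all paths."""
    load = [0] * paths
    rr = 0
    for _, pkts in flows:
        for _ in range(pkts):
            load[rr] += 1
            rr = (rr + 1) % paths
    return load

flows = [(0, 100), (2, 100), (4, 1)]  # all three IDs map to path 0
print(per_flow_load(flows), per_packet_load(flows))  # → [201, 0] [101, 100]
```

The flow-level scheme leaves one path idle while the other carries everything; packet spraying is near-perfectly balanced. The cost, as the abstract notes, is packet reordering, which is why the TCP interaction in the data center setting is the crux of the analysis.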
In order to guarantee the successful execution of temporal-aware service processes in cloud computing, one important requirement is to effectively and efficiently select appropriate services for the service processes. Existing methods do not consider the dynamic prices of candidate services, or lack efficient and practical selection ability when facing complex, large-scale sets of candidate services. In this paper, we propose a new approach to selecting services with dynamic prices for temporal-aware service processes. First, the initial execution paths of the service processes are obtained by a local optimization policy. Then, we judge whether temporal violations occur on these execution paths. If temporal violations occur, the violation-correction problem is automatically transformed into a nonlinear programming model that can be solved efficiently. Finally, the optimal execution paths for the service processes are obtained. The advantages of our approach are validated by a practical example.
{"title":"An Approach to Selecting Services with Dynamic Prices for Temporal-Aware Service Processes","authors":"Yanhua Du, Hong Li","doi":"10.1109/DASC.2013.47","DOIUrl":"https://doi.org/10.1109/DASC.2013.47","url":null,"abstract":"In order to guarantee the successful execution of temporal-aware service processes in cloud computing, one important requirement is to effectively and efficiently select the appropriate services for service processes. The existing methods don't consider the dynamic prices of candidate services, or lack the efficient and practical selecting ability when encountering complex and large scale candidate services. In this paper, we propose a new approach to selecting services for temporal-aware service processes with dynamic prices. First of all, the initial execution paths of service processes are obtained by local optimization policy. Then, we judge whether temporal violations occur on these execution paths. If there are some temporal violations, the problem of violation correction is automatically transformed to nonlinear programming model which can be solved efficiently. Finally, the optimal execution paths for service processes are obtained. The advantages of our approach are validated by a practical example.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115697356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
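The two-stage scheme above (locally optimal selection, then violation correction) can be sketched with a greedy correction step standing in for the paper's nonlinear programming model, which is not reproduced here. The task options below are invented (duration, price) pairs: start from the cheapest service per task, then, while the deadline is violated, swap in the upgrade with the best time-saved-per-extra-cost ratio.

```python
# Greedy stand-in for the violation-correction step: cheapest-first selection,
# then ratio-guided upgrades until the deadline holds. Options are invented.

def select_services(candidates, deadline):
    """candidates: per task, a list of (duration, price) service options."""
    # Stage 1: local optimization -- cheapest option for every task.
    choice = [min(opts, key=lambda o: o[1]) for opts in candidates]
    # Stage 2: correct temporal violations by cost-efficient upgrades.
    while sum(d for d, _ in choice) > deadline:
        best = None
        for i, opts in enumerate(candidates):
            for dur, price in opts:
                saved, extra = choice[i][0] - dur, price - choice[i][1]
                if saved > 0 and (best is None or extra / saved < best[0]):
                    best = (extra / saved, i, (dur, price))
        if best is None:
            raise ValueError("no feasible execution path")
        _, i, option = best
        choice[i] = option
    return choice

tasks = [[(5, 1), (3, 4)],   # task 1: slow-and-cheap vs fast-and-pricey
         [(6, 2), (2, 5)]]   # task 2: likewise
print(select_services(tasks, deadline=8))  # → [(5, 1), (2, 5)]
```

Here the cheapest path takes 11 time units against a deadline of 8, and upgrading task 2 (4 units saved for 3 extra cost) beats upgrading task 1 (2 for 3), so only task 2 is swapped. A nonlinear programming formulation, as in the paper, would find the globally cheapest feasible combination rather than this greedy approximation.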
At present, the use of dense targets is one of the important means of penetration. Under many observation conditions, the echoes from dense targets are mixed with many aliased signals, and conventional radar signal processing algorithms do not take the aliased signals into account; it is therefore difficult for conventional algorithms to recognize multiple targets. In this paper, an improved ESPRIT algorithm is proposed that can recognize multiple targets from the aliased echoes and greatly reduce the computational complexity without degrading accuracy; in particular, it obtains better estimates in low-SNR environments. The proposed algorithm first quickly estimates the scattering-center parameters of the target echoes; then, based on this estimation, the aliased targets are recognized. Simulations also verify that the improved ESPRIT algorithm has better identification and recognition capability for aliased targets under low-SNR conditions. Moreover, because of the reduced computational complexity, the proposed algorithm is faster than conventional methods, especially in the case of multiple aliased scattering centers.
{"title":"Based on Improved ESPRIT Algorithm Radar Multi-target Recognition","authors":"Haitao Jia, Jian Li, Taoliu Yang, Wei Zhang","doi":"10.1109/DASC.2013.99","DOIUrl":"https://doi.org/10.1109/DASC.2013.99","url":null,"abstract":"At present, application of dense targets is one of the important means of penetration. In many observation conditions, the echoes from the dense targets mixed with many aliasing signals, and conventional radar signal processing algorithms do not take the aliasing signals into account. Therefore it is difficult for conventional algorithms to recognize multi-targets. In this paper, an improved ESPRIT algorithm is proposed which can recognize the multi-targets from the aliasing echoes and greatly reduce the computational complexity without changing the algorithm accuracy, especially can obtain a better estimation in the case of low SNR environment. The proposed algorithm can firstly quickly realize the estimate of scattering center parameters of target echoes, and then based on the estimation, the aliasing targets can be recognized. The Simulation also verifies the improved ESPRIT algorithm has a better identification and recognition capability of aliasing targets in low SNR condition. Moreover because of reduction of the computational complexity, the performance of proposed algorithm is faster than conventional methods, especially in the case of multiple aliasing scattering centers.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116788571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
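The principle ESPRIT exploits is rotational invariance: shifting a complex exponential by one sample multiplies it by e^{jω}. For a single noiseless cisoid the subspace machinery collapses to one phase ratio, which the toy sketch below exploits; it is an illustration of the principle only, not the paper's multi-target algorithm (that requires an eigendecomposition of the signal subspace).

```python
# Toy illustration of ESPRIT's rotational-invariance principle for one tone:
# x[n+1] = e^{j*omega} * x[n], so a least-squares phase ratio recovers omega.
import cmath

def esprit_single_tone(x):
    """Estimate omega (rad/sample) of a noiseless x[n] = A * exp(j*omega*n)."""
    # Least-squares fit of x[n+1] ≈ e^{j*omega} x[n] over all n.
    num = sum(x[n + 1] * x[n].conjugate() for n in range(len(x) - 1))
    return cmath.phase(num)

omega = 0.7
signal = [cmath.exp(1j * omega * n) for n in range(64)]
print(round(esprit_single_tone(signal), 6))  # → 0.7
```

With multiple scattering centers the same shift relation holds on the whole signal subspace, and the frequencies fall out as eigenvalues of the rotation operator; the paper's contribution lies in doing that step cheaply and robustly when the echoes are aliased and the SNR is low.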
Network coding in space, a new direction also known as space information flow, has been shown to have potential advantages over routing in space when certain geometric conditions are satisfied. The cost advantage is adopted to measure the performance of network coding in space. The existing literature proved that, among the regular (n + 1) models, only in the regular (5 + 1) model is network coding in space strictly superior to routing for single-source multicast. Focusing on the irregular (5 + 1) model, this paper uses geometry to quantitatively study the constructions of network coding and optimal routing when a sink node moves without limits in space. Furthermore, the upper bound of the cost advantage is derived, as is the region in which network coding is strictly superior to routing. Some properties of network coding in space are also presented.
{"title":"Cost Advantage of Network Coding in Space for Irregular (5 + 1) Model","authors":"Ting Wen, Xiaoxi Zhang, Xin Huang, Jiaqing Huang","doi":"10.1109/DASC.2013.140","DOIUrl":"https://doi.org/10.1109/DASC.2013.140","url":null,"abstract":"Network coding in space, a new direction also named space information flow, is verified to have potential advantages over routing in space if the geometric conditions are satisfied. Cost advantage is adopted to measure the performance for network coding in space. Present literatures proved that only in regular (5 + 1) model, network coding in space is strictly superior to routing in terms of single-source multicast, comparing with other regular (n + 1) models. Focusing on irregular (5 + 1) model, this paper uses geometry to quantitatively study the constructions of network coding and optimal routing when a sink node moves without limits in space. Furthermore, the upperbound of cost advantage is figured out as well as the region where network coding is strictly superior to routing. Some properties of network coding in space are also presented.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123557452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data requires exceptional technologies to efficiently process large quantities of data within tolerable elapsed times, covering capture, curation, storage, search, sharing, transfer, analysis, and visualization. The concept, features, importance of construction, architecture, run mode, and key technologies of big data are analyzed in this paper. Information sharing and data security under big data construction are studied. Finally, four measures for building big data are put forward, which can support good decision-making in big data construction.
{"title":"Research on Big Data Architecture, Key Technologies and Its Measures","authors":"Xiaoquan Li, Fujiang Zhang, Yongliang Wang","doi":"10.1109/DASC.2013.28","DOIUrl":"https://doi.org/10.1109/DASC.2013.28","url":null,"abstract":"Big data require exceptional technologies to efficiently process large quantities of data within tolerable elapsed times, such as capture, curation, storage, search, sharing, transfer, analysis and visualization. Concept, features, construction importance, architecture, run mode, and its key technologies of big data are analyzed in this paper. Information sharing and data security under big data constructin are studied, at last, four measures for building big data are putforward, which can provide good decision-making for big data construction.","PeriodicalId":179557,"journal":{"name":"2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128250833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}