
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing: Latest Publications

Generation of Mixed Broadside and Skewed-Load Diagnostic Test Sets for Transition Faults
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.15
I. Pomeranz
This paper describes a diagnostic test generation procedure for transition faults that produces mixed test sets consisting of broadside and skewed-load tests. A mix of broadside and skewed-load tests yields improved diagnostic resolution compared with a single test type. The procedure starts from a mixed test set generated for fault detection. It uses two procedures to derive, from existing tests, new tests that are useful for diagnosis. Both procedures allow the type of a test to be modified (from broadside to skewed-load and from skewed-load to broadside). The first procedure is fault independent; the second targets specific fault pairs. Experimental results show that diagnostic test generation changes the mix of broadside and skewed-load tests in the test set compared with a fault detection test set.
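Two faults are distinguished by a test set when some test detects one but not the other, which is what "diagnostic resolution" measures. A minimal sketch of that metric over a hypothetical fault dictionary (the fault and test names are invented for illustration):

```python
from itertools import combinations

def distinguished_pairs(dictionary):
    """Count fault pairs separated by at least one test.

    `dictionary` maps fault name -> set of tests detecting it; two
    faults are indistinguishable iff exactly the same tests detect them.
    """
    faults = sorted(dictionary)
    return sum(1 for f, g in combinations(faults, 2)
               if dictionary[f] != dictionary[g])

# Hypothetical dictionary: b* are broadside tests, s* skewed-load tests.
dictionary = {
    "f1": {"b1", "s1"},
    "f2": {"b1"},        # s1 separates f2 from f1
    "f3": {"b1", "s1"},  # indistinguishable from f1
}
print(distinguished_pairs(dictionary))  # -> 2 (of 3 possible pairs)
```

Adding a test of the other type (here s1) is exactly what can split otherwise indistinguishable pairs, which is the intuition behind mixing broadside and skewed-load tests.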
Citations: 7
Task Mapping and Partition Allocation for Mixed-Criticality Real-Time Systems
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.42
D. Tamas-Selicean, P. Pop
In this paper we address the mapping of mixed-criticality hard real-time applications onto distributed embedded architectures. We assume that the architecture provides both spatial and temporal partitioning, thus enforcing sufficient separation between applications. With temporal partitioning, each application runs in a separate partition, and each partition is allocated several time slots on the processors where the application is mapped. The sequence of time slots for all the applications on a processor is grouped within a Major Frame, which is repeated periodically. We assume that the applications are scheduled using static cyclic scheduling. We are interested in determining the task mapping to processors, and the sequence and size of the time slots within the Major Frame on each processor, such that the applications are schedulable. We propose a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.
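The Tabu Search idea behind the approach can be sketched as follows; this is a generic illustration of the metaheuristic, not the authors' implementation, and the load-balancing cost function is an invented stand-in for their schedulability objective:

```python
import random

def tabu_search_mapping(tasks, procs, cost, iters=200, tenure=5, seed=0):
    """Tabu Search over task-to-processor mappings (illustrative only).

    Starts from a random mapping; each move reassigns one task to
    another processor.  A recently moved task is tabu for `tenure`
    iterations, unless moving it beats the best mapping found so far
    (aspiration criterion).
    """
    rng = random.Random(seed)
    mapping = {t: rng.choice(procs) for t in tasks}
    best, best_cost = dict(mapping), cost(mapping)
    tabu = {}  # task -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for t in tasks:
            for p in procs:
                if p == mapping[t]:
                    continue
                trial = dict(mapping)
                trial[t] = p
                c = cost(trial)
                if tabu.get(t, -1) < it or c < best_cost:  # aspiration
                    candidates.append((c, t, p))
        if not candidates:
            continue
        c, t, p = min(candidates)      # best admissible move (may worsen)
        mapping[t] = p
        tabu[t] = it + tenure
        if c < best_cost:
            best, best_cost = dict(mapping), c
    return best, best_cost

# Toy cost: balance unit-time tasks across two processors.
tasks = [f"t{i}" for i in range(6)]
procs = ["P0", "P1"]
load = lambda m: max(sum(1 for t in m if m[t] == p) for p in procs)
best, c = tabu_search_mapping(tasks, procs, load)
print(c)  # -> 3 (three tasks per processor)
```

Accepting the best admissible move even when it worsens the current solution is what lets Tabu Search escape local optima that a greedy search would get stuck in.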
Citations: 17
Correcting DFT Codes with Modified Berlekamp-Massey Algorithm and Syndrome Extension
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.39
G. Redinbo
Real-number block codes derived from the discrete Fourier transform (DFT) are corrected by coupling a substantially modified Berlekamp-Massey algorithm with a syndrome extension process. Enhanced extension recursions based on Kalman syndrome extensions are examined.
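For context, a DFT code embeds redundancy by forcing part of a real vector's spectrum to zero; the "syndromes" are simply the DFT values at those positions, which become nonzero when a component is corrupted. A minimal sketch of that setup (the code length, parity positions, and spectrum values are invented; the modified Berlekamp-Massey correction step itself is not shown):

```python
import cmath

def dft(x):
    """Naive DFT, sufficient for a small illustration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

parity = [3, 4, 5]  # spectral positions forced to zero by the encoder
# Conjugate-symmetric spectrum (length 8) so the codeword is real-valued.
spectrum = [1.0, 2.0, -1.0, 0.0, 0.0, 0.0, -1.0, 2.0]
code = idft(spectrum)

clean = [dft(code)[k] for k in parity]
assert all(abs(s) < 1e-9 for s in clean)   # all syndromes vanish

code[2] += 0.5                             # corrupt one real component
syndromes = [dft(code)[k] for k in parity]
print(max(abs(s) for s in syndromes))      # ~0.5: the error is exposed
```

A decoder then works backward from these nonzero syndromes to locate and cancel the error, which is where the (modified) Berlekamp-Massey recursion and syndrome extension come in.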
Citations: 1
A Framework for Systematic Testing of Multi-threaded Applications
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.48
Mihai Florian
We present a framework that exhaustively explores the scheduling nondeterminism of multi-threaded applications and checks for concurrency errors. We use a flexible design that allows us to integrate multiple algorithms aimed at reducing the number of interleavings that have to be tested.
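The core of such systematic testing is enumerating every schedule that respects each thread's program order and running the property check under each one. A tiny self-contained model, with two threads performing an unsynchronized read-increment-write (illustrative only, not the paper's framework):

```python
from itertools import permutations

# Two threads each do: local = read(shared); write(local + 1).
# Exhaustively exploring every interleaving exposes the lost update.
STEPS = [(0, "read"), (0, "write"), (1, "read"), (1, "write")]

def run(schedule):
    shared, local = 0, {}
    for tid, op in schedule:
        if op == "read":
            local[tid] = shared
        else:
            shared = local[tid] + 1
    return shared

def interleavings():
    """All orderings of STEPS that preserve each thread's program order."""
    for order in permutations(STEPS):
        if ([s for s in order if s[0] == 0] == STEPS[:2] and
                [s for s in order if s[0] == 1] == STEPS[2:]):
            yield order

results = {run(s) for s in interleavings()}
print(sorted(results))  # -> [1, 2]; any schedule yielding 1 lost an update
```

Brute-force enumeration like this explodes combinatorially, which is exactly why the framework's pluggable interleaving-reduction algorithms (e.g., partial-order-style pruning) matter.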
Citations: 1
Exploiting Total Order Multicast in Weakly Consistent Transactional Caches
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.21
P. Ruivo, Maria Couceiro, P. Romano, L. Rodrigues
Nowadays, distributed in-memory caches are increasingly used to improve the performance of applications that require frequent access to large amounts of data. In order to maximize performance and scalability, these platforms typically rely on weakly consistent partial replication mechanisms. These schemes partition the data across the nodes and ensure a predefined (and typically very small) replication degree, thus maximizing the global memory capacity of the platform and ensuring that the cost of replica consistency remains constant as the scale of the platform grows. Moreover, even though several of these platforms provide transactional support, they typically sacrifice consistency, ensuring guarantees that are weaker than classic 1-copy serializability but that allow for more efficient implementations. This paper proposes and evaluates two partial replication techniques, providing different (weak) consistency guarantees, but having in common the reliance on total order multicast primitives to serialize transactions without incurring distributed deadlocks, a main source of inefficiency of classical two-phase commit (2PC) based replication mechanisms. We integrate the proposed replication schemes into Infinispan, a prominent open-source distributed in-memory cache, which represents the reference clustering solution for the well-known JBoss AS platform. Our performance evaluation highlights speed-ups of up to 40x when using the proposed algorithms with respect to the native Infinispan replication mechanism, which relies on classic 2PC-based replication.
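To illustrate why total order multicast removes the deadlock problem: once a single global sequence number is attached to every transaction, all replicas apply them in the same order without acquiring distributed locks. A minimal sequencer-based sketch (invented class names; real systems derive the order from a group communication protocol rather than one sequencer object):

```python
import itertools

class Sequencer:
    """Trivial total-order multicast: one sequencer stamps each message
    with a global sequence number; every replica applies messages in
    stamp order, so all replicas agree without distributed locking."""
    def __init__(self):
        self._next = itertools.count()

    def stamp(self, msg):
        return (next(self._next), msg)

class Replica:
    def __init__(self):
        self.store, self._pending, self._applied = {}, {}, 0

    def deliver(self, stamped):
        seq, (key, val) = stamped
        self._pending[seq] = (key, val)
        # Apply strictly in sequence order, buffering out-of-order arrivals.
        while self._applied in self._pending:
            k, v = self._pending.pop(self._applied)
            self.store[k] = v
            self._applied += 1

seq = Sequencer()
msgs = [seq.stamp(("x", i)) for i in range(3)]

a, b = Replica(), Replica()
for m in msgs:            # replica a sees the network order 0, 1, 2
    a.deliver(m)
for m in reversed(msgs):  # replica b sees it reversed
    b.deliver(m)
print(a.store == b.store)  # -> True: both converge to {'x': 2}
```

Because conflicting transactions are ordered identically everywhere, no replica ever waits on a lock held by another, which is precisely the cyclic-wait condition that causes distributed deadlocks under 2PC.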
Citations: 19
Test Generation and Computational Complexity
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.40
J. Sziray
The paper is concerned with analyzing and comparing two exact algorithms from the viewpoint of computational complexity: composite justification and the D-algorithm. Both serve for calculating fault-detection tests for digital circuits. As a result, it is pointed out that composite justification requires significantly fewer computational steps than the D-algorithm. From this fact it has been conjectured that no other algorithm in this field requires fewer computational steps. If the claim holds, then it follows directly that the test-generation problem requires exponential time, as do all the other NP-complete problems in the field of computation theory.
Citations: 2
Revisiting Fault-Injection Experiment-Platform Architectures
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.46
Horst Schirmeier, Martin Hoffmann, R. Kapitza, D. Lohmann, O. Spinczyk
Many years of research on dependable, fault-tolerant software systems yielded a myriad of tool implementations for vulnerability analysis and experimental validation of resilience measures. Trace recording and fault injection are among the core functionalities these tools provide for hardware debuggers or system simulators, partially including some means to automate larger experiment campaigns. We argue that current fault-injection tools are too highly specialized for specific hardware devices or simulators, and are developed in poorly modularized implementations impeding evolution and maintenance. In this article, we present a novel design approach for a fault-injection infrastructure that allows experimenting researchers to switch simulator or hardware back ends with little effort, fosters experiment code reuse, and retains a high level of maintainability.
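The design goal of switchable back ends can be illustrated with a small interface: the campaign logic talks only to an abstract back end, so a hardware debugger could replace the simulator without touching the experiment code. Everything below (class names, the dict-based "simulator", the single-bit-flip fault model) is an invented sketch, not the authors' infrastructure:

```python
import abc
import random

class BackEnd(abc.ABC):
    """Minimal back-end interface: the campaign below is written against
    this API only, so a different simulator or a hardware debugger can
    be swapped in without changing the experiment code."""
    @abc.abstractmethod
    def read(self, addr): ...
    @abc.abstractmethod
    def write(self, addr, value): ...

class DictSimulator(BackEnd):
    """Stand-in 'simulator': memory is a dict of byte cells."""
    def __init__(self, image):
        self.mem = dict(image)
    def read(self, addr):
        return self.mem[addr]
    def write(self, addr, value):
        self.mem[addr] = value & 0xFF

def flip_bit(backend, addr, bit):
    """The only fault model here: a single bit flip in one memory cell."""
    backend.write(addr, backend.read(addr) ^ (1 << bit))

def campaign(backend, addrs, seed=0):
    """Inject one random bit flip per address and record the outcome."""
    rng = random.Random(seed)
    outcomes = []
    for addr in addrs:
        before = backend.read(addr)
        flip_bit(backend, addr, rng.randrange(8))
        outcomes.append((addr, before, backend.read(addr)))
    return outcomes

sim = DictSimulator({0: 0x00, 1: 0xFF})
for addr, before, after in campaign(sim, [0, 1]):
    print(f"addr {addr}: {before:#04x} -> {after:#04x}")
```

Keeping the fault model and campaign driver independent of any concrete back end is the modularity property the article argues current tools lack.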
Citations: 2
Trend Analyses of Accidents and Dependability Improvement in Financial Information Systems
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.35
Koichi Bando, Kenji Tanaka
In this paper, we analyze the trends of significant accidents in financial information systems from the user viewpoint. Based on the analyses, we show the priority issues for dependability improvement. First, as a prerequisite of this study, we define "accidents," "types of accidents," "severity of accidents," and "faults." Second, we collected as many accident cases of financial information systems as possible over 12 years (1997-2008) from the information contained in four major national newspapers in Japan, news releases on websites, magazines, and books. Third, we analyzed the accident information according to type, severity, faults, and combinations of these factors. As a result, we showed the general trends of significant accidents. Last, based on the results of the analyses, we showed the priority issues for dependability improvement.
Citations: 7
Malware Profiler Based on Innovative Behavior-Awareness Technique
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.53
Shih-Yao Dai, Fedor V. Yarochkin, S. Kuo, Ming-Wei Wu, Yennun Huang
In order to steal valuable data, hackers continually research and develop new techniques for intruding into computer systems. On the other side, security researchers continually analyze and track new malicious techniques in order to protect sensitive data. Many existing analyzers can help security researchers analyze and track new malicious techniques. However, these analyzers do not provide sufficient information for precise assessment and deep analysis. In this paper, we introduce a behavior-based malicious software profiler, named the Holography platform, to help security researchers obtain sufficient information. The Holography platform analyzes virtualization hardware data, including CPU instructions, CPU registers, memory data, and disk data, to obtain high-level behavior semantics of all running processes. High-level behavior semantics provide sufficient information for security researchers to perform precise assessment and deep analysis of new malicious techniques, such as malicious advertisement (malvertising) attacks.
Citations: 6
Parametric Bootstrapping for Assessing Software Reliability Measures
Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.10
Toshio Kaneishi, T. Dohi
Bootstrapping is a statistical technique that replicates the underlying data through resampling and enables us to investigate statistical properties. It is useful for estimating standard errors and confidence intervals of complex estimators of probability distribution parameters from a small amount of data. In software reliability engineering, it is common to estimate software reliability measures from fault data (fault-detection time data) and to focus only on point estimation. However, it is difficult in general to carry out interval estimation, or to obtain the probability distributions of the associated estimators, without applying an approximate method. In this paper, we assume that the software fault-detection process in system testing is described by a non-homogeneous Poisson process, and develop a comprehensive technique to study the probability distributions of significant software reliability measures. Based on maximum likelihood estimation, we assess the probability distributions of estimators such as the initial number of software faults remaining in the software, the software intensity function, the mean value function and the software reliability function, via a parametric bootstrapping method.
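The parametric bootstrap idea can be sketched compactly. For simplicity this example fits a homogeneous Poisson process instead of the paper's NHPP, and the fault-detection times are invented; the loop refits the model to data resimulated from the fitted model, yielding an empirical distribution (and confidence interval) for a reliability estimator:

```python
import math
import random

def fit_rate(times, horizon):
    """MLE of a homogeneous Poisson intensity from failure times."""
    return len(times) / horizon

def simulate(rate, horizon, rng):
    """Draw one replicated fault-detection process from the fitted model."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

# Hypothetical fault-detection times (hours) from system testing.
data = [12, 25, 51, 60, 102, 131, 190, 215, 260, 290]
T = 300.0
lam = fit_rate(data, T)

rng = random.Random(1)
boot = sorted(
    math.exp(-fit_rate(simulate(lam, T, rng), T) * 24)  # R(24h) replicate
    for _ in range(2000))
lo, hi = boot[round(0.025 * 2000)], boot[round(0.975 * 2000)]
print(f"R(24h) point estimate {math.exp(-lam * 24):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

In the paper's setting the same loop would refit the NHPP likelihood to each replicated data set, giving bootstrap distributions for the fault count, intensity, mean value and reliability estimators rather than a single point estimate.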
Citations: 22