A new class of test instrument: The FPGA based module
Pub Date: 2013-08-01 | DOI: 10.1109/MIM.2013.6572948
P. B. Kelly
Testing involves applying stimulus to a device, called the Unit Under Test (UUT), and evaluating the measured response against expected values. Traditional systems use discrete instruments to supply the stimulus and measure the response, but most devices are part of a larger system and may be a component of a closed control loop. Many such devices are designed to generate outputs that depend on some part of those outputs being fed back to the inputs through the rest of the system. To be comprehensive, a test of such a device must include stimulus and response that match, as closely as possible, the way the device is used in the full system. This requires test equipment that can alter the stimulus in response to the UUT's outputs. For low-speed systems, software can often accomplish this, which is the traditional approach, but systems that require much faster response than can practically be achieved in software are simply not tested in this fashion unless custom test hardware is designed to do it. This drives up the cost of test station and test program design, development, and maintenance, making the approach prohibitive except where it is crucial. Recent advancements in Field Programmable Gate Array (FPGA) technology have made a new class of instrument available to the test market. Modules based on standard interfaces are now available at low cost; they provide a large FPGA with external memory, multiple ADC and DAC channels whose digital side is interfaced to the FPGA, a large number of digital I/O pins, and programming interfaces that are fairly easy to use. These modules can replace, at very low acquisition and development cost, the custom electronics previously required to achieve satisfactory results in “Hardware in the Loop” test scenarios. FPGA-based test instruments allow rapid development of complex control systems without custom hardware development. The future impact of such implementations will be reduced test station and test program maintenance costs and problems, since the “custom hardware” is contained in the test program and the hardware it runs on is a commercially available standard part number.
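The closed-loop limitation described above can be made concrete with a small sketch. The Python snippet below is a minimal sketch, under assumed numbers (the plant model, gains, and the 1 ms per-iteration I/O latency are all invented for illustration), of the traditional software-in-the-loop approach: the test program measures the UUT output, recomputes the stimulus, and applies it, so the achievable loop rate is bounded by the software and instrument round-trip latency. An FPGA-resident loop moves that measure/compute/update cycle into hardware.

```python
import time

# Hypothetical plant model standing in for the UUT plus the rest of the system:
# the next output depends on the applied stimulus and on part of the previous
# output being fed back, which is the closed-loop behaviour described above.
def uut_response(stimulus, prev_output, feedback_gain=0.6):
    return 0.4 * stimulus + feedback_gain * prev_output

def software_in_the_loop(setpoint=1.0, iterations=200, io_latency_s=1e-3):
    """Traditional approach: the test software itself closes the loop.

    io_latency_s models the round-trip cost of one measure/compute/update
    cycle through discrete instruments; it bounds the achievable loop rate.
    """
    output, stimulus = 0.0, 0.0
    start = time.perf_counter()
    for _ in range(iterations):
        time.sleep(io_latency_s)           # instrument I/O plus software overhead
        error = setpoint - output          # evaluate the measured response
        stimulus += 0.5 * error            # alter the stimulus accordingly
        output = uut_response(stimulus, output)
    elapsed = time.perf_counter() - start
    return output, iterations / elapsed    # achievable loop rate in Hz

if __name__ == "__main__":
    final, loop_rate = software_in_the_loop()
    print(f"settled output ~ {final:.3f}, software loop rate ~ {loop_rate:.0f} Hz")
    # An FPGA-resident loop runs the same measure/compute/update cycle in
    # hardware, so the loop rate is no longer bounded by host software latency.
```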
{"title":"A new class of test instrument: The FPGA based module","authors":"P. B. Kelly","doi":"10.1109/MIM.2013.6572948","DOIUrl":"https://doi.org/10.1109/MIM.2013.6572948","url":null,"abstract":"Testing involves applying stimulus to a device, called the Unit Under Test (UUT), and evaluating the measured response against the expected values. Traditional systems use discrete instruments to supply the stimulus and measure the response, but most devices are part of a larger system and may be a component of a closed control loop. Many devices are designed to respond to the inputs by generating outputs that are dependent on some part of the output being fed back to the inputs through the rest of the system. To be comprehensive, a test of such a device must include stimulus and response that matches, as closely as possible, the way the device is used in the full system. This requires test equipment that can alter the stimulus in response to the UUT's outputs. For low speed systems, software can often accomplish this, which is the traditional approach, but systems that require much faster response than practically accomplished in software are simply not tested in this fashion unless custom test hardware is designed to do it. This drives up the cost of test station and test program design, development, and maintenance, making it prohibitive except where crucial. Recent advancements in Field Programmable Gate Array (FPGA) technology have made a new class of instrument available to the test market. Modules based on standard interfaces that provide a large FPGA with external memory, multiple ADC and DAC channels with the digital side interfaced to the FPGA, and a large number of digital I/O pins plus programming interfaces that are fairly easy to use are now available at low cost. These modules can replace custom electronics that were required to achieve satisfactory test results in “Hardware in the Loop” test scenarios at very low acquisition and development cost. FPGA based test instruments allow rapid development of complex control systems without custom hardware development. The future impact of such implementations will be reduced station and test program maintenance cost and problems since the “custom hardware” is contained in the test program and the hardware it runs on is a commercially available standard part number.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128302650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Where's the beer? A paradigm shift in flight-line armament testing
Pub Date: 2013-08-01 | DOI: 10.1109/MIM.2013.6572955
L. Gutterman
Air Force armament maintainers like their beer, and they definitely love their beercans. Prior to loading live weapons on an aircraft, maintainers are required to verify that no stray voltage is present and that the firing signals are functioning properly. In the Air Force world, this is done with a small, battery-operated tester called an Armament Circuits Pre-Load Test Set (ACPTS), commonly referred to as a “beercan” due to its shape and size. The typical beercan is a rudimentary test set with few capabilities and limited performance. Its function is to verify that there is no stray voltage on the critical firing lines (squibs) and to verify the presence of the firing signals, including their magnitude and timing, during a valid launch procedure. The typical beercan has only one or two measurement channels, necessitating the manual switching of various adapters to test multiple signals. Typical beercans also lack the ability to emulate weapon signals, precluding any effective testing of “smart” weapons. If any type of fault is detected by the beercan, a different test set is required to troubleshoot and repair the fault. In the F-16 world, this is achieved by the 75501 tester; other aircraft have similar flight-line testers. These testers emulate the weapons, perform a complete test of the weapon system from the cockpit's Multi-Function Display (MFD) to the launch rails, and are also capable of troubleshooting the faults. If the flight-line tester identifies a failure in the launcher or bomb rack, these are removed from the aircraft and taken to the shop for further testing by back-shop testers such as the 75501 (now SST). This test process requires three types of testers, which complicates maintenance logistics and increases maintenance costs. A new breed of beercans has recently been introduced to address this deficiency by improving the test capabilities of the beercan, thus eliminating the flight-line testers and simplifying the maintenance logistics while increasing performance and reducing test and maintenance time. This paper discusses the requirements of flight-line armament testers and introduces a universal beercan with capabilities previously unavailable on the flight line.
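For readers unfamiliar with the checks involved, the Python sketch below illustrates the two verifications the abstract attributes to a beercan: a stray-voltage check on the squib lines and a magnitude/timing check on the firing pulse. The limits, pulse window, and simulated capture are all assumptions for illustration; real pre-load limits come from the applicable technical orders, not from this sketch.

```python
# Hypothetical limits for illustration; real pre-load limits are defined by
# the applicable technical orders, not by this sketch.
STRAY_VOLTAGE_LIMIT_V = 0.25          # assumed "no stray voltage" threshold
FIRE_PULSE_MIN_V = 20.0               # assumed minimum firing-pulse amplitude
FIRE_PULSE_WIDTH_S = (0.005, 0.050)   # assumed valid pulse-width window

def check_stray_voltage(samples_v):
    """Pass only if every squib-line sample stays below the stray-voltage limit."""
    worst = max(abs(v) for v in samples_v)
    return worst <= STRAY_VOLTAGE_LIMIT_V, worst

def check_firing_signal(times_s, samples_v):
    """Verify firing-pulse magnitude and duration during a launch sequence."""
    above = [t for t, v in zip(times_s, samples_v) if v >= FIRE_PULSE_MIN_V]
    if not above:
        return False, 0.0                 # no pulse of sufficient magnitude seen
    width = above[-1] - above[0]
    lo, hi = FIRE_PULSE_WIDTH_S
    return lo <= width <= hi, width

if __name__ == "__main__":
    # Simulated capture: a quiet squib line, then a 20 ms, 28 V firing pulse.
    dt = 0.001
    times = [i * dt for i in range(100)]
    volts = [28.0 if 0.030 <= t < 0.050 else 0.05 for t in times]
    print("stray voltage check (pre-pulse):", check_stray_voltage(volts[:30]))
    print("firing signal check:", check_firing_signal(times, volts))
```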
{"title":"Where's the beer? A paradigm shift in flight-line armament testing","authors":"L. Gutterman","doi":"10.1109/MIM.2013.6572955","DOIUrl":"https://doi.org/10.1109/MIM.2013.6572955","url":null,"abstract":"Air Force armament maintainers like their beer and they definitely love their beercans. Prior to loading live weapons on an aircraft, maintainers are required to verify that no stray voltage is present and that the firing signals are functioning properly. In the Air Force world, this is done with a small, battery-operated tester called an Armament Circuits Pre-Load Test Set (ACPTS), commonly referred to as a “beercan” due to its shape and size. The typical beercan is a rudimentary test set with few capabilities and limited performance. The beercan's function is to verify that there is no stray voltage on the critical firing lines (squibs), and to verify the presence of firing signals including magnitude and timing during a valid launch procedure. The typical beercan only has one or two measurement channels, necessitating the manual switching of various adapters to enable testing of multiple signals. The typical beercans also lack the ability to emulate weapon signals, precluding any effective “smart” weapons testing by beercans. If any type of fault is detected by the beercan, a different test set is required to troubleshoot and repair the fault. In the F-16 world, this is achieved by the 75501 tester and other aircraft have similar flight-line testers. These testers emulate the weapons and perform a complete test on the weapon system from the cockpit's Multi-Function Display (MFD) to the launch rails and are also capable of troubleshooting the faults. If the flight-line tester identifies a failure with the launcher or bomb rack, these are removed from the aircraft and taken to the shop for further testing by back-shop testers such as the 75501 (now SST). This test process requires three types of testers which in turn, complicates the maintenance logistics and increases maintenance costs. A new breed of beercans has been recently introduced to address this deficiency by improving the test capabilities of the beercan, thus eliminating the flight-line testers and simplifying the maintenance logistics while increasing performance and reducing test and maintenance time. This paper discusses the requirements of flight-line armament testers and introduces a universal beercan with capabilities previously unavailable for the flightline.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121416496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boundary scan as a system-level diagnostic tool
Pub Date: 2013-08-01 | DOI: 10.1109/MIM.2013.6572946
L. Ungar
Boundary scan is a testability tool intended to provide independent observability and controllability at the periphery of an IC. For the past two decades, it has been used successfully in manufacturing test to identify shorts, opens, and other manufacturing defects. Until now, however, it has not been widely used as a system-level diagnostic tool, especially for diagnosing military systems requiring field replacement or repair. In this paper, we show that the benefits of boundary scan are as compelling for the support environment as they are for manufacturing test. An important advantage is that, with this technology, tests and diagnoses can be created without requiring intimate knowledge of the circuit design. This is significant as military systems make greater use of commercial off-the-shelf (COTS) equipment, for which schematics are either unavailable or unreliable. The metrics for fault isolation are different from those for fault detection, but as we shall demonstrate, systems containing boundary scan are considerably more diagnosable than those without.
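The controllability/observability argument can be illustrated with the classic boundary-scan interconnect-test idea: drive a unique parallel code onto each net from the output cells and read it back at the input cells, so the captured code names the driver a net is actually connected to. The Python sketch below is illustrative only; the fault model and the open/stuck heuristic are assumptions, not the paper's diagnostic metrics.

```python
# Interconnect-test sketch: each net is driven, via the boundary-scan output
# cells, with its own index in binary (one bit per scan pattern), so the code
# captured at a net's input cell identifies which driver it is really wired to.
def test_vectors(n_nets):
    bits = max(1, (n_nets - 1).bit_length())
    return [[(net >> b) & 1 for net in range(n_nets)] for b in range(bits)]

def diagnose(drive_fn, n_nets):
    """drive_fn(vector) -> list of captured bits; returns a per-net verdict."""
    captured = [drive_fn(vector) for vector in test_vectors(n_nets)]
    verdicts = {}
    for net in range(n_nets):
        seen = 0
        for b, row in enumerate(captured):
            seen |= row[net] << b
        if seen == net:
            verdicts[net] = "ok"
        elif seen < n_nets:
            verdicts[net] = f"shorted to net {seen}"
        else:
            verdicts[net] = "open / stuck"          # assumed heuristic
    return verdicts

if __name__ == "__main__":
    # Assumed fault model: net 2 shorted to net 0 (driver 0 dominates) and
    # net 4 open, floating high at its receiver.
    def faulty_board(vector):
        captured = list(vector)
        captured[2] = vector[0]
        captured[4] = 1
        return captured

    for net, verdict in diagnose(faulty_board, 6).items():
        print(f"net {net}: {verdict}")
```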
{"title":"Boundary scan as a system-level diagnostic tool","authors":"L. Ungar","doi":"10.1109/MIM.2013.6572946","DOIUrl":"https://doi.org/10.1109/MIM.2013.6572946","url":null,"abstract":"Boundary scan is a testability tool intended to provide independent observability and controllability at the periphery of the IC. For the past two decades, it has been used successfully in manufacturing tests to identify shorts, opens and other manufacturing defects. Until now, however, it has not been widely used as a system level diagnostic tool, especially to diagnose military systems requiring field replacement or repair. In this paper, we show that the benefits of boundary scan are as compelling for the support environment as they are to manufacturing test. An important advantage is that with this technology, tests and diagnoses can be created without requiring intimate knowledge of the circuit design. This is significant as military systems use more commercial off the shelf (COTS) equipment, where schematics are either unavailable or unreliable. The metrics for fault isolation are different from those for fault detection, but as we shall demonstrate, systems containing boundary scan are considerably more diagnosable than those without.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126270217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-cost and small footprint solution for testing low-voltage differential signal video displays
Pub Date: 2013-08-01 | DOI: 10.1109/MIM.2013.6572947
D. Tagliente
In recent years, liquid crystal displays (LCDs) have almost completely replaced older technologies such as cathode ray tube (CRT) displays in many industrial, commercial, aerospace, and military applications due to their increased efficiency, decreased weight, and smaller size. Likewise, the technology used to transmit video signals to LCD displays has evolved from analog standards such as the National Television System Committee's (NTSC) RS-170 standard and the Phase Alternating Line (PAL) standard to higher-speed digital standards such as the Digital Visual Interface (DVI) standard, the High-Definition Multimedia Interface (HDMI) standard, and the low-voltage differential signaling (LVDS) standard. This evolution of video standards has created a need for the test environments and test generation devices used to test video displays to mature as well.
{"title":"Low-cost and small footprint solution for testing low-voltage differential signal video displays","authors":"D. Tagliente","doi":"10.1109/MIM.2013.6572947","DOIUrl":"https://doi.org/10.1109/MIM.2013.6572947","url":null,"abstract":"In recent years, liquid crystal displays (LCD) have almost completely replaced older technologies such as cathode ray tube (CRT) displays in many industrial, commercial, aerospace, and military applications due to their increased efficiency, decreased weight, and smaller size. Likewise, the technology used to transmit video signals to LCD displays has evolved from analog standards such as the National Television Standards Council's (NTSC) RS-170 standard and the Phase Alternating Line (PAL) standard to higher speed digital standards such as the Digital Visual Interface (DVI) standard, High-Definition Multimedia Interface (HDMI) standard, and low-voltage differential signaling (LVDS) standard. This evolution of video standards has created a need for the test environments and test generation devices used to test video displays to mature as well.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"532 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131544054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A low-cost arc fault detector for aerospace applications
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334544
R. Grassetti, R. Ottoboni, M. Rossi, S. Toscani
The protection of the electrical plant, equipment, and components in aerospace applications is an active area of research. In recent years, in particular, great effort has been focused on the problem of arc-fault detection, exploiting the impressive advancement of electronic devices. As is well known, arcs are in many cases not detected by conventional overcurrent breakers, even though their effects can be as serious as those produced by a short circuit, since they may cause fires on board aircraft. Arc-fault detection requires recognizing the arc signature contained in the current waveform; facing this problem therefore demands suitable digital signal processing techniques. Detection reliability depends strongly on the criteria adopted to discriminate the arcing condition from other possible artefacts, caused for example by normal electrical transients. In previous work, the authors proposed a technique based on estimating the energy that may be related to arcing activity, and showed that it establishes a solid decision-making process for parallel arc detection. This paper addresses the practical implementation of the proposed method, with particular attention to the impact of the unavoidable measurement uncertainties on its reliability. This analysis has led to the development of an advanced prototype of a low-cost, single-chip parallel arc-fault detector, which can be employed to develop a very attractive AFCB (Arc Fault Circuit Breaker). An extensive experimental campaign has therefore been carried out in the laboratory, using the well-recognized guillotine test to assess the actual behaviour of the developed device.
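To make the signal-processing idea concrete, here is a minimal Python sketch of an energy-based arc indicator in the same spirit as the abstract describes: high-pass the current waveform, compute a sliding-window energy, and compare it against a threshold. The filter, window length, threshold rule, and simulated waveform are all assumptions for illustration and are not the authors' estimator.

```python
import numpy as np

def arc_energy_indicator(current, fs, cutoff_hz=2000.0, window_s=0.01):
    """Sliding-window energy of the high-frequency content of a current trace.

    A one-pole high-pass difference filter stands in for a proper band-pass
    stage; in practice the filter, window and threshold would be calibrated
    against real arcing and transient records.
    """
    alpha = np.exp(-2 * np.pi * cutoff_hz / fs)
    hp = np.empty_like(current)
    prev_x = prev_y = 0.0
    for i, x in enumerate(current):
        prev_y = alpha * (prev_y + x - prev_x)     # one-pole high-pass
        prev_x = x
        hp[i] = prev_y
    win = max(1, int(window_s * fs))
    return np.convolve(hp ** 2, np.ones(win), mode="same") / win

if __name__ == "__main__":
    fs = 50_000
    t = np.arange(0.0, 0.2, 1.0 / fs)
    load = 10 * np.sin(2 * np.pi * 400 * t)        # 400 Hz load current
    rng = np.random.default_rng(0)
    arcing = np.where((t > 0.10) & (t < 0.15),     # simulated arcing burst
                      5 * rng.standard_normal(t.size), 0.0)
    energy = arc_energy_indicator(load + arcing, fs)
    threshold = 5 * np.median(energy)              # assumed threshold rule
    print("arc suspected:", bool((energy > threshold).any()))
```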
{"title":"A low-cost arc fault detector for aerospace applications","authors":"R. Grassetti, R. Ottoboni, M. Rossi, S. Toscani","doi":"10.1109/AUTEST.2012.6334544","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334544","url":null,"abstract":"The protection of the electrical plant, equipment and components in aerospace applications represents a topic of advanced researches. In the last years, in particular, great efforts have been focused on the problem of the arc-fault detection. The impressive advancement of the electronic devices has been exploited. As well known, in many cases the arcs are not detected by the conventional overcurrent breakers, despite their effects can be as serious as those produced by a short-circuit, since they may cause fires on board the aircrafts. Arc-fault detection requires recognizing the arc signature contained in the current waveform. For this reason, an inescapable choice to face this problem is to adopt proper digital signal processing techniques. The detection reliability strongly depends on the criteria adopted in order to discriminate the arcing condition from other possible artefacts, due for example to normal electrical transients. In a previous work, the authors have proposed a technique based on the estimation of the energy which may be related to the arcing activity. It has been proven that it allows to establish a solid decision-making process for the parallel arc detection. In this paper the aspects related to the practical implementation of the proposed method are faced, with particular care to the impact of the unavoidable measurements uncertainties on the reliability of the method. This analysis has led to the development of an advanced prototype of a low-cost single-chip parallel arc fault detector, which can be employed to develop a very attractive AFCB (Arc Fault Circuit Breaker). A deep experimental activity has been hence carried out in laboratory. The well-recognized guillotine test has been used in order to assess the actual behaviour of the developed device.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127507753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method of combining intermittent arc fault technologies
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334571
C. Parkey, C. Hughes, M. Caulfield, M. Masquelier
Intermittent wire faults can be caused by harsh environments, handling, or simply aging of the sheathing. These types of faults are difficult to isolate because of their intermittent nature. Recent advances in intermittent fault detection have provided the aerospace and defense industry with new methods to test aging aircraft wiring. In particular, the use of Low Energy High Voltage (LEHV) methods and Spread Spectrum Time Domain Reflectometry (SSTDR) has shown promise in locating intermittent faults in a variety of situations. These technologies have distinct advantages that best serve the industry when combined in a single package. This paper presents a novel method of combining these technologies in a portable fashion to address the growing need for intermittent fault detection.
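The reflectometry half of the combination can be sketched in a few lines: SSTDR correlates a pseudo-noise sequence against the signal coming back from the cable, and the lag of the echo peak maps to a fault distance. The Python sketch below simulates this with an LFSR-generated sequence and assumed chip rate, velocity factor, and reflection coefficient; it illustrates the principle only, not the hardware described in the paper.

```python
import numpy as np

def pn_sequence(length=1023, seed=0b1000000001):
    """10-bit LFSR +/-1 chip sequence (taps chosen for illustration)."""
    state, out = seed, []
    for _ in range(length):
        bit = ((state >> 9) ^ (state >> 6)) & 1      # feedback taps 10 and 7
        state = ((state << 1) | bit) & 0x3FF
        out.append(1.0 if bit else -1.0)
    return np.array(out)

def locate_fault(chip_rate_hz=50e6, velocity_factor=0.7,
                 delay_chips=37, reflection=-0.4):
    """Correlate the injected sequence against the line signal; the lag of the
    echo peak gives the round-trip delay, hence the distance to the fault."""
    c = 3e8
    pn = pn_sequence()
    echo = reflection * np.roll(pn, delay_chips)     # attenuated, delayed echo
    noise = 0.05 * np.random.default_rng(1).standard_normal(pn.size)
    received = pn + echo + noise
    corr = np.array([np.dot(received, np.roll(pn, k)) for k in range(pn.size)])
    corr[0] = 0.0                       # ignore the incident (zero-lag) peak
    lag = int(np.argmax(np.abs(corr)))
    distance_m = 0.5 * (lag / chip_rate_hz) * c * velocity_factor
    return lag, distance_m

if __name__ == "__main__":
    lag, dist = locate_fault()
    print(f"echo at lag {lag} chips -> fault roughly {dist:.1f} m down the cable")
```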
{"title":"A method of combining intermittent arc fault technologies","authors":"C. Parkey, C. Hughes, M. Caulfield, M. Masquelier","doi":"10.1109/AUTEST.2012.6334571","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334571","url":null,"abstract":"Intermittent wire faults can be caused by harsh environments, handling or simply aging of the sheathing. These types of faults are difficult to isolate due to the intermittent nature. Recent advances in intermittent fault detection have provided the aerospace and defense industry new methods to test aging aircraft wiring. In particular the use of Low Energy High Voltage (LEHV) methods and Spread Spectrum Time Domain Reflectometry (SSTDR) has shown promise in locating intermittent faults in a variety of situations. These technologies have distinct advantages which best serve the industry in a combined package. This paper presents a novel method of combining these technologies in a portable fashion to solve the growing need for intermittent fault detection.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125839063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An open source software framework for the implementation of an open systems architecture, run-time system
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334552
M. Cornish, M. Brown, Anand Jain, T. Lopes
This paper presents the outcome of a UK MoD sponsored development effort to provide a suite of source code that will be made available to contractors employed in the provision of test system software to the MoD and coalition partners. The primary purpose of this 'open source software' is to provide a working test system software framework that meets the requirements of the MoD's DEFSTAN 66-31 [1] (Open Systems Architecture); in particular, the use of IEEE 1641 [2] and ATML [3]. Using the interfaces and data exchange formats defined by both IEEE 1641 and ATML, a software framework has been written to bring together COTS tools and test information in an application that sees the ATML Test Description through to the UUT test pin. Specifically, the framework is broken down into the following areas: the ATML Test Description Importer, which converts test requirements into a test program implementation carrying 1641 Test Procedure Language; the 1641 Signal Translator, which maps test signal requirements onto test resource capabilities (making use of the ATML Test Station Description); Signal Routing, which connects test resources to UUT pins; the 1641 Test Signal Framework IDL Generator, which generates a run-time interface from 1641 signal libraries; and the 1641 Run-time, which implements a 1641 run-time interface with calls to the underlying test resources. COTS tools have been chosen from three different manufacturers, encompassing test program generation, test signal allocation, and switch path routing. This project is known as the Open Systems Architecture Runtime System (OSA RTS).
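To illustrate what a signal-translator step does, the following Python sketch matches simplified signal requirements against simplified station resource capabilities. The class names and fields are invented stand-ins; the real IEEE 1641 and ATML Test Station Description schemas carry far more information, and this is not the OSA RTS code.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified stand-ins for a 1641 signal requirement and
# an ATML Test Station Description capability entry.
@dataclass(frozen=True)
class SignalRequirement:
    role: str            # e.g. "Sinusoid" source or "VoltageMeasure"
    min_hz: float
    max_hz: float
    amplitude_v: float

@dataclass(frozen=True)
class ResourceCapability:
    instrument: str
    role: str
    min_hz: float
    max_hz: float
    max_amplitude_v: float

def allocate(requirements, capabilities):
    """Map each signal requirement onto the first station resource that covers it."""
    allocation = {}
    for req in requirements:
        match = next((cap for cap in capabilities
                      if cap.role == req.role
                      and cap.min_hz <= req.min_hz
                      and cap.max_hz >= req.max_hz
                      and cap.max_amplitude_v >= req.amplitude_v), None)
        if match is None:
            raise LookupError(f"no station resource satisfies {req}")
        allocation[req] = match.instrument
    return allocation

if __name__ == "__main__":
    requirements = [SignalRequirement("Sinusoid", 1e3, 1e6, 2.0)]
    capabilities = [ResourceCapability("AWG-1", "Sinusoid", 10.0, 50e6, 10.0),
                    ResourceCapability("DMM-1", "VoltageMeasure", 0.0, 1e5, 300.0)]
    print(allocate(requirements, capabilities))
```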
{"title":"An open source software framework for the implementation of an open systems architecture, run-time system","authors":"M. Cornish, M. Brown, Anand Jain, T. Lopes","doi":"10.1109/AUTEST.2012.6334552","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334552","url":null,"abstract":"This paper presents the outcome of a UK MoD sponsored development effort to provide a suite of source code that will be made available to contractors employed in the provision of test system software to the MoD and coalition partners. The primary purpose of this `open source software' is to provide a working test system software framework that meets the requirements of the MoD's DEFSTAN 66-31 [1] (Open Systems Architecture); in particular, the use of IEEE 1641 [2] and ATML [3]. Using the interfaces and data exchange formats defined by both IEEE 1641 and ATML, a software framework has been written to bring together COTS tools and test information, in an application that sees ATML Test Description through to UUT test pin. Specifically, the framework is broken down into the areas of: ATML Test Description Importer - Converting test requirements into a test program implementation carrying 1641 Test Procedure Language. 1641 Signal Translator - Mapping test signal requirements onto test resource capabilities (making use of ATML Test Station Description). Signal Routing - Connecting test resources to UUT pins. 1641 Test Signal Framework IDL Generator - Generating a run-time interface from 1641 signal libraries. 1641 Run-time - Implementing a 1641 runtime interface with calls to underlying test resources. COTS tools have been chosen from three different manufacturers, encompassing test program generation, test signal allocation and switch path routing. This project is known as the Open Systems Architecture Runtime System (OSA RTS).","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123453262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Component obsolescence management model for long life cycle embedded system
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334547
Xiaozhou Meng, B. Thornberg, L. Olsson
This paper discusses the component obsolescence problem and presents a mathematical model for life-cycle analysis of long-life-cycle embedded system maintenance. The model estimates the minimized management cost for different system architectures. Matlab is used to generate a graph, and Lingo is used for linear programming. A simple CAN controller system case study shows how the model is applied; the result is a minimized management cost and an optimized management time schedule. The results of the model experiments meet our expectations. Although the model involves many simplifications and limitations, it can provide management strategy guidance to designers who face component obsolescence problems.
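As a flavour of the kind of linear program involved, the toy formulation below (in Python with SciPy, rather than Lingo) chooses spare-part quantities for two buy opportunities so that cumulative demand is always covered at minimum purchase-plus-holding cost. All quantities, prices, and the two-buy structure are invented for illustration and are much simpler than the paper's model.

```python
from scipy.optimize import linprog

# Toy last-time-buy planning LP, loosely in the spirit of an obsolescence cost
# model: all numbers below are invented for illustration.
unit_price = [40.0, 55.0]        # price at buy 0 and at the later buy
holding = [18.0, 0.0]            # extra holding cost per unit bought early
demand_first_period = 120        # units needed before the second buy
demand_total = 260               # units needed over the remaining life cycle

c = [unit_price[0] + holding[0], unit_price[1] + holding[1]]
A_ub = [[-1.0, 0.0],             # x0 >= demand_first_period
        [-1.0, -1.0]]            # x0 + x1 >= demand_total
b_ub = [-demand_first_period, -demand_total]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("buy plan:", res.x, " minimum cost:", res.fun)
```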
{"title":"Component obsolescence management model for long life cycle embedded system","authors":"Xiaozhou Meng, B. Thornberg, L. Olsson","doi":"10.1109/AUTEST.2012.6334547","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334547","url":null,"abstract":"This paper discusses the component obsolescence problem and presents a mathematic model for life cycle analysis of long life cycle embedded system maintenance. This model can estimate minimized management costs for different system architecture. Matlab is used to generate a graph and Lingo is used for linear programming. A simple CAN controller system case study is shown to apply this model. A minimized management cost and an optimized management time schedule are given as the result. The responses from the experiments of the model meet our expectation. Although the model has lots of simplifications and limitations, it can give management strategy guidance to the designers who suffer from component obsolescence problems.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"371 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122835601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architecting high-throughput PXI systems
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334555
D. Nosbusch
With the PC industry evolving from PCI to PCI Express in late 2005, the PXI industry was able to take advantage of this increase in available bus bandwidth and subsequently introduced the PXI Express specification. The PCI Express bus continues to evolve while maintaining backwards compatibility, with the release of PCI Express 2.0 in 2010, and PXI Express platform performance follows. These advancements enable PXI to meet the requirements of test and measurement applications that demand high data throughput, but they can also add a problematic level of complexity to system architectures that require these increased bus capacities. Most noticeably, PCI Express technology has enabled high-speed data streaming architectures in which data transfer between instrument and memory occurs at rates on the order of gigabytes per second. Applications that require this capability include RF record and playback, noise mapping, and algorithm prototyping. PCI Express has also enabled PXI platform products such as chassis and controllers to support the back end of high-performance PXI instrumentation, where acquisition sample rates and signal bandwidths on the order of gigahertz are common. With the introduction of Field Programmable Gate Arrays (FPGAs) for test came the need for more direct communication between PXI modules, from which peer-to-peer streaming was born. Combining all of these technologies enabled by PCI Express, peer-to-peer streaming between high-performance instrumentation and an FPGA module co-processor can significantly reduce the time required to return a complex measurement. As PXI test and measurement systems continue to grow in this direction, it becomes increasingly important to understand the components of high-throughput systems and the considerations needed to ensure that bottlenecks are not created. An evaluation of system bandwidth capabilities must account for all of the communication links, from the instrumentation analog front end to the capacity of the data storage memory.
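A first-order bandwidth budget of the kind the abstract calls for is simple arithmetic: the required sustained stream rate (sample rate × bytes per sample × channels) must fit within every link between the digitizer front end and storage. The Python sketch below uses assumed example figures for the instrument and the link capacities; the numbers are illustrative, not specifications of any particular chassis or controller.

```python
GB = 1e9

def stream_rate(sample_rate_hz, bytes_per_sample, channels):
    """Sustained rate the instrument pushes toward host memory or disk."""
    return sample_rate_hz * bytes_per_sample * channels

def find_bottlenecks(required_bps, link_capacities_bps):
    """Every link between front end and storage must carry the full stream."""
    return [name for name, cap in link_capacities_bps.items() if cap < required_bps]

if __name__ == "__main__":
    # Assumed example: a 2-channel digitizer at 250 MS/s, 2 bytes per sample.
    required = stream_rate(250e6, 2, 2)                       # 1.0 GB/s
    links = {                                                 # assumed usable rates
        "PCIe 2.0 x4 module slot (~2 GB/s)": 2.0 * GB,
        "chassis backplane uplink (~4 GB/s)": 4.0 * GB,
        "RAID storage array (~0.7 GB/s sustained)": 0.7 * GB,
    }
    bottlenecks = find_bottlenecks(required, links)
    print(f"required stream rate: {required / GB:.2f} GB/s")
    print("bottlenecks:", bottlenecks if bottlenecks else "none")
```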
{"title":"Architecting high-throughput PXI systems","authors":"D. Nosbusch","doi":"10.1109/AUTEST.2012.6334555","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334555","url":null,"abstract":"With the PC industry evolving from PCI to PCI Express in late 2005, the PXI industry was able to take advantage of this increase in available bus bandwidth and subsequently introduced the PXI Express specification. The PCI Express bus continues to evolve, while maintaining backwards compatibility, with the release of PCI Express 2.0 in 2010, and the PXI Express platform performance follows. These advancements enable PXI to meet the requirements of test and measurement applications that demand high data throughput capabilities. At the same time, they can problematically add a level of complexity to system architectures that require these increased bus capacities. Most noticeably, PCI Express technology has enabled high-speed data streaming architectures where data transfer between instrument and memory occurs at a rate on the order of gigabytes per second. Applications that require this capability include RF record and playback, noise mapping, and algorithm prototyping. At the same time PCI Express has also enabled the PXI platform products like chassis and controllers to support the back-end of high performance PXI instrumentation where acquisition sample rates and signal bandwidths on the order of gigahertz are common. With the introduction of Field Programmable Gate Arrays (FPGAs) for test, came the need to communicate between PXI modules in a more direct form, from which peer-to-peer streaming was born. Combining all of these technologies enabled by PCI Express, peer-to-peer streaming between high performance instrumentation and an FPGA module co-processor can significantly reduce the time required to return a complex measurement. As PXI test and measurement systems continue to grow in this direction it becomes increasingly important to understand the components of high throughput systems and the considerations that must be taken to ensure bottlenecks are not created. An evaluation of system bandwidth capabilities must account for all of the communication links, from the instrumentation analog front-end to the capacities of data storage memory.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126507915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test means at Airbus Military: Covering the aircraft test life-cycle with a common and standard approach
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334554
Pedro Lopez Fernandez, B. Ceron
Several decades of experience with test means led Airbus Military, by the year 2000, to define and develop a common approach covering all test activities across the aircraft life cycle. The continuous evolution of the technologies involved, as well as of industry requirements, is pushing the company toward a new generation of test means. Making the right choice on each issue will be critical to future test facilities and will contribute, among other things, to maintaining or improving Airbus Military's position in the world market for military transport aircraft.
{"title":"Test means at airbus military: Covering the aircraft test life-cycle with a common and standard approach","authors":"Pedro Lopez Fernandez, B. Ceron","doi":"10.1109/AUTEST.2012.6334554","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334554","url":null,"abstract":"Several decades of experience at test means led Airbus Military, by year 2000, to define and develop a common approach to cover all test activities at aircraft life-cycle. The continuous evolution of the technologies involved, as well as the evolution of the industry requirements, pushes the company to a new generation of test means. The right choice at every issue will be critical to future test facilities, what will contribute, among other issues, to maintain or improve Airbus Military status at military transport aircraft world market.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121982774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}