Using RF Recording Techniques to Resolve Wireless Channel Interference Problems
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334557
David Murray
The user environment for wireless devices is complex and includes many sources of interference, ranging from the multitude of devices in the wireless landscape to other non-communications-based RF sources. As RF power is spread over wider bandwidths to improve range resolution, increase data rates, and decrease the probability of detection, RF devices encounter a wider bandwidth of interference. As designers look to create robust solutions that can perform in such environments, it becomes more important to recreate those conditions in the lab or to model those environments.
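As a rough, self-contained illustration of the record-and-replay idea (not taken from the paper), the Python sketch below combines a stand-in recorded interference capture with a clean baseband signal so a receiver algorithm can be exercised against field-like conditions; the sample rate, signal parameters, and synthesized "capture" are all assumptions made for the example.

```python
# Illustrative sketch only: recreating a captured RF environment in simulation.
# In practice the interference samples would come from an RF recorder; here a
# stand-in capture (noise plus a narrowband interferer) is synthesized.
import numpy as np

fs = 50e6                                   # assumed complex sample rate, Hz
n = 500_000
t = np.arange(n) / fs

# Stand-in "recorded" environment: broadband noise plus a 2 MHz interferer.
interference = 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))
interference += 0.2 * np.exp(1j * 2 * np.pi * 2e6 * t)

# Clean signal of interest (placeholder tone at 5 MHz).
signal = 0.5 * np.exp(1j * 2 * np.pi * 5e6 * t)

# "Replay": add the recorded environment to the signal under test and report
# the resulting signal-to-interference ratio seen by the receiver algorithm.
received = signal + interference
sir_db = 10 * np.log10(np.mean(np.abs(signal) ** 2) /
                       np.mean(np.abs(interference) ** 2))
print(f"Simulated SIR with replayed interference: {sir_db:.1f} dB")
```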
{"title":"Using RF Recording Techniques to Resolve Wireless Channel Interference Problems","authors":"David Murray","doi":"10.1109/AUTEST.2012.6334557","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334557","url":null,"abstract":"The user environment for wireless devices is complex and includes many sources of interference, from the multitude of devices in the wireless landscape or other non-communications-based RF sources. As the RF power is spread over wider bandwidths to improve range resolution, increase data rates, and decrease probability of detections, RF devices encounter a wider bandwidth of interference. As designers look to create robust solutions that can perform in such environments it becomes more important to create those conditions in the lab, or model those environments.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130778136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testability modeling usage in design-for-test and product lifecycle cost reduction
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334543
J. Valfre
Weapon systems have become increasingly complex while customer funding has become constrained. Customers and contractors are in an environment where the cost of test systems has to be reduced while still testing effectively in order to remain competitive. In an effort to reduce total development and lifecycle cost, companies are using Design-For-Test (DFT) methodologies to increase Built-In-Test (BIT) coverage and reduce the need for external Special Test Equipment (STE). Using test coverage analysis tools during prime hardware design efforts has benefits including identification of gaps in test capability, increased test coverage, test strategy optimization, increased accessibility, improved fault isolation, and a reduction in overall test cost. This paper will explore the concept of testability modeling and how it can be applied to maximize system test coverage, derive STE and BIT requirements, and provide increased circuit accessibility for DFT considerations. Test modeling tools enable designers to perform test coverage and testability analyses that assist in identifying hardware design improvements needed to gain greater test coverage. The tools facilitate iterative analysis of designs at multiple assembly levels and at different design maturities, and allow designers and test engineers to relate functional test coverage, fault coverage, and fault isolation to varying test cases such as production acceptance and design verification testing. Output of the testability model can assist in optimizing test strategies as well as provide insight into failure rates and failure modes when reliability data are included. Used as part of the design iteration, this process can be repeated at different design and verification stages to produce a product that provides circuit access to test itself, maximizes test coverage while minimizing test equipment, predicts failure modes, and identifies line replaceable units (LRUs). Testability modeling can significantly reduce the cost of test equipment development, lifecycle cost, and recurring unit production cost, thus making the product more affordable to build, deliver, and deploy.
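To make the coverage and isolation metrics above concrete, here is a small, generic sketch using a dependency (D-matrix) view of testability; the tests, faults, and detection entries are invented for illustration and do not represent the modeling tool discussed in the paper.

```python
# Illustrative sketch (not the paper's tool): a dependency-matrix view of
# testability, where each test either detects a hypothesized fault or not.
# From such a matrix one can compute fault coverage and fault-isolation
# ambiguity groups, the kinds of figures a testability model feeds back into
# DFT trade studies.
from collections import defaultdict

# Rows: candidate tests (BIT or STE); columns: hypothesized faults.
# A 1 means the test detects the fault.  Values are made up for illustration.
d_matrix = {
    "BIT_ram":      {"U1_fail": 1, "U2_fail": 0, "U3_fail": 0, "U4_fail": 0},
    "BIT_power":    {"U1_fail": 0, "U2_fail": 1, "U3_fail": 1, "U4_fail": 0},
    "STE_loopback": {"U1_fail": 0, "U2_fail": 1, "U3_fail": 1, "U4_fail": 0},
}
faults = sorted({f for row in d_matrix.values() for f in row})

# Fault coverage: fraction of faults detected by at least one test.
detected = {f for f in faults if any(row[f] for row in d_matrix.values())}
coverage = len(detected) / len(faults)

# Fault isolation: faults with identical test signatures are ambiguous.
groups = defaultdict(list)
for f in faults:
    signature = tuple(d_matrix[t][f] for t in sorted(d_matrix))
    groups[signature].append(f)
ambiguity_groups = [g for g in groups.values() if len(g) > 1]

print(f"Fault coverage: {coverage:.0%}")
print("Ambiguity groups:", ambiguity_groups)   # U2/U3 need a separating test
print("Undetected faults:", sorted(set(faults) - detected))  # DFT gap: U4
```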
{"title":"Testability modeling usage in design-for-test and product lifecycle cost reduction","authors":"J. Valfre","doi":"10.1109/AUTEST.2012.6334543","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334543","url":null,"abstract":"Weapon systems have become increasingly complex and customer funding has become constricted. Customers and contractors are in an environment where the cost of test systems has to be reduced yet still test effectively in order to remain competitive. In an effort to reduce the total development and lifecycle cost, companies are using Design-For-Test (DFT) methodologies to increase Built-In-Test (BIT) coverage and reduce the need for external Special Test Equipment (STE). Using test coverage analysis tools during prime hardware design efforts has benefits including identification of gaps in test capability, increased test coverage, test strategy optimization, increased accessibility, fault isolation and a reduction in overall test cost. This paper will explore the concept of testability modeling and how it can be applied to maximize system test coverage, derive STE and BIT requirements and provide increased circuit accessibility for usage in DFT considerations. Test modeling tools enable designers to formulate test coverage and testability analysis that assists in identification of suggested hardware design improvements in order to gain greater test coverage. The tools facilitate an iterative analysis of designs at multiple assembly levels and at different design maturities and can allow for designers and test engineers to relate functional test coverage, fault coverage and fault isolation to varying test cases such as production acceptance and design verification testing. Output of the testability model can assist in optimization of test strategies as well as provide insight into failure rates and failure modes with the inclusion of reliability data. Used as part of the design iteration, this process can be repeated at different design and verification stages to produce a product which provides circuitry access to test itself, maximize test coverage while minimizing test equipment, and predict failure modes and identify line replaceable units (LRUs). Testability modeling can significantly reduce the cost of test equipment development, lifecycle cost and recurring unit production cost thus making the product more affordable to build, deliver and deploy.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123691298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using mobile devices on ATE systems
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334536
I. Williams
On large UUTs such as helicopters or jet aircraft, it is often difficult to perform required tests using a static test station located outside the UUT. For example, running a test sequence that requires a technician to follow a set of instructions, such as toggling breakers and switches in an aircraft cockpit, can be challenging and time-consuming. This situation may require the technician to move in and out of the cockpit after each test, or perhaps even require the use of two technicians: one reading the instructions and the other performing the task. Integrating mobile devices into ATE systems allows the technician to move freely about the UUT, both inside and out, while running the TPS. The mobile device displays the instructions and receives the technician's response while the main ATE console executes the test program and collects the data. Mobile applications for ATE are rapidly moving from a niche market to the industry mainstream. However, mobile devices are so ubiquitous in today's technology landscape that targeting all device types from an ATE standpoint can be difficult and costly. Screen size, screen resolution, hardware characteristics, operating systems, and even operating system versions all need to be considered when targeting a mobile device. Tablets, with their 10-inch high-definition (HD) touch screens, appear to be best suited for ATE mobile applications, but almost any mobile device may be used. Arguably, there are four major tablet platforms available today, each with its own operating system: Apple iOS (iPad), Android, BlackBerry Tablet OS, and Microsoft. Any one of these is suitable for ATE mobile applications, but each platform has advantages and disadvantages. This paper discusses the use of mobile devices to extend ATE test applications and what to consider when choosing to develop ATE applications that target mobile devices.
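One possible shape for the console-to-tablet exchange described above is sketched below; it is a hypothetical minimal example (endpoint behavior, port, and JSON fields are assumptions), not the architecture of any particular ATE product.

```python
# Minimal sketch (not the paper's implementation) of one way an ATE console
# could push operator prompts to a mobile device: the console runs a small
# HTTP endpoint, the tablet GETs the current instruction and POSTs the
# operator's answer.  Endpoint names and the JSON shape are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"prompt": "Set BATTERY switch to ON, then tap OK", "response": None}

class PromptHandler(BaseHTTPRequestHandler):
    def do_GET(self):                     # tablet polls for the next step
        body = json.dumps({"prompt": STATE["prompt"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):                    # tablet returns the operator input
        length = int(self.headers.get("Content-Length", 0))
        STATE["response"] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()
        # The test executive would resume the sequence once a response exists.

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PromptHandler).serve_forever()
```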
{"title":"Using mobile devices on ATE systems","authors":"I. Williams","doi":"10.1109/AUTEST.2012.6334536","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334536","url":null,"abstract":"On large UUTs such as helicopters or jet aircraft, it is often difficult to perform required tests using a static test station located outside the UUT. For example, running a test sequence that requires a technician to follow a set of instructions, such as toggling breakers and switches in an aircraft cockpit, can be challenging and time-consuming. This situation may require the technician to move in and out of the cockpit after each test, or perhaps even require the use of two technicians - one reading off the instructions and the other performing the task. Integrating mobile devices into ATE systems allows the technician to freely move about the UUT, both inside and out, while running the TPS. The mobile device displays the instructions and receives the technician's response while the main ATE console executes the test program and collects the data. Mobile applications for ATE are rapidly moving from a niche market to industry mainstream. However, mobile devices are so ubiquitous in today's technology that targeting all device types from an ATE standpoint can be difficult and costly. Screen size, screen resolution, hardware characteristics, operating systems and even the operating system versions all need to be considered when targeting a mobile device. Tablets, with their 10 inch high-definition (HD) touch screens appear to be best suited for ATE mobile applications but almost any mobile device may be used. Arguably, there are four major tablet platforms available today, each with its own operating system: Apple iOS (iPad), Android, BlackBerry Tablet OS, and Microsoft. Any one of these is suitable for ATE mobile applications but each platform has advantages and disadvantages. This paper discusses the use of mobile devices to extend ATE test applications and what to consider when choosing to develop ATE applications that target mobile devices.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125495392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a plug-in model to simplify and enhance ATE test software capabilities
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334558
Jervin Justin, L. Lindstrom, A. Jain
An ATE software application or test executive must perform a variety of tasks in addition to simply sequencing and running functional tests on the device under test (DUT). These tasks include prompting the test operator for a serial number, displaying test results, logging the results of the tests to a report, performing test system calibration and self-tests, and more. In most test executive software applications, the code responsible for these additional tasks is tightly integrated with the process code that is responsible for executing the actual tests. Because of this integration, it is difficult to modify one set of code without affecting the other. This paper discusses how to decouple the process code from the task code by implementing a plug-in-based architecture for the test software. A plug-in-based architecture offers several benefits. Plug-ins can be used to improve or modify existing behavior, such as results processing, or to add entirely new functionality, such as the ability to interface with web services or implement a power-on self-test sequence, in a very modular fashion. These and other benefits will also be discussed in the paper. These topics will be discussed in the general context of automated test software. Finally, this paper will demonstrate an implementation of this plug-in-based architecture using a COTS test executive.
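A minimal sketch of the decoupling idea is shown below; the plug-in directory layout and the `process_results` hook name are assumptions for illustration, not the API of any specific COTS test executive.

```python
# A minimal sketch of the plug-in idea under discussion: result-processing
# plug-ins are discovered at run time and invoked after the test sequence,
# so report logging, database upload, or web-service calls can change without
# touching the sequencing code.
import importlib.util
import pathlib

def load_plugins(plugin_dir="plugins"):
    """Import every .py file in plugin_dir and keep those exposing the hook."""
    plugins = []
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "process_results"):
            plugins.append(module)
    return plugins

def run_sequence():
    """Stand-in for the test executive's process code (sequencing only)."""
    return {"uut_serial": "SN0001", "steps": [("P3V3 rail", "PASS"),
                                              ("RX sensitivity", "FAIL")]}

if __name__ == "__main__":
    results = run_sequence()
    for plugin in load_plugins():          # task code lives in the plug-ins
        plugin.process_results(results)
```

Because the sequencing code only iterates over whatever plug-ins it finds, adding a new report format or web-service upload means dropping a new file into the plug-in directory rather than modifying and revalidating the process code.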
{"title":"Using a plug-in model to simplify and enhance ATE test software capabilities","authors":"Jervin Justin, L. Lindstrom, A. Jain","doi":"10.1109/AUTEST.2012.6334558","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334558","url":null,"abstract":"An ATE software or test executive must perform a variety of tasks in addition to simply sequencing and running functional tests on the device under test (DUT). These tasks include prompting the test operator for a serial number and displaying test results, logging the results of the tests to a report, test system calibration and self-tests, and more. In most test executive software applications, the code responsible for these additional tasks is tightly integrated with the process code that is responsible for executing the actual tests. Because of this integration, it is difficult to modify one set of code without affecting the other. This paper discusses how to decouple the process code from the task code by implementing a plug-in-based architecture for the test software. A plug-in-based architecture offers several benefits. Plug-ins can be used to improve or modify existing behavior, such as results processing, or to add entirely new functionality, such as the ability to interface with web services or implement a power-on self-test sequence in a very modular fashion. These and other benefits will also be discussed in the paper. These topics will be discussed in the general context of automated test software. Finally, this paper will demonstrate an implementation of this plug-in based architecture using a COTS test executive.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121646947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel diagnostic and prognostic techniques using electromagnetic interference (EMI) measurements to detect degradation in electronic equipment
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334563
W. Duff, L. Ungar
Electronic equipment is subject to degradation and failure as a result of aging and corrosion. Diagnosing the problem down to the culprit component, however, is usually not straightforward. Strategic and novel electromagnetic interference (EMI) measurements can be utilized to detect and diagnose failures or degradations. Measuring and monitoring the EMI characteristics of a Unit Under Test (UUT) can indicate that the circuit is experiencing an anomaly. The approach is broadly applicable, but in this paper we focus our discussion on power supplies. Within a power supply, the Fourier series of an ideal full-wave rectifier output contains only even harmonics of the input waveform. If one of the diodes is degraded or has failed, the output spectral components at the input signal frequency and at the odd harmonics of the input will no longer be zero. An odd harmonic of the input signal frequency therefore provides an indication of degradation or failure of one of the diodes in the bridge. We present potential measuring and monitoring equipment that can provide in situ, non-intrusive prognostic and diagnostic results in operational circuits. The EMI signatures help pinpoint the culprit component, and repair decisions can be made before the unit is removed from its environment, resulting in substantial savings in support costs.
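The harmonic argument can be checked numerically; the short sketch below (an illustration, not the paper's measurement setup) compares the spectrum of an ideal full-wave rectified sine with that of a bridge in which one diode path is assumed to conduct at only 60 % strength.

```python
# Numerical check of the stated signature: an ideal full-wave rectified sine
# contains only even harmonics of the line frequency, while a bridge with one
# weak diode passes half-cycles asymmetrically and odd harmonics appear.
import numpy as np

f_line, fs, n = 60.0, 60.0 * 1024, 1024        # one line cycle, 1024 samples
t = np.arange(n) / fs
v_in = np.sin(2 * np.pi * f_line * t)

ideal = np.abs(v_in)                               # healthy bridge: |sin|
degraded = np.where(v_in >= 0, v_in, -0.6 * v_in)  # assumed 40 % weak diode

def harmonic_mag(x, k):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return spec[k]                       # bin k is the k-th harmonic here

for name, x in [("ideal", ideal), ("degraded", degraded)]:
    odd = harmonic_mag(x, 1) + harmonic_mag(x, 3)
    even = harmonic_mag(x, 2) + harmonic_mag(x, 4)
    print(f"{name:9s} odd(1st+3rd)={odd:.4f}  even(2nd+4th)={even:.4f}")
# The odd-harmonic terms are ~0 for the ideal case and clearly nonzero for
# the degraded bridge, which is the diagnostic indicator described above.
```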
{"title":"Novel diagnostic and prognostic techniques using electromagnetic interference (EMI) measurements to detect degradation in electronic equipment","authors":"W. Duff, L. Ungar","doi":"10.1109/AUTEST.2012.6334563","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334563","url":null,"abstract":"Electronic equipment are subject to degradation and failure as a result of aging and corrosion. Diagnosing to the culprit component, however, is usually not obvious. Some strategic and novel electromagnetic interference (EMI) measurements can be utilized to detect and diagnose failures or degradations. Measuring and monitoring the EMI characteristics of a Unit Under Test (UUT) indicates that the circuit is experiencing an anomaly. The approach is ubiquitous, but in this paper we will focus our discussion to power supplies. Within a power supply, the Fourier series of a full wave rectifier contains only even harmonics of the input waveform. If one of the diodes is degraded or has failed, the output spectral component at the input signal frequency and the odd harmonics of the input will not be zero. An odd harmonic of the input signal frequency will provide an indication of degradation or failure of one of the diodes in the bridge. We present potential measuring and monitoring equipment that can provide in situ non-intrusive prognostic and diagnostic results in operational circuits. The EMI signatures will help pinpoint the culprit component, and repair decisions can be made before the unit is removed from its environment, resulting in substantial savings in support costs.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115940456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increasing the resolution of a uniform quantizer using a deterministic dithering signal
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334521
Nathan E. West, G. Scheets
Adding a dither signal to a signal to be measured is a known technique for improving the accuracy of a quantizer output. In this paper a measurement called effective bits is used to compare un-dithered signals, stochastically dithered signals, and deterministically dithered signals. A deterministic dither signal is found that adds one effective bit using only two dither points. With this dither signal, the number of effective bits continues to grow logarithmically with the number of dither points added.
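The sketch below is a generic numerical illustration of the effect, not the paper's exact dither design: a 6-bit uniform quantizer is applied with and without a two-point dither of {0, q/2} that is subtracted after quantization, and the effective bits are estimated from the residual error power.

```python
# Quick illustration: averaging quantizer outputs taken with a known,
# subtracted dither recovers amplitude information finer than one LSB.
import numpy as np

bits = 6
q = 1.0 / (2 ** bits)                       # LSB for a 0..1 V range

def quantize(v):
    return np.round(v / q) * q              # ideal uniform mid-tread quantizer

rng = np.random.default_rng(0)
true_values = rng.uniform(0.2, 0.8, 10_000) # DC levels to be measured

# No dither: a single conversion per value.
err_plain = quantize(true_values) - true_values

# Two-point deterministic dither {0, q/2}: two conversions per value, dither
# subtracted after quantization, results averaged.
dithers = np.array([0.0, q / 2])
codes = np.stack([quantize(true_values + d) - d for d in dithers])
err_dith = codes.mean(axis=0) - true_values

def effective_bits(err):
    # Compare the measured error power to the q^2/12 power of an ideal
    # quantizer and express the improvement in bits.
    return bits + 0.5 * np.log2((q ** 2 / 12) / np.mean(err ** 2))

print(f"effective bits, no dither  : {effective_bits(err_plain):.2f}")
print(f"effective bits, 2-pt dither: {effective_bits(err_dith):.2f}")
```

With the two-point dither, the averaged error is confined to roughly ±q/4 instead of ±q/2, which accounts for the one-bit gain; in this construction, doubling the number of evenly spaced dither points halves the error span again, consistent with the logarithmic growth of effective bits described above.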
{"title":"Increasing the resolution of a uniform quantizer using a deterministic dithering signal","authors":"Nathan E. West, G. Scheets","doi":"10.1109/AUTEST.2012.6334521","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334521","url":null,"abstract":"Adding a dither signal to a signal to be measured is a known technique for improving the accuracy of a quantizer output. In this paper a measurement called effective bits is used to compare un-dithered signals, stochastically dithered signals, and deterministically dithered signals. A deterministic dither signal is found that adds one effective bit using only two dither points. With this dither signal, the number of effective bits continues to grow logarithmically with the number of dither points added.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115395191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prognostics of Power Electronics, methods and validation experiments
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334578
Chetan S. Kulkarni, J. Celaya, G. Biswas, K. Goebel
Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics used to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and on the development of accelerated aging methodologies and systems that accelerate the aging process of test devices while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with a focus on key components such as electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.
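As a simple, hedged illustration of the model-based approach (synthetic numbers, not the paper's data or algorithm), the sketch below fits an assumed exponential capacitance-degradation model to accelerated-aging measurements and extrapolates to a -20 % end-of-life threshold.

```python
# Illustrative sketch: fit a simple empirical degradation model
# C(t) = C0 * exp(-a * t) to capacitance measurements from an accelerated
# aging run, then extrapolate to a -20 % end-of-life threshold to estimate
# remaining useful life.  All numbers are synthetic.
import numpy as np

# Synthetic aging measurements: hours under stress vs. measured capacitance.
hours = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)
cap_uF = np.array([2200, 2165, 2130, 2098, 2066, 2035, 2004], dtype=float)

# Linear least squares on log(C) gives the exponential model parameters.
a, log_c0 = np.polyfit(hours, np.log(cap_uF), 1)
c0 = np.exp(log_c0)                 # fitted pristine capacitance
decay = -a                          # fitted degradation rate (1/hour)

eol_cap = 0.8 * c0                  # common end-of-life criterion: -20 %
t_eol = np.log(c0 / eol_cap) / decay
rul = t_eol - hours[-1]

print(f"fitted C0 = {c0:.0f} uF, decay = {decay:.2e} /h")
print(f"estimated end of life at {t_eol:.0f} h -> RUL of about {rul:.0f} h of stress")
```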
{"title":"Prognostics of Power Electronics, methods and validation experiments","authors":"Chetan S. Kulkarni, J. Celaya, G. Biswas, K. Goebel","doi":"10.1109/AUTEST.2012.6334578","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334578","url":null,"abstract":"Failure of electronic devices is a concern for future electric aircrafts that will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of remaining life of electronic components is of key importance. DC-DC power converters are power electronics systems employed typically as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focuses on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation model with focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor-field-effect-transistor). This paper presents current results on the development of validation methods for prognostics algorithms of power electrolytic capacitors. Particularly, in the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms present difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127227211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test system consolidation
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334551
J. Orlet
Test system consolidation has been going on in the U.S. Department of Defense for quite some time across multiple pieces of support equipment, ranging from Automatic Test Stations to Common O-Level Test Sets to Common pieces of Mechanical Support Equipment. With regard to Automatic Test Equipment/Stations, each branch does have a standard family of test equipment. This is a big shift from the days when each weapon system had its own family. As test equipment and instrumentation have become more capable and more flexible, it seems likely that there really should be even fewer types of test systems. Yet each branch of the DoD has its own family of test equipment, and each must constantly work to enforce usage of the standard equipment and prevent proliferation of new types of test equipment. Still, the same item will have different test equipment for each stage of development and production. Each will have a different test approach, strategy, and implementation. This leads to issues with test repeatability, verticality, and compatibility, which drive up life cycle costs and impede system readiness. This paper will describe the efforts to study the problem and provide the results of the findings from a system integrator's point of view. In addition to an analysis of the instrumentation typically found in a test system, the paper will discuss some of the features of system architectures that can enable consolidation, such as translation tools. It will also discuss some of the impediments to test system consolidation, such as legacy system emulation and compatibility. Finally, it will discuss some of the system requirements that have typically driven systems to different solutions and provide recommendations for test equipment standardization.
{"title":"Test system consolidation","authors":"J. Orlet","doi":"10.1109/AUTEST.2012.6334551","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334551","url":null,"abstract":"Test system consolidation has been going on in the U.S. Department of Defense for quite some time for multiple pieces of support equipment ranging from Automatic Test Stations to Common O-Level Test Sets to Common pieces of Mechanical Support Equipment. With regards to Automatic Test Equipment / Stations, each branch does have a standard family of test equipment. This is a big shift from the days when each weapon system had its own family. As test equipment and instrumentation have become more capable and more flexible, it seems likely that there really should be even fewer types of test systems. Yet each branch of the DoD has its own family of test equipment and they have to constantly work to enforce usage of the standard equipment versus proliferation of new types of test equipment. Still, the same item will have different test equipment for each stage of development and production. Each will have a different test approach, strategy, and implementation. This leads to issues with test repeatability, verticality, and compatibility which drive up life cycle costs and impede system readiness. This paper will describe the efforts to study the problem and provide results of the findings from a system integrator's point of view. In addition to the analysis of the instrumentation typically found in a test system, the paper will discuss some of the features of system architectures that can enable consolidation such as translation tools. It will also discuss some of the impediments to test system consolidation such as legacy system emulation and compatibility. Finally, it will discuss the some of the system requirements that have typically driven systems to different solutions and provide recommendations to test equipment standardization.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122311177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the cost of ATE software development
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334530
G. Scheck
With the new architecture, test programs are standardized in both look and functionality through a shared common interface and common data- and instrument-handling routines. Programming time is greatly reduced by separating out non-test-related functions and through the inherent code reuse. Verification and validation time is also reduced, since testing is only required on the modified components. New ATE software development can now be measured in weeks rather than months. Through its modular design and database/hardware abstraction, the software is highly scalable and flexible. Overall operator time is reduced, either through diminished training needs or through the ability to troubleshoot on the automatic test system. Data is easily accessible from anywhere and can be queried with a multitude of tools, including statistical process control (SPC).
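A minimal sketch of the database/hardware abstraction idea follows; the interface and class names are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch of a hardware-abstraction layer: test code talks to an
# abstract instrument interface, and the concrete driver is chosen from a
# configuration record, so swapping hardware does not require touching the
# test programs.  Names here are hypothetical.
from abc import ABC, abstractmethod

class Dmm(ABC):
    @abstractmethod
    def measure_vdc(self) -> float: ...

class SimulatedDmm(Dmm):
    def measure_vdc(self) -> float:
        return 3.301                       # canned value for offline debug

class ScpiDmm(Dmm):
    def __init__(self, session):
        self.session = session             # e.g. a VISA session object
    def measure_vdc(self) -> float:
        return float(self.session.query("MEAS:VOLT:DC?"))

def build_dmm(config: dict) -> Dmm:
    """Factory driven by a station-configuration record (database row, etc.)."""
    if config.get("simulate", True):
        return SimulatedDmm()
    return ScpiDmm(config["session"])

def test_p3v3_rail(dmm: Dmm) -> bool:
    """Test code expresses intent only; no instrument specifics leak in."""
    return 3.2 <= dmm.measure_vdc() <= 3.4

if __name__ == "__main__":
    dmm = build_dmm({"simulate": True})
    print("P3V3 rail:", "PASS" if test_p3v3_rail(dmm) else "FAIL")
```

Because test programs depend only on the abstract interface, the same test code can run against simulated instruments during development and against real hardware on the station, which is one way verification effort stays confined to the modified components.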
{"title":"Reducing the cost of ATE software development","authors":"G. Scheck","doi":"10.1109/AUTEST.2012.6334530","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334530","url":null,"abstract":"With the new architecture, test programs are standardized both in look and functionality through sharing of a common interface and data and instrument handling routines. Programming time is greatly reduced with the separation of non-test related functions and through the inherent nature of code reuse. Verification and validation time is also reduced since testing is only required on the modified components. New ATE software development can now be measured in weeks versus months. Through its modular design and database/hardware abstraction, the software is highly scalable and flexible. Overall operator time is reduced, either in diminished training or in the ability to troubleshoot on the auto test system. Data is easily accessible from anywhere and can be queried with a multitude of tools, including SPC.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125929198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TPS design and development for CMA program using LM-STAR®
Pub Date: 2012-10-22 | DOI: 10.1109/AUTEST.2012.6334553
S. O'Donnell, P. Anzile
Selex Galileo, a Finmeccanica company, has been tasked by the Italian Air Force to define a new Automatic Test Equipment (ATE) solution for the second level of maintenance, to be used in the Avionic Centre (CMA) for the Eurofighter Program. In particular, this resulted in Test Program Set (TPS) design and development for several Line Replaceable Units (LRUs) of the Eurofighter (EF2000 Block 2) configuration. A further requirement is the opportunity to re-host on the same high-performance, high-reliability ATE some TPSs for the Block 1 configuration, already designed and purchased in the past on another ATE, to improve workload distribution and to maintain and support them for many years beyond their original projected life expectancy. Selex Galileo started the collaboration with Lockheed Martin and finalized the acceptance and delivery of their first LM-STAR® station (Galileo Euro Test Set variant), which met the technical requirements for the development of more than 30 TPSs (considering both new T2 TPSs and re-hosted T1 ones). The new LM-STAR® configuration posed many technological challenges, from both a software and a hardware perspective, that had to be overcome. A limited budget combined with an aggressive schedule presented formidable obstacles. This paper will describe how a project can still maintain cost, schedule, and quality objectives while addressing evolving test requirements. The support of such a complex international program will also be explored. This paper will describe the TPS hardware configuration and, in particular, the New Versatile Panel Interface (NVPI) between LM-STAR® resources and TPS adapters. Where applicable, the same adapter has been utilized for multiple TPSs. The NVPI consists of a single panel interface, transparent with respect to station resources, for all TPSs using different configuration modules (cap adapters). These resources can be routed on the front panel and are accessible through high-pin-density connectors to guarantee a reliable connection test after test. Finally, the re-hosting issues (related to the TestStand/LabWindows CVI and IEEE ATLAS 716/89 software development environments) of several T1 aircraft-configuration TPSs previously designed on another ATE and now coded on the LM-STAR® will be examined. We will also address the Software Downloading Library (SDL), a generic Bus Loader/Verifier (BLVR) designed to transfer and verify the application software (flight code), usually into LRU EEPROM memory.
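In the spirit of the SDL/BLVR description above, here is a generic, hypothetical load-and-verify sketch; the bus API (`write`/`read`), block size, and CRC check are assumptions for illustration and do not represent the LM-STAR® library.

```python
# Generic load-and-verify sketch: the flight-code image is written to the LRU
# in fixed-size blocks, read back and checked block by block, then confirmed
# with an overall CRC before declaring success.  The bus API is hypothetical.
import zlib

BLOCK_SIZE = 256

def load_and_verify(bus, image: bytes, base_addr: int = 0) -> bool:
    """bus is assumed to expose write(addr, data) and read(addr, size)."""
    for offset in range(0, len(image), BLOCK_SIZE):
        block = image[offset:offset + BLOCK_SIZE]
        addr = base_addr + offset
        bus.write(addr, block)                     # program one EEPROM block
        if bus.read(addr, len(block)) != block:    # immediate read-back check
            print(f"verify failed at 0x{addr:06X}")
            return False
    # Final end-to-end check: CRC of the full read-back vs. the source image.
    readback = bus.read(base_addr, len(image))
    return zlib.crc32(readback) == zlib.crc32(image)

class LoopbackBus:
    """Stand-in bus for illustration: stores writes in a byte array."""
    def __init__(self, size): self.mem = bytearray(size)
    def write(self, addr, data): self.mem[addr:addr + len(data)] = data
    def read(self, addr, size): return bytes(self.mem[addr:addr + size])

if __name__ == "__main__":
    image = bytes(range(256)) * 8                  # dummy 2 KB flight image
    print("load OK:", load_and_verify(LoopbackBus(4096), image))
```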
{"title":"TPS design and development for CMA program using LM-STAR®","authors":"S. O'Donnell, P. Anzile","doi":"10.1109/AUTEST.2012.6334553","DOIUrl":"https://doi.org/10.1109/AUTEST.2012.6334553","url":null,"abstract":"Selex Galileo, a Finmeccanica Company, has been instructed by the Italian Air Force to define a new Automatic Test Equipment (ATE) solution for second level of maintenance to be used for the Avionic Centre (CMA) with reference to the Eurofighter Program. In particular, this resulted in Test Program Set (TPS) design and development for several Line Replaceable Units (LRU) of European Fighter (EF2000 Block 2) configuration. A further requirement is to have the same high level performance/reliability ATE opportunity to re-host some TPSs for Block 1 configuration, already designed and purchased in the past using another ATE, for improved work load distribution and to maintain and support them for many years beyond their original projected life expectancy. Selex Galileo started the collaboration with Lockheed Martin and finalized the acceptance and delivery of their first LM-STAR® station (Galileo Euro Test Set variant) that met the technical requirements for the development of more than 30 TPS (considering both new T2 TPS and re-hosted T1 ones). The new LMSTAR® configuration posed many technological challenges from both a software and hardware perspective that had to be overcome. A limited budget combined with an aggressive schedule presented formidable obstacles. This paper will describe how a project can still maintain cost, schedule, and quality objectives while addressing evolving test requirements. The support of such a complex international program will also be explored. This paper will describe the TPS hardware configuration and in particular the New Versatile Panel Interface (NVPI) between LMSTAR® resources and TPS adapters. Where applicable, the same adapter has been utilized for multiple TPS. The NVPI consists of a single panel interface, transparent in respect to station resources, for all TPSs using different configuration modules (cap adapter). These resources can be routed on the front panel and can be accessible through connectors with high pin density to guarantee a reliable connection test after test. Finally, the re-hosting issues (related to TestStand/Lab Windows CVI and IEEE ATLAS 716/89 software development environment) of several TPSs T1 aircraft configuration previously designed on another ATE and now coded on the LM-STAR® will be examined. We will also address the Software Downloading Library (SDL), as a generic Bus Loader/Verifier (BLVR), designed to transfer and to verify the application software (flight code) usually into LRU EEPROM memory.","PeriodicalId":142978,"journal":{"name":"2012 IEEE AUTOTESTCON Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126116246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}