Safety considerations for configuring LXI-based ATE systems when IP addresses become logical addresses
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314028
W. J. Headrick, Kendall Correll, T. Sarfi
LXI has become an increasingly popular instrumentation platform and is used in many ATE system designs. It provides a simplified programming interface, a convenient LAN-based hardware architecture, and the precision synchronization and triggering required by many test applications. When multiple copies of an LXI instrument are used in a system, potential issues can arise if the instruments are swapped during troubleshooting. Technicians must manually reconfigure the instruments or update the system configuration file. Due to time constraints and shift changes, a technician may inadvertently configure the system incorrectly, which can damage the unit under test (UUT), especially if the instruments are power supplies. This paper discusses an automated method for reconfiguring such systems that significantly reduces the risk of damage to the UUT and of hazards to the operator.
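As a rough illustration of the kind of automation the paper argues for, the sketch below rebuilds the mapping from logical instrument roles to LAN addresses by interrogating each instrument's *IDN? serial number and refuses to proceed if any role cannot be resolved. It is a minimal sketch, not the authors' implementation; the IP addresses, serial numbers, role names, and the use of the pyvisa library are assumptions made for illustration.

```python
# Hedged sketch (not the authors' implementation): rebuild the logical-address map
# by identifying each LXI instrument from its *IDN? serial number rather than
# trusting a fixed IP-to-logical-address assignment. IPs, serials and role names
# below are hypothetical placeholders.
import pyvisa

EXPECTED_ROLES = {                 # serial number -> logical role in the test system
    "US12345678": "PS_UUT_28V",    # power supply feeding the UUT
    "US87654321": "PS_FIXTURE_5V",
}

def discover_roles(candidate_ips):
    rm = pyvisa.ResourceManager()
    role_to_resource = {}
    for ip in candidate_ips:
        resource = f"TCPIP0::{ip}::INSTR"
        try:
            inst = rm.open_resource(resource, open_timeout=2000)
            idn = inst.query("*IDN?")           # "<vendor>,<model>,<serial>,<firmware>"
            serial = idn.split(",")[2].strip()
            inst.close()
        except Exception:
            continue                            # nothing answering at this address
        role = EXPECTED_ROLES.get(serial)
        if role:
            role_to_resource[role] = resource
    # Safety check: abort before any output is enabled if a role is unassigned.
    missing = set(EXPECTED_ROLES.values()) - set(role_to_resource)
    if missing:
        raise RuntimeError(f"Unresolved instrument roles: {missing}; refusing to run")
    return role_to_resource

if __name__ == "__main__":
    print(discover_roles(["192.168.1.10", "192.168.1.11"]))
```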
{"title":"Safety considerations for configuring LXI-based ATE systems when IP addresses become logical addresses","authors":"W. J. Headrick, Kendall Correll, T. Sarfi","doi":"10.1109/AUTEST.2009.5314028","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314028","url":null,"abstract":"LXI has become an increasingly popular instrumentation platform that is being used in many ATE system designs. It provides simplified programming interface, convenient LAN-based hardware architecture, and the precision synchronization and triggering required by many test applications. When multiple copies of an LXI instrument are used in a system, potential issues can arise if the instruments are swapped during troubleshooting. Technicians must manually reconfigure the instruments or update the system configuration file. Due to time constraints and shift changes, the technician may inadvertently configure the system incorrectly, which can cause damage to the unit under test (UUT), especially if the instruments are power supplies. This paper will discuss an automated method for reconfiguring systems that significantly reduces the risk of UUT damage or risk to the operator.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133683373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on the TPS development based on SOA
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314085
Xin Zhao, M. Xiao, Yuehong Zhou
Open-architecture test systems are intended to simplify TPS development and offer an effective way to preserve existing TPS investment when migrating from legacy ATE to new test systems. In practice, however, test-application developers find it difficult to achieve the desired performance because of the formality of the interface protocols and the object-level granularity: objects are fine-grained and not sufficiently abstracted from the implementation design. In this paper, Service-Oriented Architecture (SOA) is introduced and adopted to address this challenge. The focus is on a service-oriented test software architecture in which test applications, user interfaces, diagnostic data, and other assets are viewed as services. These services can be mixed and matched to create new, flexible test software. By encapsulating a test application behind capability-based interfaces, the services can be reused, creating new value from existing TPSs.
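The sketch below illustrates what "encapsulating a test application behind a capability-based interface" can look like in practice: the test logic is written against an abstract measurement capability, so it runs unchanged whether the capability is backed by a legacy TPS routine or a new instrument driver. All class and function names are hypothetical; this is not the paper's code.

```python
# Minimal sketch (hypothetical names, not the paper's code): a test step written
# against a capability-based service interface can be reused whether the service
# is backed by a legacy TPS routine or a new instrument driver.
from abc import ABC, abstractmethod

class DCVoltageMeasurementService(ABC):
    """Capability: measure a DC voltage on a named test point."""
    @abstractmethod
    def measure(self, test_point: str) -> float: ...

class LegacyTpsAdapter(DCVoltageMeasurementService):
    def measure(self, test_point: str) -> float:
        # Would delegate to the existing legacy TPS routine here.
        return 4.98   # placeholder result

class NewDmmDriver(DCVoltageMeasurementService):
    def measure(self, test_point: str) -> float:
        # Would talk to the new tester's DMM here.
        return 5.01   # placeholder result

def verify_5v_rail(svc: DCVoltageMeasurementService) -> bool:
    """Test logic depends only on the capability, so it survives rehosting."""
    return 4.75 <= svc.measure("TP_5V") <= 5.25

print(verify_5v_rail(LegacyTpsAdapter()), verify_5v_rail(NewDmmDriver()))
```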
{"title":"Research on the TPS development based on SOA","authors":"Xin Zhao, M. Xiao, Yuehong Zhou","doi":"10.1109/AUTEST.2009.5314085","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314085","url":null,"abstract":"Open-architecture test systems were supposed to simplify TPS development, which is an effective way to preserve existing TPS investment from legacy ATE to new test systems. However, when developing test applications, we find it difficult to get perfect performance because of the formality of the interface protocols and Object-levels of granularity. Objects are fine grained and not sufficiently abstracted away from the implementation design. In this paper, Service Oriented Architecture (SOA) is introduced and adopted to solve this challenge. The focus of this paper is on consideration for service-oriented test software architecture, within which, test applications, user interface, diagnostic data and other assets are viewed as services. Each of these services can be mixed and matched to create new, flexible test software. By encapsulating a test application behind capability-based interfaces, these services can be reused and create new value from existing TPSs.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130718148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptation of thermal testing for real-time testing in both the factory and depot
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314055
D. Lowenstein
Since its inception, thermal testing has always been an offline process. It has therefore confounded the best industrial engineers' efforts to optimize and balance flows, added undue complexity for test engineers developing real-time tests, and made testing extremely expensive at low volumes, especially in a depot setting. By discarding all previous notions of how and what thermal testing should be, a new design and approach was invented. Using the Toyota Production System model, which combines Six Sigma and Lean Manufacturing, a new thermal process has been designed and implemented. It enables real-time testing under temperature in quantities of one, allows debugging under temperature, and dramatically reduces cycle times, energy costs, and floor space. This paper walks through the strategy, implementation, and advantages of this new approach.
{"title":"Adaptation of thermal testing for real - time testing in both the factory and depot","authors":"D. Lowenstein","doi":"10.1109/AUTEST.2009.5314055","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314055","url":null,"abstract":"Since the start of thermal testing it has always been an offline process. Therefore it has baffled the best industrial engineers for optimizing and balancing flows, added undo complexity for the test engineers to develop real-time tests and made it extremely expensive for low volumes especially in a depot setting. By throwing out all of our previous notions of how and what thermal testing should be, a new design and approach was invented. Using the Toyota Production System model of combining 6sigma and Lean Manufacturing, a new thermal process has been design and implemented. This has allowed the ability for real time testing under temperature in units of one, ability to debug under temperature, and dramatically reduced cycle times, energy and floor space costs. This paper will walk through the strategy, implementation and advantages of using this new approach.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"55 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116568712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lean approach to designing for software testability
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314039
A. Alwardt, Nathan Mikeska, Richard J. Pandorf, Philip R. Tarpley
It is common practice for military hardware to be designed for testability; however, the testability of software is rarely considered. When software testability is addressed, the resultant design often does not readily support full-coverage automated testing. Since software products must be tested to verify that requirements are met, it only makes sense to consider software testability from day one of a project. Once the decision has been made to embrace the concept of designing testable software, there are best practices that enable a lean software development process. This paper discusses 1) designing for software testability; 2) the automated software regression testing approach; 3) the correlation to Extreme Programming (XP); 4) Lean 123 costs and benefits; 5) an example of how to create an automated software regression test; and 6) the applicability of this approach to all software efforts.
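The paper presents its own example of creating an automated software regression test; the sketch below is an independent, minimal illustration of the idea: a test that pins down behavior which once failed so the defect cannot silently reappear. The function and values are hypothetical.

```python
# Illustrative sketch only (the paper's own example is not reproduced here):
# a regression test locks in behavior that once failed, so the bug cannot
# silently return. Function and values are hypothetical.
def scale_measurement(raw_counts: int, counts_per_volt: float = 409.6) -> float:
    """Convert ADC counts to volts; a past defect used the wrong constant."""
    return raw_counts / counts_per_volt

def test_scale_measurement_regression():
    # Expected value captured from the corrected build; guards the old conversion bug.
    assert abs(scale_measurement(2048) - 5.0) < 1e-6

if __name__ == "__main__":
    test_scale_measurement_regression()
    print("regression test passed")
```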
{"title":"A lean approach to designing for software testability","authors":"A. Alwardt, Nathan Mikeska, Richard J. Pandorf, Philip R. Tarpley","doi":"10.1109/AUTEST.2009.5314039","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314039","url":null,"abstract":"It is common practice for military hardware to be designed for testability; however, the testability of software is rarely considered. When software testability is addressed, the resultant design often does not readily support full coverage automated testing. Since software products must be tested to verify requirements are met, it only makes sense to consider software testability from day one of a project. Once the decision has been made to embrace the concept of designing testable software, there are best practices that enable a lean software development process. This paper will discuss 1) designing for software testability; 2) the automated software regression testing approach; 3) the correlation to Extreme Programming (XP); 4) Lean 123 costs and benefits; 5) an example of how to create an automated software regression test; and 6) the applicability of this approach to all software efforts.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132832930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Deep Serial Memory for large block data transfers
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314081
T. Epstein, Stephen Allen
This paper explains some of the advantages of using an instrument with Deep Serial Memory in digital test applications that involve large block data transfers. Deep Serial Memory can accommodate test applications that use large block transfers of data, such as read-only memory (ROM) testing, boundary scan, and communications that involve reading or writing megawords of data; conventional test pattern memory can be exhausted by these applications. A test case is presented that simplifies the creation and support of ROM tests. The techniques applied in this example show how the transfer of large blocks of data can be easier and more practical with Deep Serial Memory.
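The sketch below conveys the underlying idea in software terms: stream the expected ROM image block by block and compare it with data read back from the device, so the full image never has to fit in conventional test pattern memory. The read_rom_block function is a hypothetical stand-in for the instrument operation; this is not the instrument's actual API.

```python
# Hedged sketch of the idea, not the instrument's API: stream the expected ROM
# image in blocks and compare it against data read back from the UUT, instead of
# compiling the whole image into conventional test-pattern memory.
def read_rom_block(address: int, length: int) -> bytes:
    # Placeholder: would drive the ROM's address/data bus through the digital pins.
    return bytes((address + i) & 0xFF for i in range(length))

def verify_rom(image_path: str, block_size: int = 4096) -> bool:
    address = 0
    with open(image_path, "rb") as image:
        while True:
            expected = image.read(block_size)    # only one block in memory at a time
            if not expected:
                return True                      # whole image matched
            actual = read_rom_block(address, len(expected))
            if actual != expected:
                print(f"Mismatch in block at 0x{address:06X}")
                return False
            address += len(expected)
```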
{"title":"Using Deep Serial Memory for large block data transfers","authors":"T. Epstein, Stephen Allen","doi":"10.1109/AUTEST.2009.5314081","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314081","url":null,"abstract":"This paper will explain some of the advantages of utilizing an instrument with Deep Serial Memory in digital test applications that involve large block data transfer. Deep Serial Memory can accommodate test applications that use large block transfers of data, such as read-only memory (ROM) testing, boundary scan and communications that involve read/write of megawords of data. Conventional test pattern memory can be exhausted by these applications. A test case will be presented to simplify the creation and support of tests for a Read Only Memory (ROM). The techniques applied in this example show how the transfer of large blocks of data can be easier and more practical with Deep Serial Memory.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132936138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicle embedded health monitoring and diagnostic system
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314033
M. Zachos, P. Srinivasa
This paper covers the background, current spiral developments, rollout, and sustainment of the US Army's newest At-Platform Automatic Test Systems (APATS) equipment for Tactical Wheeled Vehicles (TWVs). The equipment, called the SWICE (Smart Wireless Internal Combustion Engine) system, was developed for vehicle diagnostics in at-platform and embedded applications, including prognostics.
{"title":"Vehicle embedded health monitoring and diagnostic system","authors":"M. Zachos, P. Srinivasa","doi":"10.1109/AUTEST.2009.5314033","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314033","url":null,"abstract":"This paper will cover the background, current spiral developments, roll out, and sustainment of the US Army's newest At-Platform Automatic Test Systems (APATS) equipment for TWVs (Tactical Wheeled Vehicles). The equipment, called the SWICE (Smart Wireless Internal Combustion Engine) system, was developed for vehicle diagnostics systems in at-platform and embedded applications, including prognostics.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124172460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standard Diagnostic Services for the ATS framework
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314012
J. Sheppard, Stephyn G. W. Butcher, P. Donnelly
The US Navy has been supporting the demonstration of several IEEE standards with the intent of implementing these standards for future automatic test system procurement. In this paper, we discuss the second phase of a demonstration focusing on the IEEE P1232 AI-ESTATE standard. This standard specifies exchange formats and service interfaces for diagnostic reasoners. The first phase successfully demonstrated the ability to exchange diagnostic models through semantically enriched XML files. The second phase is focusing on the services and has been implemented using a web-based, service-oriented architecture. Here, we discuss implementation issues and preliminary results.
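For a sense of what a web-hosted diagnostic reasoner service enables, the sketch below shows a generic client loop: report test outcomes, then retrieve updated candidate diagnoses and a recommended next test. The endpoint names and payloads are hypothetical illustrations and are not the AI-ESTATE service definitions.

```python
# Generic illustration only; endpoint names and payloads are hypothetical and are
# NOT the AI-ESTATE service definitions. It shows the kind of interaction a
# web-hosted diagnostic reasoner service enables: report test outcomes, then ask
# for updated candidate diagnoses and the next recommended test.
import requests

BASE = "http://reasoner.example/api"   # hypothetical service address

def run_session(outcomes):
    session = requests.post(f"{BASE}/sessions", json={"model": "radar_rx"}).json()["id"]
    for test_name, result in outcomes:
        requests.post(f"{BASE}/sessions/{session}/outcomes",
                      json={"test": test_name, "result": result})
    state = requests.get(f"{BASE}/sessions/{session}").json()
    return state.get("candidates"), state.get("next_test")

# Example (requires a running service):
# print(run_session([("PS_VOLTAGE", "fail"), ("LO_LEVEL", "fail")]))
```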
{"title":"Standard Diagnostic Services for the ATS framework","authors":"J. Sheppard, Stephyn G. W. Butcher, P. Donnelly","doi":"10.1109/AUTEST.2009.5314012","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314012","url":null,"abstract":"The US Navy has been supporting the demonstration of several IEEE standards with the intent of implementing these standards for future automatic test system procurement. In this paper, we discuss the second phase of a demonstration focusing on the IEEE P1232 AI-ESTATE standard. This standard specifies exchange formats and service interfaces for diagnostic reasoners. The first phase successfully demonstrated the ability to exchange diagnostic models through semantically enriched XML files. The second phase is focusing on the services and has been implemented using a web-based, service-oriented architecture. Here, we discuss implementation issues and preliminary results.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115928626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Area efficient vector multiplication for IDDT test calibration
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314026
M. Itskovich
This paper proposes an area-efficient signal-processing architecture to perform IDDT test calibration through vector multiplication. The design follows a field-programmable array organization and capitalizes on the unique behavior of binary-encoded signals to implement compact multiply elements. Vectors of 8-bit values were multiplied at a rate of 300 kHz, independent of vector size.
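As a purely software illustration of a compact multiply element, the sketch below models an 8-bit shift-and-add multiplier and applies it element-wise to calibrate a vector of IDDT samples. The structure and values are assumptions made for illustration; the paper's hardware architecture is not reproduced here.

```python
# Illustrative software model only (not the paper's hardware): an 8-bit
# shift-and-add multiplier, the kind of compact multiply element a programmable-
# logic design might use, applied element-wise to calibrate IDDT samples.
def mul8_shift_add(a: int, b: int) -> int:
    """Multiply two 8-bit unsigned values by conditional shift-and-add."""
    assert 0 <= a < 256 and 0 <= b < 256
    acc = 0
    for bit in range(8):
        if (b >> bit) & 1:          # examine one bit of the binary-encoded operand
            acc += a << bit         # add the correspondingly shifted multiplicand
    return acc

def calibrate(samples, gains):
    """Element-wise product of raw IDDT samples and 8-bit calibration gains."""
    return [mul8_shift_add(s, g) for s, g in zip(samples, gains)]

print(calibrate([17, 200, 45], [3, 2, 10]))   # -> [51, 400, 450]
```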
{"title":"Area efficient vector multiplication for IDDT test calibration","authors":"M. Itskovich","doi":"10.1109/AUTEST.2009.5314026","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314026","url":null,"abstract":"This paper proposes an area efficient signal processing architecture to perform Iddt test calibration through vector multiplication. The design follows the Field Programmable Array organization, and capitalizes on the unique behavior of binary encoded signals to implement compact multiply elements. Vectors with 8 bit values were multiplied at a rate of 300kHz, independently of vector size.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125325956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Networked autonomous smart sensors and dynamic reconfigurable application development tool for online monitoring systems
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314047
Jiangbin Zhao, Jiankang K. Wu, T. Shi, Jianping Xuan
An autonomous smart sensor is a highly integrated, turn-key device that operates independently and manages its own life cycle. Networked sensors exchange information through protocols; we designed an application-layer protocol named AgileSN that makes sensor nodes interoperable and interchangeable. Both the AgileSN sensor node itself and the captured data the protocol carries are self-descriptive. With these capabilities, sensor nodes can be automatically detected or searched by interested peers or sink nodes, and sensor data can be parsed dynamically on the fly without human intervention, making autonomous sensor nodes plug-and-play in the networked world. As with the Web, a well-accepted network protocol makes networked sensors interchangeable and interoperable, and universal sensor tools can then process the sensed data. We have designed a component-based software tool for sensor application development. In this software environment, all application functionality is realized through software components; each component performs a specific task, such as reading data from networked sensors, data processing, visual presentation, network or local file I/O, or HMI. Sensor data is processed sequentially by several components along a data flow. A component can operate in its own thread, and carefully designed components allow the whole data-processing flow to operate in pipeline mode, greatly improving data-processing throughput. Components and data routes can be created and destroyed at runtime, so the application system's functionality can be reconfigured. We apply Petri net tools to model the components and the whole application system, investigate their function and performance, present a collision-free, high-performance soft bus for inter-component data transfer, investigate the hazard phenomenon that exists in multi-input components, and propose technical means to eliminate it.
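A much-simplified sketch of the component/pipeline idea appears below: each component runs in its own thread and forwards items over a queue (a stand-in for the soft bus), so acquisition, processing, and presentation overlap. The component names and data are hypothetical; this is not the AgileSN tool itself.

```python
# Simplified sketch of the component/pipeline idea (hypothetical, not the AgileSN
# tool): each component runs in its own thread and forwards items over a queue,
# so the stages of the data flow overlap in pipeline fashion.
import threading
import queue

SENTINEL = None

def component(name, work, inbox, outbox=None):
    """Run `work` on every item from `inbox` in a dedicated thread."""
    def run():
        while True:
            item = inbox.get()
            if item is SENTINEL:
                if outbox is not None:
                    outbox.put(SENTINEL)       # let the next stage shut down too
                break
            result = work(item)
            if outbox is not None:
                outbox.put(result)
            else:
                print(f"{name}: {result}")
    thread = threading.Thread(target=run, name=name)
    thread.start()
    return thread

q_raw, q_scaled = queue.Queue(), queue.Queue()
threads = [
    component("scale", lambda counts: counts * 0.01, q_raw, q_scaled),  # counts -> volts
    component("present", lambda volts: f"{volts:.2f} V", q_scaled),
]
for sample in (120, 4980, 333):        # stand-in for data read from networked sensors
    q_raw.put(sample)
q_raw.put(SENTINEL)
for t in threads:
    t.join()
```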
{"title":"Networked autonomous smart sensors and dynamic reconfigurable application development tool for online monitoring systems","authors":"Jiangbin Zhao, Jiankang K. Wu, T. Shi, Jianping Xuan","doi":"10.1109/AUTEST.2009.5314047","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314047","url":null,"abstract":"Autonomous smart sensor is a highly integrated turn-key device, operates independently and owns itself life cycle. Networked sensors exchange information with protocols; we designed an application-layer protocol named AgileSN making sensor nodes interoperable and interchangeable. AgileSN sensor node itself and captured data that protocol carried are self-descriptive, with these smart capabilities, sensor nodes can be automatically detected or searched by interested peers or sink nodes, and sensor data can be parsed dynamically on the fly without human intervention, made autonomous sensor nodes plug and play in the network world. Like WEB system, well accepted network protocol make networked sensors interchangeable and interoperable, universal sensor tools are possible to process sensed data. We have designed a component-based software tool for sensor application development. In this software environment, all application functionalities are realized through software components, each component is designed to finish a special task, like reading data from networked sensors, data processing, visual presentation, network or local file I/O, HMI, etc. sensor data is processed sequentially by several components through data flow. A component can operate in its own thread, carefully designed component makes the whole data processing flow operates in pipeline mode, greatly improve data processing throughput. Components and data routes can be created and destroyed at runtime, so the application system functionality is reconfigured. We apply the Petri Net tool to model components and the whole application system, investigate their function and performance, present a non-collision high-performance soft-bus for inter-component data transfer, investigate the hazard phenomenon that exists in a multi-input component, and propose technical means to eliminate it.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125215316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hierarchical framework for fault propagation analysis in complex systems
Pub Date: 2009-11-06 | DOI: 10.1109/AUTEST.2009.5314003
Manzar Abbas, G. Vachtsevanos
In complex systems, there are only a few critical failure modes. Prognostic models focus on predicting the evolution of those critical faults, assuming that the other subsystems of the same system are performing according to their design specifications. In practice, however, all subsystems undergo deterioration that may accelerate the time evolution of the critical fault mode. This paper analyzes this aspect of the failure prognostic problem, i.e., the interaction between fault modes in different subsystems. The application domain is an aero-propulsion system of the turbofan type. Creep in the high-pressure turbine blades is one of the most critical failure modes of aircraft engines. The effects of health deterioration in the low-pressure and high-pressure compressors on creep damage of the high-pressure turbine blades are investigated and modeled.
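To make the interaction concrete, the toy model below couples a gradual compressor efficiency loss to turbine inlet temperature and accumulates creep damage with an Arrhenius-type rate, so a degrading compressor shortens the predicted blade life. All constants and functional forms are assumptions chosen for illustration and are not the paper's model.

```python
# Illustrative toy model only (constants and functional forms are assumptions,
# not the paper's): compressor efficiency loss raises turbine inlet temperature,
# and an Arrhenius-type creep-damage rate then accumulates faster, showing how
# degradation in one subsystem accelerates a critical fault mode in another.
import math

def turbine_inlet_temp(base_temp_K, compressor_eff_loss):
    # Assumption: each 1% (0.01) efficiency loss adds roughly 8 K.
    return base_temp_K + 800.0 * compressor_eff_loss

def creep_damage_rate(temp_K, A=1e9, Q_over_R=43500.0):
    # Arrhenius-type rate: small temperature increases give large rate increases.
    return A * math.exp(-Q_over_R / temp_K)

def cycles_to_failure(eff_loss_per_cycle, base_temp_K=1450.0):
    damage, eff_loss, cycles = 0.0, 0.0, 0
    while damage < 1.0:                      # failure when accumulated damage reaches 1
        cycles += 1
        eff_loss += eff_loss_per_cycle
        damage += creep_damage_rate(turbine_inlet_temp(base_temp_K, eff_loss))
    return cycles

# A healthy compressor vs. one losing 0.002% efficiency per cycle:
print(cycles_to_failure(0.0), cycles_to_failure(2e-5))
```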
{"title":"A hierarchical framework for fault propagation analysis in complex systems","authors":"Manzar Abbas, G. Vachtsevanos","doi":"10.1109/AUTEST.2009.5314003","DOIUrl":"https://doi.org/10.1109/AUTEST.2009.5314003","url":null,"abstract":"In complex systems, there are few critical failure modes. Prognostic models are focused at predicting the evolution of those critical faults, assuming that other subsystems in the same system are performing according to their design specifications. In practice, however, all the subsystems are undergoing deterioration that might accelerate the time evolution of the critical fault mode. This paper aims at analyzing this aspect, i.e. interaction between different fault modes in various subsystems, of the failure prognostic problem. The application domain focuses on an aero propulsion system of the turbofan type. Creep in the high-pressure turbine blade is one of the most critical failure modes of aircraft engines. The effects of health deterioration of low-pressure compressor and high-pressure compressor on creep damage of high-pressure turbine blades are investigated and modeled.","PeriodicalId":187421,"journal":{"name":"2009 IEEE AUTOTESTCON","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128356668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}