A Non Destructive Reflectometry Based Method for the Location and Characterization of Incipient Faults in Complex Unknown Wire Networks
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532500
M. Kafal, Fatme Mustapha, Wafa Ben Hassen, J. Benoit
During the last decade, vast efforts have been invested in research and industry to detect soft, noncritical faults in wiring networks. Although time domain reflectometry (TDR) based methods have taken center stage among such techniques, the capability of characterizing the located faults has remained out of reach. This capability is important because it can enable preventive maintenance well before a fault deteriorates to a critical, dangerous stage. An assessment of the fault's condition becomes possible, thus maximizing system functionality and safety while minimizing out-of-service time. In this paper, we propose an approach based on the tenets of TDR and post-processing techniques, namely baselining and optimization-based algorithms, to detect, locate, and characterize soft faults embedded in complex networks. More importantly, this is accomplished using a single testing port of a totally unknown network whose extremities are kept connected to their loads. Numerical as well as practical experimental results are employed to validate the efficiency of the proposed approach.
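As a rough illustration of the baselining step (a minimal sketch under assumed parameters, not the authors' algorithm): a soft fault produces only a faint echo, so a reflectogram of the healthy network is subtracted from the measured one and the strongest residual peak is mapped to a distance along the cable. The sampling rate and propagation velocity below are placeholder values.

```python
import numpy as np

FS = 5e9   # assumed TDR acquisition sampling rate [Hz]
VP = 2e8   # assumed signal propagation velocity in the cable [m/s]

def locate_soft_fault(baseline: np.ndarray, measured: np.ndarray):
    """Return the estimated fault distance and the residual echo amplitude."""
    residual = measured - baseline           # baselining removes network clutter
    idx = int(np.argmax(np.abs(residual)))   # strongest differential echo
    delay = idx / FS                         # round-trip time to the fault
    distance = VP * delay / 2.0              # one-way distance along the cable
    return distance, float(residual[idx])
```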
{"title":"A Non Destructive Reflectometry Based Method for the Location and Characterization of Incipient Faults in Complex Unknown Wire Networks","authors":"M. Kafal, Fatme Mustapha, Wafa Ben Hassen, J. Benoit","doi":"10.1109/AUTEST.2018.8532500","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532500","url":null,"abstract":"During the last decade, vast efforts have been invested in research and industry to detect soft noncritical faults in wiring networks. Although time domain reflectometry based methods (TDR) have been the center stage of such techniques, the capability of characterizing the located faults was still out of reach. In fact, this is so important as it can potentially enable preventive maintenance well before the fault's deterioration to critical dangerous stages. An assessment of the fault's situation becomes possible thus maximizing the system functionality and safety while minimizing the out-of-service time. In this paper, we will propose an approach based on the tenets of TDR and post-processing techniques, namely baselining and optimization based algorithms, to detect, locate and characterize soft faults embedded in complex networks. More importantly, this will be accomplished using a single testing port of a totally unknown network whose extremities are kept connected to their loads. Numerical as well as practical experimental results will be employed to validate the efficiency of the proposed approach.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124835552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VICTORY: A New Approach to Automated Vehicle Testing
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532535
T. Thompson, Kase J. Saylor
Military ground vehicles are complex systems of systems involving Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance/Electronic Warfare (C4ISR/EW) and vehicular platform components, with an ever-increasing demand for more capability, increased survivability, and expedient acquisition timelines. A current effort within the U.S. Army is the Vehicular Integration for C4ISR/EW Interoperability (VICTORY) initiative. VICTORY provides standard, on-the-wire network interfaces for C4ISR/EW and platform systems and sensors. By networking platform equipment, it enables automated testing and logistics data gathering, thereby offering key pieces of vehicle data to applications on the vehicle. The development of and adherence to open, well-defined, and accepted standards on military vehicles is key to an automated testing capability. By implementing open systems architectures (OSAs) as the interoperability layer upon which systems and sensors are connected, generalized test and evaluation methodologies can be developed and deployed. Leveraging modular, open system architectures and standard specifications like VICTORY for automated vehicle testing may provide opportunities for reduced system interconnect complexity, increased testing capabilities, and the possibility of more meaningful, tightly coupled, data-rich test results. OSAs provide the framework for building generalized test sets, offering the opportunity to share a standard in-vehicle network environment upon which different applications can tailor specific test solutions.
{"title":"VICTORY: A New Approach to Automated Vehicle Testing","authors":"T. Thompson, Kase J. Saylor","doi":"10.1109/AUTEST.2018.8532535","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532535","url":null,"abstract":"Military ground vehicles are complex systems of systems involving Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance/Electronic Warfare (C4ISR/EW) and vehicular platform components, with an ever-increasing demand for more capability, increased survivability and expedient acquisition timelines. A current effort within the U.S. Army is the Vehicular Integration for C4ISR/EW Interoperability (VICTORY) initiative. VICTORY provides standard, on-the-wire network interfaces for C4ISR/EW and platform systems and sensors. VICTORY provides the ability to network platform equipment and enables automated testing and logistics data gathering, thereby offering key pieces of vehicle data to applications on the vehicle. The development of and adherence to open, well-defined, and accepted standards on military vehicles is key to an automated testing capability. By implementing open systems architectures (OSAs) as the interoperability layer upon which systems and sensors are connected, generalized test and evaluation methodologies can be developed and deployed. Leveraging modular, open system architectures and standard specifications like VICTORY for automated vehicle testing may provide opportunities for reduced system interconnect complexity, increased testing capabilities, and possibility for more meaningful, tightly coupled, data-rich test results. OSAs provide the framework for building generalized test sets, offering the opportunity to share a standard in-vehicle network environment upon which different applications can tailor specific test solutions.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125174712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Regression Testing of Software for the Consolidated Automated Support System
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532527
C. Sparr, R. A. Fox, Yun B. Song
The use of Commercial-Off-The-Shelf (COTS) operating systems in newer generations of Automatic Test Equipment (ATE) has introduced challenges that did not exist with legacy ATE. Unfortunately, COTS instruments and ATE operating systems do not have well-documented test sequence execution times. COTS operating systems also require frequent updates due to cyber security concerns, optimization, and obsolescence. These updates, in turn, can affect a Test Program Set's (TPS's) test sequence execution time and, in the worst cases, generate errors. During initial TPS development, the test engineer accounts for any instrument and operating system latency during the TPS integration phase. Because of changes in this latency, the TPS must be re-certified whenever a new operating system update is installed, prior to releasing it to the fleet. This requires maintainers to ensure the integrity of the TPS with extensive regression testing and re-integration. For the US Navy's Consolidated Automated Support System (CASS) family of testers, which supports over 2000 unique avionics components, this is a very expensive and labor-intensive effort. Due to the complexity of the TPSs, a highly skilled engineering team is needed to correct test failures that occur during regression testing. As legacy CASS approaches sundown and is replaced by newer versions of CASS, this regression testing effort will increase significantly. A newer, more automated, and less labor-intensive process for regression testing needs to be developed. This paper highlights the statistical analysis of TPS log data from the CASS family of testers, focusing on test sequence execution time in order to reduce the cycle time for regression testing of new software releases to the fleet. Driven by the conclusions of the analysis, an automated tool will be developed to allow software engineers to adjust timing in the test executive in order to minimize the labor hours needed for testing. By reducing the labor needed to certify TPSs, maintenance costs can be optimized to better serve the fleet and Depot customers.
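A hedged sketch of the kind of log analysis the abstract describes: compare per-step execution times recorded before and after an operating system update and flag steps whose timing shifted beyond normal spread. The field layout and the 3-sigma threshold are assumptions for illustration, not the authors' tool.

```python
import statistics

def flag_timing_shifts(baseline: dict[str, list[float]],
                       current: dict[str, list[float]],
                       n_sigmas: float = 3.0) -> list[str]:
    """Return test steps whose mean execution time moved beyond the noise."""
    flagged = []
    for step, times in baseline.items():
        mu = statistics.mean(times)
        # guard against a single sample or zero spread in the baseline log
        sigma = statistics.stdev(times) if len(times) > 1 else 1e-9
        new_mu = statistics.mean(current.get(step, times))
        if abs(new_mu - mu) > n_sigmas * max(sigma, 1e-9):
            flagged.append(step)   # candidate for timing adjustment / re-integration
    return flagged
```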
{"title":"Optimizing Regression Testing of Software for the Consolidated Automated Support System","authors":"C. Sparr, R. A. Fox, Yun B. Song","doi":"10.1109/AUTEST.2018.8532527","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532527","url":null,"abstract":"The use of Commercial-Off-The-Shelf (COTS) operating systems in newer generations of Automatic Test Equipment (ATE) has introduced challenges that did not exist with legacy ATE. Unfortunately, COTS instruments and ATE operating systems do not have well documented test sequence execution time. COTS operating systems also require frequent updates due to cyber security concerns, optimization, and obsolescence. These updates, in turn, can affect a Test Program Sets' (TPSs) test sequence execution time and in the worst cases, generate errors. During initial TPS development, the test engineer accounts for any instrument and operating system latency during the TPS integration phase. Because of changes in this latency, the TPS will need to be re-certified whenever a new operating system update is installed prior to releasing it to fleet. This requires maintainers to ensure the integrity of the TPS with extensive regression testing and performing re-integration. For the US Navy's Consolidated Automated Support System (CASS) family of testers, which supports over 2000 unique avionics components, this is a very expensive and labor-intensive effort. Due to the complexity of the TPSs, a highly skilled engineering team is needed to correct test failures that occur during regression testing. As legacy CASS approaches sundown, and is replaced by newer versions of CASS, this regression testing effort will increase significantly. A newer, more automated, and less labor intensive process for regression testing needs to be developed. This paper will highlight the statistical analysis of TPS log data from the CASS family of testers and focus on the test sequence execution time in order to reduce cycle time for regression testing of new software releases to the fleet. Driven by the conclusions of the analysis, an automated tool will be developed to allow software engineers to adjust timing in the test executive in order to minimize the labor hours needed for testing. By reducing the labor needed to certify TPSs, maintenance costs can be optimized to better serve the fleet and Depot customers.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130538197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Signals Intelligence Approach to Automated Assessment of Instrument Capabilities
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532509
R. G. Wright, L. Kirkland
This paper describes a novel approach using machine learning and artificial intelligence techniques to analyze, describe, and assess stimulus and sensor signal characteristics, creating a robust and comprehensive description of Automatic Test Equipment (ATE) instrument capabilities. The approach yields a machine language representation that provides a more thorough and accurate assessment of ATE stimulus and sensor capabilities; it supports digital, analog, and radio frequency (RF) signals and is especially useful for complex radar, sonar, infrared, and other signals where English and natural language descriptions are difficult or impossible to construct. This is accomplished within the structure of IEEE Std 1641-2010, Signal and Test Definition, with extensions proposed to support machine language renderings of signal descriptions. The approach facilitates the use of generic and commercial automated tools and enhances the possibility of interoperability of tools and test programs across DoD ATE.
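As a hedged illustration of what a machine-readable signal description might contain (the feature set and function below are assumptions, not the paper's representation or IEEE Std 1641 syntax), a captured stimulus can be reduced to a few numeric attributes suitable for automated capability matching:

```python
import numpy as np

def describe_signal(x: np.ndarray, fs: float) -> dict:
    """Summarize a sampled signal as machine-readable capability attributes."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = spectrum ** 2
    bw_mask = power > 0.5 * power.max()     # rough -3 dB occupancy
    return {
        "peak_freq_hz": float(freqs[int(np.argmax(spectrum))]),
        "occupied_bw_hz": float(freqs[bw_mask].max() - freqs[bw_mask].min()),
        "peak_amplitude": float(np.max(np.abs(x))),
        "rms": float(np.sqrt(np.mean(x ** 2))),
    }
```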
{"title":"A Signals Intelligence Approach to Automated Assessment of Instrument Capabilities","authors":"R. G. Wright, L. Kirkland","doi":"10.1109/AUTEST.2018.8532509","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532509","url":null,"abstract":"This paper describes a novel approach using machine learning and artificial intelligence techniques to analyze, describe and assess stimulus and sensor signal characteristics to create a robust and comprehensive description of Automatic Test Equipment (ATE) instrument capabilities. This approach results in a machine language representation providing a more thorough and accurate assessment of ATE stimulus and sensor capabilities that supports digital, analog, and radio frequency (RF) signals and is especially useful for complex RADAR, SONAR, Infrared and other signals where English and natural language descriptions are difficult or impossible to construct. This is accomplished within the structure of IEEE-Std 1641–2010, Signal and Test Definition, with extensions proposed to support machine language renderings of signal descriptions. This approach facilitates use of generic and commercial automated tools and enhances the possibility for interoperability of tools and test programs across DoD ATE.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116501146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Approach to TPS Rehost Using a Modular Token Mapping System
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532545
R. Albertson, C. Smith, T. Carlisle, Phuong-Lan Nguyen, C. Stewart, Jarrod Haning, Viet-Cuong Nguyen, Geoffrey Dolinger
Automatic Test Equipment (ATE) obsolescence drives a requirement for rehosting Test Program Sets (TPSs). However, because programming languages vary widely, the target language of the new ATE platform in most cases differs from the legacy language. This paper discusses a flexible, modular software approach for TPS rehost efforts. The new approach converts code while also allowing the automated creation of additional output products (e.g., Test Requirements Document (TRD), Automatic Test Markup Language (ATML) files, the fault universe, and ATE instrument requirements). Furthermore, the modular design and platform-nonspecific intermediate data structure significantly improve code reusability for rehosting between different ATE platforms. Finally, as a case study, this paper discusses the benefits achieved using this approach.
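A toy sketch of a token-mapping pipeline of the kind the abstract describes: legacy source is tokenized, lifted into a language-neutral intermediate form, and re-emitted in the target language. The token names and both mapping tables are invented purely for illustration.

```python
# legacy token -> intermediate representation (IR)
LEGACY_TO_IR = {"MEASURE": "measure", "VOLTAGE": "dc_volts", "AT": "pin"}
# IR -> target-language fragments
IR_TO_TARGET = {"measure": "Measure(", "dc_volts": "Signal.DC_VOLTS, ",
                "pin": "pin="}

def rehost_statement(legacy: str) -> str:
    """Convert one legacy test statement via the intermediate representation."""
    ir = [LEGACY_TO_IR.get(tok, tok) for tok in legacy.split()]   # lift to IR
    out = "".join(IR_TO_TARGET.get(tok, tok) for tok in ir)       # lower to target
    return out + ")"

print(rehost_statement("MEASURE VOLTAGE AT J1-17"))
# -> Measure(Signal.DC_VOLTS, pin=J1-17)
```

Because the IR is decoupled from both ends, a second target platform only needs a new IR-to-target table, which is the reusability argument the paper makes.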
{"title":"A New Approach to TPS Rehost Using a Modular Token Mapping System","authors":"R. Albertson, C. Smith, T. Carlisle, Phuong-Lan Nguyen, C. Stewart, Jarrod Haning, Viet-Cuong Nguyen, Geoffrey Dolinger","doi":"10.1109/AUTEST.2018.8532545","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532545","url":null,"abstract":"Automatic Test Equipment (ATE) obsolescence drives a requirement for rehosting Test Program Sets (TPSs). However, due to the many variations of programming languages, in most cases, the target language of the new ATE platform differs from the legacy language. This paper discusses a flexible, modular software approach for TPS rehost efforts. The new approach can be used to convert code while also allowing for the automated creation of additional output products (e.g., Test Requirements Document (TRD), Automatic Test Markup Language (ATML) files, fault universe, and ATE instrument requirements). Furthermore, the modular design and nonspecific, intermediate data structure significantly improves code reusability for rehosting between different ATE platforms. Additionally, as a case study, this paper discusses the benefits achieved using this approach.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"12 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122226734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Algorithm for Selecting Sampling Rate in Arbitrary Waveform Generator
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532524
Yindong Xiao, Guangkun Guo, Yu Chen, Wenhao Zhao, Ke Liu, Lei Huang
To address the problem that an automatic test equipment (ATE) user cannot set an appropriate output sampling rate when the arbitrary waveform generator's (AWG's) internal parameters are unavailable, this paper proposes a sampling rate selection algorithm based on rational sampling rate conversion (RSRC) theory. Under anti-aliasing constraints, the algorithm selects an output sampling rate that is a fractional multiple (L/M) of the original one within an acceptable range, and converts the waveform to that output sampling rate by RSRC. In this algorithm, the acceptable sampling rate range can be calculated and used to obtain the upper limit of L. Since the output waveform length must be less than the memory depth, a lower limit on the required memory depth is obtained for which a complete waveform can still be generated. Experimental results show that, compared with fixed RSRC, the proposed algorithm reduces both the computational complexity of the sampling rate conversion and the required AWG memory depth; compared with arbitrary sampling rate conversion (ASRC), it also reduces the harmonics of the output signal.
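A hedged sketch of the L/M selection idea under assumed bounds: search reduced fractions L/M whose output rate falls in an acceptable band and whose output length fits the AWG memory depth, preferring small L to keep the polyphase conversion cheap. This illustrates the constraint structure, not the paper's derivation of the limits on L and the memory depth.

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def select_ratio(fs_in, fs_lo, fs_hi, n_samples, mem_depth, max_val=64):
    """Pick (L, M) with the smallest L meeting rate and memory constraints."""
    best = None
    for m in range(1, max_val + 1):
        for l in range(1, max_val + 1):
            if gcd(l, m) != 1:
                continue                      # skip non-reduced fractions
            out_len = (n_samples * l) // m    # resampled waveform length
            if fs_lo <= fs_in * l / m <= fs_hi and out_len <= mem_depth:
                if best is None or l < best[0]:
                    best = (l, m)
    return best  # (L, M), or None if no admissible rate exists

x = np.sin(2 * np.pi * 0.01 * np.arange(4096))          # toy waveform at fs_in
lm = select_ratio(1e9, 0.98e9, 1.05e9, len(x), mem_depth=8192)
if lm is not None:
    y = resample_poly(x, up=lm[0], down=lm[1])           # rational L/M conversion
```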
{"title":"An Algorithm for Selecting Sampling Rate in Arbitrary Waveform Generator","authors":"Yindong Xiao, Guangkun Guo, Yu Chen, Wenhao Zhao, Ke Liu, Lei Huang","doi":"10.1109/AUTEST.2018.8532524","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532524","url":null,"abstract":"To solve the problem that auto test equipment (ATE) user cannot set an appropriate output sampling rate due to the missing of arbitrary waveform generator's internal parameters, this paper proposes a sampling rate selecting algorithm based on rational sampling rate conversion (RSRC) theory. Under anti-aliasing conditions, the algorithm selects an output sampling rate, as fractional-times (L/M) as original one, in acceptable range and converts waveform to the output sampling rate by RSRC. In this algorithm, the acceptable sampling rate range can be calculated and be used to get the upper limit of L. Since the value of output waveform length must be less than the memory depth, lower limit of the required memory depth is obtained when a complete waveform can be generated. The experiment result shows that the computational complexity of sampling rate conversion and required memory depth of AWG are both reduced with proposed algorithm comparing with fixed RSRC; the harmonic of output signal is decreased with proposed algorithm comparing with arbitrary sampling rate conversion (ASRC).","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130429277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring Manufacturing Test Data Analysis Quality
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532518
A. Burkhardt, S. Berryman, Ashley Brio, S. Ferkau, Gloria Hubner, K. Lynch, Susan Mittman, Kathy Sonderer
Manufacturing test data volumes are constantly increasing. While there has been extensive focus in the literature on big data processing, less attention has been paid to data quality, and considerably less specifically to manufacturing test data quality. This paper presents a fully automated test data quality measurement developed by the authors to facilitate analysis of manufacturing test operations, resulting in a single number used to compare manufacturing test data quality across programs and factories and to focus effort cost-effectively. The automation enables program and factory users to see, understand, and improve their test data quality directly. Immediate improvements in test data quality speed manufacturing test operation analysis, reducing elapsed time and overall spend in test operations.

Data quality has significant financial impacts on businesses [1]. While manufacturing cost models are well understood, data quality cost models are less so (see Eppler & Helfert [2], who review manufacturing cost models and create a taxonomy for data quality costs). Kim & Choi [3] discuss measuring data quality costs, and a rudimentary data quality cost calculation is described in [4]. Haug et al. [5] describe a classification of costs for poor data quality; while they do not provide a cost calculation, they do define optimality for data quality. Laranjeiro et al. [6] provide a recent survey of poor data quality classification. Ge & Helfert [7] extend the work in [2] and provide an updated review of data quality costs. Test data is specifically addressed in the context of data processing in [8]. Big data quality efforts are reviewed in [9], [10]. Data quality metrics are discussed in [11], and requirements for data quality metrics are identified in [12]. Data inconsistencies are detailed in [13], while categorical data inconsistencies are explained in [14].

In the current work, manufacturing test data quality is directly correlated to the speed of manufacturing test operations analysis. A measurement of manufacturing test data quality indicates the speed at which analysis can be performed, and increases in the test data quality score have precipitated increases in the speed of analysis, described herein.
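For intuition, a minimal sketch of a single-number quality score (an assumed formula, not the authors' metric): combine simple completeness and consistency checks over test records into a 0-100 value that can be compared across programs and factories.

```python
def data_quality_score(records: list[dict],
                       required: tuple[str, ...] = ("serial", "test_name",
                                                    "result", "units",
                                                    "timestamp")) -> float:
    """Score a batch of test records on completeness and unit consistency."""
    if not records:
        return 0.0
    # completeness: fraction of records with all required fields populated
    complete = sum(all(r.get(k) not in (None, "") for k in required)
                   for r in records) / len(records)
    # consistency: every record for the same test uses the same units
    units: dict[str, set] = {}
    for r in records:
        units.setdefault(r.get("test_name", "?"), set()).add(r.get("units"))
    consistent = sum(len(u) == 1 for u in units.values()) / max(len(units), 1)
    return 100.0 * (0.5 * complete + 0.5 * consistent)  # assumed equal weights
```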
{"title":"Measuring Manufacturing Test Data Analysis Quality","authors":"A. Burkhardt, S. Berryman, Ashley Brio, S. Ferkau, Gloria Hubner, K. Lynch, Susan Mittman, Kathy Sonderer","doi":"10.1109/AUTEST.2018.8532518","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532518","url":null,"abstract":"Manufacturing test data volumes are constantly increasing. While there has been extensive focus in the literature on big data processing, less focus has existed on data quality, and considerably less focus has been placed specifically on manufacturing test data quality. This paper presents a fully automated test data quality measurement developed by the authors to facilitate analysis of manufacturing test operations, resulting in a single number used to compare manufacturing test data quality across programs and factories, and focusing effort cost-effectively. The automation enables program and factory users to see, understand, and improve their test data quality directly. Immediate improvements in test data quality speed manufacturing test operation analysis, reducing elapsed time and overall spend in test operations. Data quality has significant financial impacts to businesses [1]. While manufacturing cost models are well understood, data quality cost models are less well understood (see Eppler & Helfert [2] who review manufacturing cost models and create a taxonomy for data quality costs). Kim & Choi [3] discuss measuring data quality costs, and a rudimentary data quality cost calculation is described in [4]. Haug et al. [5] describe a classification of costs for poor data quality, and while they do not provide a cost calculation, they do define optimality for data quality. Laranjeiro et al. [6] have a recent survey of poor data quality classification. Ge & Helfert [7] extend the work in [2], and provide an updated review of data quality costs. Test data is specifically addressed in the context of data processing in [8]. Big data quality efforts are reviewed in [9], [10]. Data quality metrics are discussed in [11], and requirements for data quality metrics are identified in [12]. Data inconsistencies are detailed in [13], while categorical data inconsistencies are explained in [14]. In the current work, manufacturing test data quality is directly correlated to the speed of manufacturing test operations analysis. A measurement for manufacturing test data quality indicates the speed at which analysis can be performed, and increases in the test data quality score have precipitated increases in the speed of analysis, described herein.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130442165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design & Production Verification Lifecycle Of An EW Receiver Line Replaceable Unit (LRU) According To The Military Standards
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532505
Sabiha Hande Koru Başoğlu, Zafer Savaş
This paper reviews the typical design and production verification lifecycle of an EW receiver Line Replaceable Unit (LRU) according to military standards and conforming hardware development processes. The verification lifecycle of such an LRU begins with the preliminary design review. The test design engineer is responsible for reviewing the requirements in terms of testability. In the next step, test definitions for each requirement are prepared, and test infrastructure including test software, cabling, test fixtures, mechanical fixtures, Automatic Test Equipment, etc. is designed. Typical milestones of the verification tests are:
– Functional tests in a laboratory environment,
– Environmental conditions tests according to standards such as MIL-STD-810,
– Electromagnetic Compatibility tests according to standards such as MIL-STD-461.
After the corresponding tests on the prototype unit have been completed, the unit is ready for integration tests, and the results of the conducted tests are reported in the Hardware Test Report. Finally, the necessary production test infrastructure, including the test setup, test documentation, and Environmental Stress Screening infrastructure, is prepared by the same test design engineer for verifying the mass production units. Within this paper, for a typical EW receiver LRU, examples of the types of tests conducted, typical testing times, characteristics of the test setups, and difficulties encountered during the whole testing effort are also given.
{"title":"Design & Production Verification Lifecycle Of An EW Receiver Line Replaceable Unit (LRU) According To The Military Standards","authors":"Sabiha Hande Koru Başoğlu, Zafer Savaş","doi":"10.1109/AUTEST.2018.8532505","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532505","url":null,"abstract":"This paper reviews the typical design and production verification lifecycle of an EW receiver Line Replacement Unit (LRU) according to the military standards and conforming the hardware development processes. The verification lifecycle of such an LRU begins with the preliminary design review. Test design engineer is responsible for reviewing the requirements in terms of testability. In the next step test definitions for each requirement are prepared and test infrastructures including test software, cabling, test fixtures, mechanical fixtures, Automatic Test Equipment etc. are designed. Typical milestones of the verification tests are; –Functional tests in laboratory environment, –Environmental conditions tests according to standards such as MIL-STD-810 –Electromagnetic Compatibility tests according to the standards such as MIL-STD-461 After the corresponding tests on the prototype unit have been completed, the unit is ready for integration tests and the results of the conducted tests are reported in Hardware Test Report. Finally, the necessary production test infrastructure including test setup, test documentation, Environmental Stress Screening infrastructure are prepared by the same test design engineer for verifying the mass production units. Within this paper, for a typical EW receiver LRU, examples for the types of tests conducted, typical testing times, characteristics of the test setups, difficulties encountered during the whole testing activities are also given.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127390971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Future of PCB Diagnostics and Trouble-shooting
Pub Date: 2018-09-01 | DOI: 10.1109/autest.2018.8532508
S. R. Sabapathi
For a product to be commercially successful, any manufactured printed circuit board (PCB) needs to be fully tested for functionality at the production site and maintained over its lifetime with minimum effort and expense. This paper presents the techniques and equipment used, past and present, for fault diagnosis and identification of faulty components on a PCB. With present-day technology of high-density, high-pin-count ASIC/FPGA chipsets and SOIC devices, it becomes highly challenging for the test and maintenance engineer to trouble-shoot and identify faults at the component level using present-day techniques and equipment. The economics may not allow replacing the entire circuit board, especially for highly expensive defence electronic products. Even keeping an inventory of spare boards for many years is a challenge. Component obsolescence and OEMs discontinuing support are yet other problems. So, what is the future of PCB trouble-shooting and component-level maintenance? This paper suggests various trouble-shooting techniques and equipment that can help component-level maintenance. One such future solution could be embedding functional self-tests into every device a PCB assembly holds, checked by a simple JTAG command.
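A hypothetical sketch of that closing idea: each device on the board exposes a built-in self-test reachable through the boundary-scan chain, and a host routine walks the chain collecting pass/fail results. The JtagAdapter interface, the opcode, and the pass convention are all invented for illustration; IEEE 1149.1 defines RUNBIST as an optional instruction whose actual opcode comes from each device's BSDL file, and real tooling is vendor-specific.

```python
class JtagAdapter:
    """Placeholder interface for a boundary-scan controller (assumed API)."""
    def shift_ir(self, device: int, opcode: int) -> None: ...
    def run_test_idle(self, cycles: int) -> None: ...
    def shift_dr(self, device: int, bits: int) -> int: ...

RUNBIST = 0x09   # assumed opcode; the real value is device-specific (BSDL)

def board_self_test(jtag: JtagAdapter, n_devices: int) -> dict[int, bool]:
    """Run each device's built-in self-test and collect pass/fail results."""
    results = {}
    for dev in range(n_devices):
        jtag.shift_ir(dev, RUNBIST)      # select the device's self-test
        jtag.run_test_idle(100_000)      # let the BIST run to completion
        results[dev] = jtag.shift_dr(dev, 1) == 1   # 1 = pass (assumed)
    return results
```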
{"title":"The Future of PCB Diagnostics and Trouble-shooting","authors":"S. R. Sabapathi","doi":"10.1109/autest.2018.8532508","DOIUrl":"https://doi.org/10.1109/autest.2018.8532508","url":null,"abstract":"Any PCB manufactured needs to be tested fully for its functionality at the production site and to be maintained for its life time with minimum effort and expense for a product to be commercially successful. This paper presents the techniques and equipment that were used in fault diagnosis and identifying of the faulty components in an electronic Printed circuit board (PCB) in the past and present. With present day technology of high density - high pin count ASIC / FPGA chip sets and SOIC devices, it becomes highly challenging for the test and maintenance engineer to trouble-shoot and to identify faults at component level using the present-day techniques and equipment. The economics may not allow to replace the entire circuit board especially in cases of highly expensive defence electronic products. Even up keeping an inventory of spare boards for many years is a challenge. Component obsolescence and OEM shutting down support is yet another problem. So, what is the future of PCB trouble-shooting and component level maintenance? This paper suggests various trouble-shooting techniques and equipment that can help a component level maintenance. One such future solution could be embedding functional self-tests into every device a PCB assembly holds and they are checked by a simple JTAG command","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130759902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Challenges of Replacing Legacy Analog Instrumentation During TPS Rehost Projects
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532542
Y. Eracar, Thomas Jacobs
This paper starts with a brief overview of the Test Program Set (TPS) life cycle and explains the TPS maintenance (and/or re-host) issues that come with long-term support requirements. The focus of the paper is the compatibility issues found between the legacy and the replacement analog instrumentation during two recent TPS re-host projects. A detailed analysis is provided of the compatibility issues and the applied solutions, from the perspective of both the TPS developer and the analog instrument vendor. The paper concludes with a section on possible approaches to designing a new analog instrument that must replace a legacy instrument without sacrificing the features necessary for modern test requirements.
{"title":"Challenges of Replacing Legacy Analog Instrumentation During TPS Rehost Projects","authors":"Y. Eracar, Thomas Jacobs","doi":"10.1109/AUTEST.2018.8532542","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532542","url":null,"abstract":"This paper starts with a brief overview of Test Program Set (TPS) life cycle and explains TPS maintenance (and/or re-host) issues that come with long-term support requirements. The focus of the paper is the compatibility issues found between the legacy and the replacement analog instrumentation during two recent TPS re-host projects. A detailed analysis is provided for the compatibility issues and the applied solutions from the perspective of both the TPS developer and the analog instrument vendor. The paper is concluded with a section on possible approaches to design a new analog instrument that needs to replace a legacy instrument without sacrificing from the features necessary for modern test requirements.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132034210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}