Automated Testing Importance and Impact
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532522
Joseph Bosas
History is filled with examples where software testing and adherence to Capability Maturity Model Integration may have prevented disaster. Consider just two past examples, the EDS Child Support System and the Ariane 5 Flight 501, where software error-related losses totaled approximately $16 billion, not including the loss of goodwill and the negative social impact. Honeywell manages the National Security Campus in Kansas City, MO, and Albuquerque, NM, for the U.S. Department of Energy's National Nuclear Security Administration. The Kansas City National Security Campus provides technology solutions to national security challenges and is commissioned to increase the integrity of the software it develops. Honeywell is therefore beginning to utilize automated regression testing, not only to achieve cost savings (compared with performing tests manually) but also to push the envelope of traditional automated testing through an entire software diagnostics suite that includes the following: analysis of algorithmic time efficiency, the percentage of code executed by tests, and automated capture and storage of testing reports for historical purposes.
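A minimal sketch of how the three diagnostics-suite elements named above (execution timing, test coverage percentage, and archived test reports) could be tied together is shown below. It is not Honeywell's implementation; it assumes a Python regression suite discoverable by unittest, the third-party coverage package, and an invented reports/ archive directory.

```python
import json
import time
import unittest
from datetime import datetime, timezone
from pathlib import Path

import coverage  # third-party package, assumed available


def run_diagnostics(test_dir: str = "tests", archive_dir: str = "reports") -> dict:
    """Run the regression suite while recording wall time, coverage, and a report."""
    cov = coverage.Coverage()
    cov.start()

    start = time.perf_counter()
    suite = unittest.defaultTestLoader.discover(test_dir)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    elapsed = time.perf_counter() - start

    cov.stop()
    cov.save()
    percent_covered = cov.report(show_missing=False)  # returns total coverage percentage

    report = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tests_run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "elapsed_seconds": round(elapsed, 3),
        "coverage_percent": round(percent_covered, 1),
    }

    # Archive the report so results accumulate into a historical record.
    out_dir = Path(archive_dir)
    out_dir.mkdir(exist_ok=True)
    out_path = out_dir / f"regression_{report['timestamp'].replace(':', '-')}.json"
    out_path.write_text(json.dumps(report, indent=2))
    return report
```

A scheduler or CI job could call run_diagnostics() after every change set, so the archived JSON reports build up the historical record the abstract describes.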
{"title":"Automated Testing Importance and Impact","authors":"Joseph Bosas","doi":"10.1109/AUTEST.2018.8532522","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532522","url":null,"abstract":"History is filled with examples where software testing and adherence to Capability Maturity Model Integration may have prevented disaster. Consider just two past examples, the EDS Child Support System and the Ariane 5 Flight 501, where software error-related losses totaled approximately $16 billion, not including the loss of goodwill and the negative social impact. Honeywell manages the National Security Campus in Kansas City, MO and Albuquerque, NM, for the U.S. Department of Energy's National Nuclear Security Administration. The Kansas City National Security Campus provides technology solutions to national security challenges and is commissioned to increase the integrity of software developed. Honeywell is therefore beginning to utilize automated regression testing, not only to achieve cost savings (as compared to manual performance of tests) but also to push the envelope of traditional automated testing through an entire software diagnostics suite that includes the following: analysis of algorithmic time efficiency, percentage of code executed by tests, and automated capture/storage of testing reports for historical purposes.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114776292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-Based Design Patterns for describing Test Station and Resource Capabilities
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532514
C. Gorringe, I. Neag
The paper considers the various techniques and design patterns available for modelling test station and instrument capabilities, in terms of the resources they have available, the ports any signals would pass through, and the test capabilities available at each port. These standard models are used in resource allocation algorithms to automatically map test requirements to ATE resources, to identify exception reports (missing capabilities), etc. The paper considers modelling resource dependencies in IEEE 1671.2 ATML Instrument Description and IEEE 1671.6 ATML Test Station Description, proposing a set of simple modeling patterns that describe independent, alternative, and concurrent capabilities. The consistent use of these patterns produces ATML capability descriptions that are easy to interpret and update, benefiting the long-term maintainability of the automatic test system.
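As an informal illustration only (plain Python rather than the ATML XML schemas the paper targets), the distinction between independent, alternative, and concurrent capabilities might be modeled as below; every class, field, and function name here is invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List


class Grouping(Enum):
    INDEPENDENT = auto()   # capabilities usable at any time, on their own
    ALTERNATIVE = auto()   # mutually exclusive: only one member may be in use at a time
    CONCURRENT = auto()    # members are available (or required) together


@dataclass
class Capability:
    name: str       # e.g. "DC voltage source, 0-32 V"
    port: str       # instrument or station port the capability is reached through
    resource: str   # underlying hardware resource that provides it


@dataclass
class CapabilityGroup:
    grouping: Grouping
    members: List[Capability] = field(default_factory=list)


def candidates(requirement: str, groups: List[CapabilityGroup]) -> List[Capability]:
    """Return capabilities whose name matches a test requirement.

    A real resource-allocation algorithm would also honour the grouping rules,
    e.g. reject two requirements that both claim members of one ALTERNATIVE group.
    """
    return [c for g in groups for c in g.members if requirement in c.name]
```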
{"title":"Model-Based Design Patterns for describing Test Station and Resource Capabilities","authors":"C. Gorringe, I. Neag","doi":"10.1109/AUTEST.2018.8532514","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532514","url":null,"abstract":"The paper considers the various techniques and design patterns available for modelling test station and instrument capabilities, in terms of the resources they have available, the ports any signals would go through, and the test capabilities available at each port. These standard models are used in resource allocation algorithms, to automatically map test requirements to ATE resources, to identify exception reports (missing capabilities), etc. The paper considers modelling resource dependencies in IEEE 1671.2 ATML Instrument Description and IEEE 1671.6 ATML Test Station Description, proposing a set of simple modeling patterns that describe independent, alternative, and concurrent capabilities. The consistent use of these patterns produces ATML capability description that are easy to interpret and update, benefiting the long-term maintainability of the automatic test system.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124937210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting Multiple Runtime Engines on a Common and Scalable Automatic Test Equipment (ATE) Framework
Pub Date: 2018-09-01 | DOI: 10.1109/autest.2018.8532523
S. Wegener
New Automatic Test Equipment (ATE) must be capable of supporting multiple runtime systems to be a cost-effective solution across multiple military platforms. This paper will present a third-generation ATE architecture capable of supporting legacy runtime systems, commercial off-the-shelf runtimes, and a hybrid runtime system based on Microsoft's Visual Studio product line. The paper will touch on rapid development of an ATE system and the techniques to test the software layers exposed to the test program developer. Presented will be trade-offs between cost, schedule, and long-term supportability based on the requirements for developing and sustaining test programs over the long periods generally required by military systems. Included is a discussion of graphical user interfaces (UIs) and the possibility of a generic UI for test development. The paper will cover a brief history of how this third-generation ATE architecture is used to support current programs.
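To make the "common framework, multiple runtimes" idea concrete, one could imagine a thin adapter layer that presents a single interface to the station software and dispatches each test program to the engine it was authored for. The sketch below is purely illustrative; the engine classes, file-extension mapping, and method names are placeholders, not the architecture presented in the paper.

```python
from abc import ABC, abstractmethod
from pathlib import Path


class RuntimeEngine(ABC):
    """Single interface the station software sees, whichever engine runs the TPS."""

    @abstractmethod
    def load(self, test_program: str) -> None: ...

    @abstractmethod
    def run(self) -> int:
        """Execute the loaded test program and return a pass/fail code."""


class LegacyRuntimeEngine(RuntimeEngine):
    """Placeholder for a legacy runtime hosting older test programs."""

    def load(self, test_program: str) -> None:
        print(f"loading legacy TPS {test_program}")

    def run(self) -> int:
        return 0


class HybridDotNetEngine(RuntimeEngine):
    """Placeholder for a Visual Studio / .NET-based hybrid runtime."""

    def load(self, test_program: str) -> None:
        print(f"loading .NET-based TPS {test_program}")

    def run(self) -> int:
        return 0


# Invented mapping: choose the engine from the test program's file type.
ENGINES = {".leg": LegacyRuntimeEngine, ".dll": HybridDotNetEngine}


def execute(test_program: str) -> int:
    """Dispatch a test program to its runtime through the common interface."""
    engine = ENGINES[Path(test_program).suffix]()   # KeyError -> unsupported runtime
    engine.load(test_program)
    return engine.run()


print(execute("uut_power_supply.dll"))
```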
{"title":"Supporting Multiple Runtime Engines on a Common and Scalable Automatic Test Equipment (ATE) Framework","authors":"S. Wegener","doi":"10.1109/autest.2018.8532523","DOIUrl":"https://doi.org/10.1109/autest.2018.8532523","url":null,"abstract":"New Automatic Test Equipment (ATE) must be capable of supporting multiple runtime systems to be a cost effective solution across multiple military platforms. This paper will present a third generation ATE architecture capable of supporting legacy runtime systems, commercial off the shelf runtimes and a hybrid runtime system based on Microsoft's Visual Studio product line. The paper will touch on rapid development of an ATE system and the techniques to test the software layers exposed to test program developer. Presented will be trade-offs between cost, schedule and long term supportability based on the requirements for developing and sustaining test programs over a long period of time as generally required by military systems. Included is a discussion on graphical User Interfaces (UI) and the possibility of a generic UI for test development. The paper will cover a brief history of the how this third generation of ATE architecture is used to support current programs.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123531208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alternative Multivariate Methods for State Estimation, Anomaly Detection, and Prognostics
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532507
F. Szidarovszky, D. Goodman, Richard Thompson, H. Manhaeve
To secure operational readiness of components, equipment, subsystems, and systems, and to assure successful job completion, appropriate monitoring, inspection, preventive maintenance, repair, and replacement strategies are needed. This requires suitable sensors and measurement approaches that support continuous monitoring of key operational parameters, with the aim of discovering anomalies and assessing degradation levels, State of Health (SoH), and Remaining Useful Life (RUL) of any critical component involved. Serving this purpose, multivariate methods are important tools for analyzing multiple data sequences, providing a means to compare actual measurement data against data representing a healthy system and to make qualified assessments, typically based on measuring the distance between the actual system and the healthy system. The Multivariate State Estimation Technique (MSET) uses the least squares approach, the Auto-Associative Kernel Regression (AAKR) method uses the nonparametric kernel estimation procedure, while the usage of the Mahalanobis distance is based on the covariance matrix of the different measured parameters. These methods are all based on specially selected distance definitions. In this paper, several extensions and variants of these procedures, yielding alternative measures, are introduced, analyzed, and examined with a focus on their advantages and disadvantages. Possible application areas are also outlined.
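As a worked example of one of the distance measures mentioned, the Mahalanobis distance summarizes the healthy baseline by its mean vector and covariance matrix and scores new observations by their covariance-weighted distance from that mean. The sketch below uses invented data and thresholds and is not taken from the paper.

```python
import numpy as np


def mahalanobis(x: np.ndarray, baseline: np.ndarray) -> float:
    """Distance of observation x from the healthy baseline data (rows = samples)."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))


# Healthy baseline: 200 samples of two correlated parameters (e.g. current, temperature).
rng = np.random.default_rng(0)
healthy = rng.multivariate_normal([5.0, 40.0], [[0.04, 0.05], [0.05, 1.0]], size=200)

print(mahalanobis(np.array([5.1, 40.5]), healthy))   # small -> consistent with healthy data
print(mahalanobis(np.array([6.0, 47.0]), healthy))   # large -> candidate anomaly
```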
{"title":"Alternative Multivariate Methods for State Estimation, Anomaly Detection, and Prognostics","authors":"F. Szidarovszky, D. Goodman, Richard Thompson, H. Manhaeve","doi":"10.1109/AUTEST.2018.8532507","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532507","url":null,"abstract":"To secure operational readiness of components, equipment, subsystems and systems and to assure successful job completion, appropriate monitoring, inspection and preventive maintenance, repair and replacement strategies are needed. Such requires suitable sensors and measurement approaches serving continuous monitoring of key operational parameters, aiming at discovering anomalies and assessing degradation levels, State of Health (SoH) and Remaining Useful Life (RUL) of any critical component involved. Serving this purpose, multivariate methods are important tools to analyzing multiple data sequences, providing means to compare actual measurement data against data representing a healthy system and making qualified assessments, typically based on measuring the distance between the actual system and the healthy system. The Multivariate State Estimation Technique (MSET) uses the least squares approach, the Auto-Associative Kernel Regression (AAKR) method uses the nonparametric Kernel estimation procedure, while the usage of the Mahalanobis distance is based on the covariance matrix of the different measured parameters. These methods are all based on specially selected distance definitions. In this paper, several extensions and variants of these procedures, yielding alternative measures, are introduced, analyzed and examined with focus on their advantages and disadvantages. Possible application areas are also outlined.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130010177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Applications Reduce Cyber Attack Surface for Test Program Sets and Station Software
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532530
C. P. Heagney, L. J. Walker
The Department of Defense (DoD) spends significant amounts of money addressing software obsolescence and compatibility across the Enterprise. Cybersecurity concerns drive constant change in operating systems (OS), software, and hardware. Automated test systems are particularly impacted by these changes because Test Program Sets (TPS) are validated to detect specific faults and then remain unchanged. To validate efficacy, faults are inserted into avionics to confirm that the TPS correctly detects and isolates the fault. Many times, faults are inserted to create circuit opens or shorts by unsoldering and lifting pins. This is a lengthy, costly process requiring expert engineers and technicians and access to the avionics, and it risks damaging good avionics in the process. Future changes to TPSs or station software increase uncertainty that previously detected faults will still be correctly isolated. A technical solution is needed to reduce cyber risk while maintaining existing TPS and station software in a known good state. This research presents Virtual Applications as a solution with widespread applicability across the DoD, industry, and academia. Application virtualization is a process that packages computer programs and their dependencies from the underlying OS into a single executable bundle. Applications are then isolated from the host OS. In this paper, we present the latest virtual application development by the US Navy, with specific examples from the Automated Test Equipment (ATE) community. Virtual Applications allow legacy software to continue functioning on modern hardware and operating systems, limit the cyberattack surface of fielded systems, reduce total ownership cost, and reduce technical risk from changes to known good software.
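As a rough, simplified stand-in for the bundling idea (not the Navy's actual virtualization tooling), the following sketch shows a launcher that runs a legacy executable against libraries carried inside its own bundle rather than those installed on the host OS; the bundle path and file names are hypothetical.

```python
import os
import subprocess
from pathlib import Path

# Hypothetical bundle layout: the legacy test-station executable plus every
# library it needs, carried next to it instead of installed on the host OS.
BUNDLE = Path("C:/bundles/legacy_station")


def launch_isolated(exe_name: str = "runtime.exe") -> int:
    """Run the bundled executable against its own libraries, not the host's."""
    env = dict(os.environ)
    # Put the bundle's private libraries first so host-installed versions are ignored.
    env["PATH"] = f"{BUNDLE / 'lib'}{os.pathsep}{env.get('PATH', '')}"
    completed = subprocess.run([str(BUNDLE / exe_name)], cwd=BUNDLE, env=env)
    return completed.returncode
```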
{"title":"Virtual Applications Reduce Cyber Attack Surface for Test Program Sets and Station Software","authors":"C. P. Heagney, L. J. Walker","doi":"10.1109/AUTEST.2018.8532530","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532530","url":null,"abstract":"The Department of Defense (DoD) spends significant amounts of money addressing software obsolescence and compatibility across the Enterprise. Cybersecurity concerns drive constant change in operating systems (OS), software, and hardware. Automated test systems are particularly impacted by these changes because Test Program Sets (TPS) are validated to detect specific faults, and then remain unchanged. To validate efficacy, faults are inserted into avionics to confirm the TPS correctly detects and isolates the fault. Many times faults are inserted to create circuit opens or shorts by unsoldering and lifting pins. This is a lengthy, costly process requiring expert engineers and technicians, access to the avionics, and risk to damage the good avionics in the process. Future changes to TPSs or station software increase uncertainty that previously detected faults will still be correctly isolated. A technical solution is needed to reduce cyber risk, while maintaining existing TPS and station software in a known good state. This research presents Virtual Applications as a solution with widespread applicability across the DoD, Industry, and Academia. Application virtualization is a process that packages computer programs and their dependencies from the underlying OS into a single executable bundle. Applications are then isolated from the host OS. In this paper, we present the latest virtual application development by the US Navy with specific examples from the Automated Test Equipment (ATE) community. Virtual Applications allow legacy software to continue functioning on modern hardware and operating systems, limit cyberattack surface of fielded systems, reduce total ownership cost, and reduce technical risk from changes to known good software.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126740509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boundary-scan, a cradle-to-grave test, programming and maintenance solution that stands the test of time
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532501
Peter van den Eijnden, A. Sparks
Since its ratification by the IEEE in 1990, the 1149.1 standard, synonymous with boundary-scan and JTAG, has been relied upon heavily to address limited physical test access in complex and highly dense electronic designs, in industries including, but not limited to, industrial, tele/datacomm, automotive, and mil/aero. Over the years, the advancement of boundary-scan software and hardware and the evolution of JTAG-based standards have facilitated a vast range of test and in-system programming capabilities for use in prototype verification test, manufacturing/production test, system test, and even test of systems in the field. This paper will examine the use of JTAG/boundary-scan throughout the lifecycle of a system from conception to end-of-life support, touching on topics such as design-for-test techniques for board-level and system-level test, integration of boundary-scan with existing ATE (with an emphasis on intermediate-level and depot-level test), and a novel approach to performing remote boundary-scan test, diagnostics, and reconfiguration.
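To illustrate the kind of interconnect testing boundary-scan enables, a classic approach drives a unique binary "counting" code onto each net through the driver cells and compares what the receiver cells capture; a net that captures the wrong code points to an open or a short. The sketch below only models that pattern logic in plain Python and is not tied to any particular 1149.1 tool or to the TAP protocol details.

```python
from math import ceil, log2
from typing import Dict, List


def counting_patterns(nets: List[str]) -> Dict[str, List[int]]:
    """Assign each net a unique binary code, one bit per scan vector."""
    width = max(1, ceil(log2(len(nets) + 2)))   # avoid the all-0 and all-1 codes
    return {net: [(i + 1) >> b & 1 for b in range(width)] for i, net in enumerate(nets)}


def diagnose(driven: Dict[str, List[int]], captured: Dict[str, List[int]]) -> List[str]:
    """Report nets whose captured code differs from the driven code (open/short suspects)."""
    return [net for net in driven if captured.get(net) != driven[net]]


nets = ["A0", "A1", "A2", "D0"]
drive = counting_patterns(nets)
seen = dict(drive)
seen["A1"] = drive["A2"]            # simulate A1 capturing A2's code, e.g. a bridge fault
print(diagnose(drive, seen))        # ['A1']
```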
{"title":"Boundary-scan, a cradle-to-grave test, programming and maintenance solution that stands the test of time","authors":"Peter van den Eijnden, A. Sparks","doi":"10.1109/AUTEST.2018.8532501","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532501","url":null,"abstract":"Since ratification by the IEEE in 1990, the 1149.1 standard, synonymous with boundary-scan and JTAG, has been relied upon heavily to address limited physical test access challenges in complex and highly dense electronic designs in industries ranging from, but not limited to, Industrial, Tele/DataComm, Automotive and Mil/Aero. Over the years the advancement of boundary-scan software and hardware and the evolution of JTAG-based standards have facilitated a vast range of test and in-system programming capabilities for use in prototype verification test, manufacturing/production test, system test and even test of systems in the field. This paper will examine the use of JTAG/boundary-scan throughout the lifecycle of a system from conception to end-of-life support, touching on topics such as Design-For-Test techniques for board-level and system-level test, integration of boundary-scan with existing ATE, with an emphasis on Intermediate-level and Depot-level test as well as a novel approach to perform remote boundary-scan test, diagnostics and reconfiguration.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129102639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Execution of a Test Program like a Military Campaign
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532503
Juan E Ramos, Vi T Weaver, Edward Ly
The test strategy developed for a program has many similarities to the development of a military strategy. Both need a strong central leader to drive decisions and maintain focus on the end goal. In the military, the commanding general defines and drives the military strategy; likewise, on a product development program, the Test Architect defines and drives the test strategy. Ultimately, the success of that program is decisively determined by the robustness of the program's test strategy. The test strategy is a living document that defines all test activities across the product lifecycle required to meet a defined end state. Consequently, the effective design verification and validation of a new product in the market is wholly dependent on the careful planning and execution of the test strategy. But what defines a good test strategy, and how does that drive a successful test program? This paper discusses how test tenets, like a set of fundamental military principles of war, must first be defined as part of the test strategy. These tenets establish the rules for test program execution such that, no matter how the test program may evolve, the tenets can never be broken; otherwise, additional cost and schedule impacts occur. Additionally, test goals must be identified to constrain and bound the scope of the test program. Each test goal should have a set of key processes, or tactics in the military vernacular, and associated metrics to drive the execution of the test strategy and provide status. Test strategy execution, from defining test tenets to identifying processes and metrics, drives successful test program execution and ensures that the delivered product is fully verified and validated. Establishing the test strategy early in the product development lifecycle ensures that all the key stakeholders are aligned and remain focused on the outcome of the test program, and guarantees successful delivery of the product to the customer.
{"title":"Execution of a Test Program like a Military Campaign","authors":"Juan E Ramos, Vi T Weaver, Edward Ly","doi":"10.1109/AUTEST.2018.8532503","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532503","url":null,"abstract":"The test strategy developed for a program has many similarities to the development of a military strategy. Both need a strong central leader to drive decisions and maintain focus on the end goal. In the military, the commanding General defines and drives the military strategy; likewise, on a product development program, the Test Architect defines and drives the test strategy. Ultimately, the success of that program is decisively determined by the program test strategy robustness. The test strategy is a living document that defines all test activities across the product lifecycle required to meet a defined end-state. Consequently, the effective design verification and validation of a new product in the market is wholly dependent on the careful planning and execution of the test strategy. But what defines a good test strategy and how does that drive a successful test program? This paper discusses how test tenets, like a set of fundamental military principles of war, must first be defined as part of the test strategy. These tenets establish the rules for test program execution such that no matter how the test program may evolve, the tenets can never be broken, else additional cost and schedule occur. Additionally, test goals must be identified to constrain and bound the scope of the test program. Each test goal should have a set of key processes, or tactics as defined in the military vernacular, and associated metrics to drive the execution of the test strategy and provide status. Test strategy execution, from defining test tenets to identifying processes and metrics, drive successful test program execution and ensure that the delivered product is fully verified and validated. Establishing the test strategy early on in the product development lifecycle ensures that all the key stakeholders are aligned and remain focused on the outcome of the test program and guarantees successful delivery of the product to the and guarantees successful delivery of the product to the Customer.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117039427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Multi-Channel DC-Biased Burn-in Test System using Hall Effect Current Sensor
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532525
Rıdvan Sürbahanli, Kağan Berk Tanaydin
This manuscript reports on a fast, accurate, and cost-effective current sensing technique for multi-channel DC-biased burn-in test systems. Hybrid microwave modules designed for military and space platforms must undergo electrical burn-in and life tests according to the production-level military and space qualification standards. These tests require prolonged test durations on the order of hundreds of hours and multi-channel test setups, along with continuous current monitoring and recording for each device under test (DUT). Current monitoring for a single-channel setup is quite straightforward. However, for a multi-channel test scenario, using conventional test methods with a dedicated power supply for each DUT may lead to costly and bulky systems. A precision Hall-effect current sensor is a good alternative for monitoring the current of DUTs powered from a common power supply. For such a current sensor, as the current flows through the copper conduction path, it creates a magnetic field that is sensed by the integrated Hall-effect integrated circuit (IC) and converted into a linearly proportional voltage. To digitize the output voltage of the sensor, a precision analog-to-digital converter (ADC) with an SPI or I2C communication interface is used. One advantage of using the Hall-sensor structure to measure the target current is minimal power loss, owing to the non-contact magnetic sensing. In addition to the Hall-effect current sensor, we implemented a single-pole, single-throw (SPST) switch and a fast-acting fuse in each DUT line to protect the system in case of an early failure of a DUT. Using the above configuration, we conducted DC burn-in and life tests on various radio-frequency (RF) and microwave (MW) high-power hybrid modules.
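As a rough illustration of the per-channel sensing chain described (Hall-effect sensor, ADC, limit check), converting a raw ADC code back to DUT current and making a trip decision could look like the sketch below. The sensitivity, offset, resolution, and limit values are made-up placeholders, not the parameters of the authors' system.

```python
# Assumed placeholder values for a ratiometric Hall-effect sensor read by a 12-bit ADC.
VREF = 3.3             # ADC reference voltage, volts
ADC_BITS = 12
SENSITIVITY = 0.100    # sensor output, volts per ampere
ZERO_CURRENT_V = 1.65  # sensor output at 0 A (mid-supply for a bidirectional sensor)
CURRENT_LIMIT = 2.5    # amperes; above this the channel's SPST switch would be opened


def adc_code_to_current(code: int) -> float:
    """Convert a raw ADC code to DUT current in amperes."""
    volts = code * VREF / (2 ** ADC_BITS - 1)
    return (volts - ZERO_CURRENT_V) / SENSITIVITY


def check_channel(channel: int, code: int) -> bool:
    """Return True if the channel is within limits; False means trip the switch."""
    amps = adc_code_to_current(code)
    print(f"channel {channel}: {amps:.3f} A")
    return abs(amps) <= CURRENT_LIMIT


check_channel(0, 2420)   # ~1.95 V -> ~3.0 A, over the 2.5 A limit -> returns False (trip)
```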
{"title":"Automated Multi-Channel DC-Biased Burn-in Test System using Hall Effect Current Sensor","authors":"Rıdvan Sürbahanli, Kağan Berk Tanaydin","doi":"10.1109/AUTEST.2018.8532525","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532525","url":null,"abstract":"This manuscript reports on a fast, accurate, and cost-effective current sensing technique for multi-channel DC-biased burn-in test systems. Hybrid microwave modules designed for military and space platforms must be undergone electrical burn-in and life tests according to the production-level military and space qualification standards. These tests require prolonged test durations on the order of hundreds of hours and multi-channel test setups along with the continuous current monitoring and recording for each device under test (DUT). Current monitoring for a single channel setup is quite straightforward. However, for multi-channel test scenario, using the conventional test methods with specific power supply for each DUT may lead to costly and bulky systems. Precision Hall-effect current sensor is a good alternative monitoring the current of DUTs using a common power supply. For such a current sensor, as the current flows through the copper conduction path, it creates a magnetic field that is sensed by the integrated Hall integrated circuit (IC) and converted into a linearly proportional voltage. To digitize the output voltage of the sensor, a precision analog-to-digital converter (ADC) with SPI or I2C communication interface is used. One advantage of using the Hall sensor structure for measuring the target current will result in minimal power loss due to the non-contact inductive detection. In addition to the utilization of Hall-effect current sensor, we implemented single-pole-single-throw (SPST) switch and fast acting fuse for each DUT line in order to protect the system, in case of an early failure in the DUTs. As a result, using the abovementioned configuration we conducted DC burn-in and life tests on various radio frequency (RF) and microwave (MW) high power hybrid modules.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115471724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Collection for Disconnected Diagnostics in a Net-Centric Environment
Pub Date: 2018-09-01 | DOI: 10.1109/AUTEST.2018.8532531
Josselyn Webb
An end-to-end, holistic approach to capturing diagnostic data for disconnected automatic test systems, as part of a global maintenance capability, is a challenge many maintenance organizations face today. This is especially true for the Marine Corps' Automatic Test Equipment Program (ATEP) supporting Ground Weapon Systems. These systems often operate under austere conditions, with little to no connectivity, making data collection by manual processes, at point of calibration, or during support representative site visits the only viable options. While the goal is to collect diagnostic data every thirty days, the reality is that in many cases data collection periodicity can exceed a year or more. A review of the data collected over the two-year period of 2016 to 2017 indicates that less than half of all systems supported by ATEP report annually. Of those systems, some showed evidence of data loss due to a reset of the diagnostic history databases. This evidence, coupled with the growing importance of diagnostic data, makes it imperative that we reduce the gap in reliable diagnostic data collection. The purpose of this study is to investigate the integration of recent developments in areas of technical convergence to support more effective diagnostic data collection. The specific areas that will be demonstrated are the Marine Corps' integration of Boeing's Health Management System (HMS), whose evolution has undergone much growth in support of Joint efforts, and the Marine Corps' Electronic Maintenance Support System (EMSS). This infrastructure will be used to demonstrate the diagnostic data needs of the Ground Radio Maintenance Automatic Test System (GRMATS). This paper will include descriptions of operational scenarios, prototypes, and integration through a partnership between the Marine Corps' ATEP and Penn State's Systems Integration Lab (SIL). It is the goal of this paper to demonstrate an integrated, technical solution that shows the potential of recent capabilities. By combining recent advancements in maintenance support, the author seeks to demonstrate a prototype environment that meets the goal of an end-to-end diagnostic data solution, one that bridges the air gap between disconnected automatic test systems and the test support enterprise and achieves previously unrealized goals for collection frequency.
{"title":"Data Collection for Disconnected Diagnostics in a Net-Centric Environment","authors":"Josselyn Webb","doi":"10.1109/AUTEST.2018.8532531","DOIUrl":"https://doi.org/10.1109/AUTEST.2018.8532531","url":null,"abstract":"An end-to-end, holistic approach to capturing diagnostic data for disconnected automatic test systems, as part of a global maintenance capability, is a challenge many maintenance organizations face today. This is especially true for the Marine Corp's Automatic Test Equipment Program (ATEP) supporting Ground Weapon Systems. These systems often operate under austere conditions, with little to no connectivity, making data collection by manual processes, at point of calibration, or during support representative site visits the only viable options. While the goal is to collect diagnostic data every thirty days, the reality is that in many cases data collection periodicity can exceed a year or more. A review of the data collected over the two year period of 2016 to 2017 indicates that less than half of all systems supported by ATEP report annually. Of those systems, there were some that showed evidence of loss of data due to a reset of the diagnostic history databases. This evidence, coupled with the growing importance of diagnostic data, make it imperative that we reduce the gap in reliable diagnostic data collection. The purpose of this study is to investigate the integration of recent developments in areas of technical convergence to support more effective diagnostic data collection. The specific areas that will be demonstrated are the Marine Corp's integration of Boeing's Health Management System (HMS), whose evolution has undergone much growth in support of Joint efforts, and the Marine Corp's Electronic Maintenance Support System (EMSS). This infrastructure will be used to demonstrate the diagnostic data needs of the Ground Radio Maintenance Automatic Tests System (GRMATS). This paper will include descriptions of operational scenarios, prototypes and integration through a partnership between Marine Corp's ATEP, and Penn State's Systems Integration Lab (SIL). It is the goal of this paper to demonstrate an integrated, technical solution that shows the potential of recent capabilities. By combining recent advancements in maintenance support, the author seeks to demonstrate a prototype environment that meets the goal of an end-to-end diagnostic data solution that bridges the air-gap between disconnected automatic tests systems and the test support enterprise, exceeding previously unrealized goals of collection frequency.","PeriodicalId":384058,"journal":{"name":"2018 IEEE AUTOTESTCON","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115399345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}