A neural network based postattack damage assessment system
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165832
P. Wang, L. Menegozzi
Key elements of an automated damage assessment (ADA) system will include ground-based sensors to survey and measure postattack damage, communication networks to link the sensors, a survival recovery center (SRC), a runway repair team (or robots) for rapid response, and advanced signal processors to perform the search-and-optimization process that yields the 'best' airbase recovery plan. To meet USAF ADA requirements, ITT Avionics has proposed the development of a hybrid signal processor consisting of algorithmic processors and neural networks. To improve damage assessment (DA) performance, key DA functions are implemented by neural networks. Owing to its intrinsically distributed processing, the neural network not only provides the high throughput required for DA but also achieves fault tolerance and graceful degradation, both of which are extremely important for the Rapid Runway Repair program.
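The abstract gives no network details; as a rough illustration of the kind of DA function that might be delegated to a neural network, the following minimal sketch classifies one runway segment's sensor readings into damage classes. All dimensions, weights, and class labels are hypothetical placeholders, not values from the paper; a real system would train the weights on surveyed damage data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical dimensions: 8 sensor features in, 3 damage classes out
# (e.g. intact / repairable / unusable for a runway segment).
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def assess(sensor_features):
    """Map one segment's sensor feature vector to class probabilities."""
    h = relu(W1 @ sensor_features + b1)
    return softmax(W2 @ h + b2)

print(assess(rng.normal(size=8)))  # e.g. [0.71, 0.21, 0.08]
```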
{"title":"A neural network based postattack damage assessment system","authors":"P. Wang, L. Menegozzi","doi":"10.1109/NAECON.1991.165832","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165832","url":null,"abstract":"Key elements of an automated damage assessment (ADA) will include ground-based sensors to survey and measure postattack damages, communication networks to link sensors, a survival recovery center (SRC), a runway repair team (or robots) for rapid response, and advanced signal processors to perform the 'search and optimization' processes for the 'best' airbase recovery plan. To meet the USAF ADA requirements, ITT Avionics has proposed the development of a hybrid signal processor. The system will consist of algorithmic processors and neural networks. To improve DA performance, key DA functions are implemented by neural networks. Due to the intrinsic nature of distributed processing power, the neural network not only provides the high throughput required for DA but it also achieves fault tolerance and graceful degradation, which are extremely important for the Rapid Runway Repair program.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130409856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of a 6-bit CMOS digital radio frequency memory
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165727
G. Kranz, M. Mehalic
The authors describe the implementation of a digital radio frequency memory (DRFM) on a single integrated circuit. A VHSIC Hardware Description Language (VHDL) model of the DRFM was completed and used to design the VLSI components of the DRFM architecture. The model performed the specified time- and frequency-shift functions. A DRFM with a 1K memory, a control unit, and a digital single-sideband modulator (DSSM) has been placed onto a single-chip silicon layout design.
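The abstract names a digital single-sideband modulator as the DRFM's frequency-shift element. The sketch below shows, in NumPy rather than hardware, one standard way such a single-sideband frequency shift can be realized: form the analytic signal by zeroing negative frequencies, rotate it with a complex exponential, and take the real part. The sample rate, test tone, and shift value are illustrative assumptions, not values from the chip design.

```python
import numpy as np

def ssb_frequency_shift(x, f_shift, fs):
    """Shift a real sampled signal x by f_shift Hz (single sideband)."""
    n = len(x)
    X = np.fft.fft(x)
    H = np.zeros(n)                 # analytic-signal filter
    H[0] = 1.0
    H[1:(n + 1) // 2] = 2.0         # double positive frequencies
    if n % 2 == 0:
        H[n // 2] = 1.0             # keep the Nyquist bin
    analytic = np.fft.ifft(X * H)   # negative frequencies removed
    t = np.arange(n) / fs
    return np.real(analytic * np.exp(2j * np.pi * f_shift * t))

fs = 10_000.0                       # assumed sample rate (Hz)
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * 1_000 * t)   # 1 kHz test tone
y = ssb_frequency_shift(x, 250.0, fs)  # tone moves to 1.25 kHz
```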
{"title":"Design of a 6-bit CMOS digital radio frequency memory","authors":"G. Kranz, M. Mehalic","doi":"10.1109/NAECON.1991.165727","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165727","url":null,"abstract":"The authors describe the implementation of a digital radio frequency memory (DRFM) on a single integrated circuit. A VHSIC Hardware Description Language (VHDL) model of the DRFM was completed and used to design the VLSI components of the DRFM architecture. The model performed the specified time and frequency shift functions. A DRFM, with a 1 K memory, a control unit, and a digital single-sideband modulator (DSSM) has been placed onto a silicon single chip layout design.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129955740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cognitive performance task for evaluation of human performance based on an extended Fitts' law paradigm
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165853
D. Repperger, E. Scarborough, L. Tripp
Summary form only given. A task extending the classical Fitts' law paradigm in a multidimensional sense is discussed. This type of task investigates the tradeoff between speed and accuracy as humans perform tracking tasks. The Fitts' law paradigm is ideal for this research because it includes a metric for task difficulty as well as a measure of capacity (or baud rate) in the accomplishment of a task. Thus, a stressor may affect the capacity to perform a task (in a temporal sense) as well as increase the number of errors that occur. Another advantage of the extended Fitts' law paradigm comes from the information contained in the errors: in this task, six types of errors illustrate when and how performance breaks down. Analysis of these errors indicates how capacity is compromised as subjects are exposed to multiple stress situations. Data from both a learning study and an exhaustion study on G stressors were obtained and used in the analysis.
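For reference, here is a worked sketch of the classical one-dimensional Fitts' law quantities the paradigm builds on: the index of difficulty ID = log2(2D/W) and the capacity (throughput) ID/MT in bits per second. The multidimensional extension and the six error types are not reproduced here, and the trial numbers are hypothetical.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: ID = log2(2D / W)."""
    return math.log2(2.0 * distance / width)

def throughput(distance, width, movement_time_s):
    """Capacity ('baud rate') in bits/s: ID divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical trial: move 200 mm to a 10 mm wide target in 0.85 s.
ID = index_of_difficulty(200, 10)   # ~5.32 bits
TP = throughput(200, 10, 0.85)      # ~6.26 bits/s
print(f"ID = {ID:.2f} bits, throughput = {TP:.2f} bits/s")
```

A stressor that slows movement (larger MT) or forces wider effective targets (larger W through errors) shows up directly as reduced throughput, which is what makes the metric useful for stress studies.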
{"title":"A cognitive performance task for evaluation of human performance based on an extended Fitts' law paradigm","authors":"D. Repperger, E. Scarborough, L. Tripp","doi":"10.1109/NAECON.1991.165853","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165853","url":null,"abstract":"Summary form only given. A task involving an extension of the classical Fitts' law paradigm in a multidimensional sense is discussed. This type of task investigates the tradeoffs of speed to accuracy as humans perform tracking tasks. The Fitts' law paradigm is ideal for this research in the sense that it includes a metric to evaluate task difficulty as well as a measure of capacity (or baud rate) in the accomplishment of a task. Thus, a stressor may affect the capacity to perform a task (in a temporal sense) as well as increase the amount of errors that occur. Another advantage of using an extended Fitts' law paradigm is from the information contained in the errors. In this task, six types of errors illustrate when and what breaks down. Analysis of these errors indicates how the capacity is compromised as the subjects are exposed to multiple stress situations. Data from both a learning study and an exhaustion study on G stressors were obtained and used in the analysis.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126292016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualization architecture for now and the future
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165865
D. M. Pfeiffer
It is pointed out that modern workstations, such as the VITec Image Computing System, are beginning to address the performance and functionality issues involved in interactive visualization within an integrated, cost-effective environment. Visualization, which combines computer graphics with imaging, enables users to visualize both image and non-image data and to integrate images with text and graphics. This type of interactive computing requires sufficient processing power to handle the large amounts of data involved. In addition, the hardware system is a complete, programmable computer in itself, so the function library for imaging is included within the instructions of the hardware system; the functionality is therefore not hardwired. The speed of the computing system permits more complex functionality than was practical in the past.
{"title":"Visualization architecture for now and the future","authors":"D. M. Pfeiffer","doi":"10.1109/NAECON.1991.165865","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165865","url":null,"abstract":"It is pointed out that modern workstations, such as the VITec Image Computing System, are beginning to address the performance and functionality issues involved with interactive visualization in an integrated, cost-effective environment. Visualization, combining computer graphics with imaging, enables users to visualize both image and non-image data, and to be able to integrate images with text and graphics. This type of interactive computing requires sufficient processing power to handle the large amounts of data involved. In addition, the hardware system is a complete computer in itself, which is programmable, and therefore the function library for imaging is included within the instructions of the hardware system. Thus the functionality is not hardwired. The speed of the computing system permits more complex functionality than was practical in the past.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115594956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HeapGuard, eliminating garbage collection in real-time Ada systems
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165829
S. Harbaugh, B. Wavering
The authors introduce HeapGuard, a heap management system that eliminates the problem of garbage collection in real-time systems. The sources of garbage and traditional garbage collection are reviewed, and the HeapGuard scheme and its optional hardware are presented. It is concluded that, in real-time computer systems, heap memory could be managed using HeapGuard to provide predictable timing and to eliminate storage errors due to fragmentation. If feasible, a coprocessor could be provided to manage the dynamic memory and increase system throughput. If the application program is known to store only small objects, a software-only implementation could be effective. If the application program is known to store many large objects in dynamic memory, the coprocessor could provide and manage a set of pointers to these large objects. If the program stores many small and many large objects, a software-only heap for the small objects and an address-pointer-based heap for the large objects could be provided.
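The abstract does not detail HeapGuard's internals. As a minimal sketch of the general idea behind the small-object case, a preallocated pool of fixed-size blocks managed through a free list gives constant-time allocate and release with no fragmentation and no collection pauses; this is illustrative only, and the block size, count, and interface are arbitrary assumptions rather than HeapGuard's actual design.

```python
class FixedBlockHeap:
    """Preallocated pool of equal-size blocks; O(1) predictable operations."""

    def __init__(self, block_size, block_count):
        self.block_size = block_size
        self.pool = bytearray(block_size * block_count)  # storage, reserved up front
        self.free_list = list(range(block_count))        # indices of free blocks

    def allocate(self):
        if not self.free_list:
            raise MemoryError("heap exhausted")          # bounded, predictable failure
        return self.free_list.pop()                      # O(1), no search, no compaction

    def release(self, index):
        self.free_list.append(index)                     # O(1); block is reused as-is

heap = FixedBlockHeap(block_size=64, block_count=1024)
blk = heap.allocate()
heap.release(blk)
```

Because every block is the same size, a released block always fits a future request exactly, which is how fragmentation (and the garbage collection it eventually forces) is avoided.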
{"title":"HeapGuard, eliminating garbage collection in real-time Ada systems","authors":"S. Harbaugh, B. Wavering","doi":"10.1109/NAECON.1991.165829","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165829","url":null,"abstract":"The authors introduce HeapGuard, a heap management system which eliminates the problem of garbage collection in real-time systems. The sources of garbage and traditional garbage collection are reviewed. The HeapGuard scheme and optional hardware are presented. It is concluded that, in real-time computer systems, heap memory could be managed using HeapGuard to provide predictable timing and eliminate storage errors due to fragmentation. If feasible, a coprocessor could be provided to manage the dynamic memory and increase system throughput. If the application program is known to store only small objects, then a software-only implementation could be effective. If the application program is known to store many large objects in dynamic memory then the coprocessor could provide and manage a set of pointers to these large objects. If the program stores many small and many large objects, then a software-only heap for the small objects and an address pointer based heap for large objects could be provided.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132654059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnostics and maintenance testing - a case for relevancy
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165913
R. Krupa, D. Kennedy
The authors advocate a back-to-basics approach to diagnostics and testing, stressing the importance of establishing goals that are directly relevant to operational capabilities and constraints. The evolution of support methods and concepts has resulted in the application of technology to diagnostics and testing without first establishing the operational relevancy of such support. This trend is insensitive to the fact that maintenance testing is not the primary mission of operational users and that ineffective testing can be more a liability than an asset to the mission. Without relevant objectives for support tasks, their development can be dominated by technical challenge, and the result can be overly complex, ineffective support. The authors highlight a fundamental approach to design and diagnostics for selecting and balancing maintenance tasks and optimizing operational support.
{"title":"Diagnostics and maintenance testing-a case for relevancy","authors":"R. Krupa, D. Kennedy","doi":"10.1109/NAECON.1991.165913","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165913","url":null,"abstract":"The authors advocate diagnostics and testing in a back to basics way by stressing the importance of establishing goals which are absolutely relevant to operational capabilities and constraints. The evolution of support methods and concepts resulted in the application of technology to diagnostics and testing without first establishing the operational relevancy for such support. The trend is insensitive to the fact that maintenance testing is not the primary mission of operational users and that ineffective testing can be more a liability than an asset to the mission. Without relevant objectives for support tasks, their development can be dominated by technical challenge. The result can be overly complex, ineffective support. The authors highlight a fundamental approach to design and diagnostics to select and balance maintenance tasks and optimize operational support.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131889099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Logistics engineering workstations for concurrent engineering applications
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165922
M. H. Awtry, A. B. Calvo, C. J. Debeljak
The authors introduce a logistics engineering workstation based on the Air Force's R&M 2000 concepts and consistent with the DoD's concurrent engineering and computer-aided acquisition and logistic support (CALS) initiatives. The workstation uses the framework of a decision support system (DSS) to evaluate both the R&M (reliability and maintainability) and logistics impacts of weapon system design or modification on overall system performance. Thus, the objectives of R&M 2000 can be effectively managed within the concurrent engineering process.
{"title":"Logistics engineering workstations for concurrent engineering applications","authors":"M. H. Awtry, A. B. Calvo, C. J. Debeljak","doi":"10.1109/NAECON.1991.165922","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165922","url":null,"abstract":"The authors introduce a logistics engineering workstation based on the Air Force's R&M 2000 concepts and consistent with DoD's concurrent engineering and computer-aided acquisition and logistic support (CALS) initiatives. The workstation uses the framework of a decision support system (DSS) to evaluate both the R&M (reliability and maintenance) and logistics impacts of weapon system design or modification on overall system performance. Thus, the objectives of R&M 2000 can be effectively managed within the concurrent engineering process.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131749636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedding training in a system
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165869
W. F. Jorgensen
A review was conducted of US Air Force, Army, and Navy studies that attempted to establish procedures and guidelines for the development of embedded training (ET). The review showed that the studies to date have not provided an analysis process that can examine early design data and state with any precision what ET is required and what its content should be. Based on this review and some internal company research, the characteristics required of a process to determine and specify ET were developed. A conceptual procedure for conducting the analysis of ET requirements is presented; it incorporates procedures from published Army and Navy studies. The conclusion is that, by combining the procedures from these efforts, a process can be demonstrated that defines ET requirements early enough to be included in system design.
{"title":"Embedding training in a system","authors":"W. F. Jorgensen","doi":"10.1109/NAECON.1991.165869","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165869","url":null,"abstract":"A review was conducted of US Air Force, Army, and Navy studies which attempted to establish procedures and guidelines for the development of embedded training (ET). The review showed that the studies to date have not provided an analysis process which can examine early design data, and state with any precision what ET is required and what its content should be. Based on this review and some internal company research, characteristics required of a process to determine and specify ET were developed. A conceptual procedure for conducting the analysis of ET requirements is presented. This concept incorporates procedures from Army and Navy published studies. The conclusion is that, by combining the procedures from these endeavors, a process can be demonstrated which will define ET requirements early enough to include in system design.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126944073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The care and feeding costs of a maturing software process
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165828
G. Scott, A. L. Hughes
Three major components of the software process have been defined: people (human relations), including skills acquisition, compensation, and incentives and rewards; technology, including CASE (computer-aided software engineering); and management, including policies, procedures, standards, and practices. The authors explore the overt and hidden costs of these components in three phases: the initial nonrecurring costs of acquiring and installing these capabilities as the organization grows from an initial level of process maturity; the recurring costs associated with maintaining the installed systems used in a well-defined software process organization; and the continuing costs associated with technology tracking and insertion to maintain a state of the practice consistent within an organization using a managed or optimizing software development process.
{"title":"The care and feeding costs of a maturing software process","authors":"G. Scott, A. L. Hughes","doi":"10.1109/NAECON.1991.165828","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165828","url":null,"abstract":"Three major components of the software process have been defined: human relations or people, such as skills acquisition, compensation, incentives and rewards; technology, including CASE (computer-aided software engineering); and management, policies, procedures, standards and practices. The authors explore the overt and hidden costs of these components in three phases: the initial non-recurring costs to acquire and install these capabilities as the organization grows from an initial level of process maturity; the recurring costs associated with maintenance of the installed systems used in a well-defined software process organization; and the continuing costs associated with technology tracking and insertion to maintain a state-of-the-practice that is consistent within an organization using a managed or optimizing software development process.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123153706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FFT processing of direct sequence spreading codes using modern DSP microprocessors
Pub Date: 1991-05-20 · DOI: 10.1109/NAECON.1991.165729
R. G. Davenport
It is illustrated how a DSP (digital signal processing) microprocessor-based module can be used for rapid identification and acquisition of the GPS (Global Positioning System) C/A codes. Specifically, the author describes the use of Inmos' VecTram DSP module, which consists of an Inmos 32-b floating-point transputer and Zoran's 32-b floating-point vector signal processor (VSP). The algorithm correlates a pseudorandom sequence in the frequency domain using the fast Fourier transform. The frequency-domain correlation algorithm is shown to be considerably more efficient than time-domain correlation, assuming that the time-domain correlation is performed with hardware of comparable performance. The author explores the philosophy of the technique and provides examples of the algorithm and results; a mathematically rigorous description is not provided.
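As a rough illustration of the frequency-domain approach (in NumPy, not the VecTram implementation), the sketch below circularly correlates noisy received samples against a local pseudorandom replica via the FFT and recovers the code phase from the correlation peak. A random +/-1 sequence of length 1023 stands in for a real GPS C/A code, and the noise level and delay are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fft_correlate(received, replica):
    """Circular cross-correlation via FFT: IFFT(FFT(x) * conj(FFT(code)))."""
    return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(replica)))

code = rng.choice([-1.0, 1.0], size=1023)      # toy PRN standing in for a C/A code
delay = 337                                    # unknown code phase to recover
received = np.roll(code, delay) + 0.5 * rng.normal(size=code.size)

corr = np.abs(fft_correlate(received, code))
print(int(np.argmax(corr)))                    # peak at the code phase: 337
```

The efficiency claim follows from operation counts: direct time-domain correlation over all N lags costs on the order of N^2 multiply-accumulates, while the FFT route costs on the order of N log N, a large saving at N = 1023.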
{"title":"FFT processing of direct sequence spreading codes using modern DSP microprocessors","authors":"R. G. Davenport","doi":"10.1109/NAECON.1991.165729","DOIUrl":"https://doi.org/10.1109/NAECON.1991.165729","url":null,"abstract":"It is illustrated how a DSP (digital signal processing)-microprocessor-based module can be used for rapid identification and acquisition of the GPS (Global Positioning Satellite) system C/A codes. Specifically, the author describes the use of Inmos' VecTram DSP module, which consists of an Inmos 32-b floating point transputer and Zoran's 32-b floating point vector signal processor (VSP). The algorithm correlates a pseudorandom sequence in the frequency domain using the fast Fourier transform. The frequency-domain correlation algorithm is shown to be considerably more efficient than time-domain correlation, assuming that the time-domain correlation is performed with hardware of comparable performance The author explores the philosophy of the technique, and provides examples of the algorithm and results. A mathematically rigorous description is not provided.<<ETX>>","PeriodicalId":247766,"journal":{"name":"Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128125891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}