Organizations for cooperating expert systems
C. Grossner, T. Radhakrishnan
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138207
The notion of an organization of expert systems is examined for the case in which many expert systems cooperate to solve an ill-structured problem. An organization is considered to be a pair comprising a coordination structure and an organizational structure, associated with the planning and execution stages, respectively, of the problem-solving process. A case study of three organizations is presented and compared in the context of the distributed blackbox game.
{"title":"Organizations for cooperating expert systems","authors":"C. Grossner, T. Radhakrishnan","doi":"10.1109/SSST.1990.138207","DOIUrl":"https://doi.org/10.1109/SSST.1990.138207","url":null,"abstract":"The notion of an organization of expert systems is examined for the case that many expert systems cooperate to solve an ill-structured problem. An organization is considered to be a pair comprising a coordination structure and an organizational structure which are associated with the planning and execution stages, respectively, of the problem-solving process. A case study of three organizations is presented and compared in the context of the distributed blackbox game.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128074052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Throughput and delay analysis for a new efficient CSMA/CD based protocol
H. Barghi, J. Bredeson, W. Cronenwett
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138140
The authors describe a two-channel reservation network with priority access (TCRN/PA) that uses both channels and advance reservation to increase network throughput with shorter delay than carrier-sense multiple access with collision detection (CSMA/CD). A contention channel (C-channel) carries the reservation broadcasts, while a data channel (D-channel) delivers the transmitted data. Each D-channel transmission is restricted to a limited number of data packets so that no single station dominates the channel. Each station limits the number of unsuccessful attempts to transmit each C-channel request packet to 20. Data packets queued at each station for transmission are ranked by arrival time, with older data sent at higher priority. The performance of TCRN/PA is compared with that of CSMA/CD-based, SRMA, ring, and polling networks for identical system bandwidths and is found to give greater throughput, as well as shorter delay, particularly at high throughputs.
{"title":"Throughput and delay analysis for a new efficient CSMA/CD based protocol","authors":"H. Barghi, J. Bredeson, W. Cronenwett","doi":"10.1109/SSST.1990.138140","DOIUrl":"https://doi.org/10.1109/SSST.1990.138140","url":null,"abstract":"The authors describe a two-channel reservation network with priority access (TCRN/PA) that uses both channels and advance reservation to increase network throughput with shorter delay than carrier-sense multiple access with collision detection (CSMA-CD). A contention channel (C-channel) carriers the reservation broadcasts while a data channel (D-channel) delivers the transmitted data. Each D-channel transmission is restricted to a limited number of data packets so that no one station dominates the channel. Each station limits the number of unsuccessful attempts to transmit each C-channel request packet to 20. Data packets sent to each station for transmission are ranked according to their arrival time, with older data sent at higher priority. The performance of TCRN/PA is compared with that of CSMA-CD-based networks, such as SRMA and ring and polling networks, for identical system bandwidths and is found to give greater throughput, as well as shorter delay, particularly at high throughputs.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128129870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two VLSI structures for implementing the gray level co-occurrence method
A. Barbir, C. T. Ng, D. Teague
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138162
The gray-level co-occurrence (GLC) method is a powerful texture-analysis technique that computes several GLC matrices over subregions of an image to measure its textural qualities. The method is not well suited to real-time image analysis and pattern recognition because of its high computation time. The authors propose a systolic array and a parallel architecture for evaluating the algorithm in optimal time. Novel features of the structures include the minimization of intermediate I/O operations and the use of currently available hardware devices. The architectures are time-optimal and are suitable for algorithm partitioning.
{"title":"Two VLSI structures for implementing the gray level co-occurrence method","authors":"A. Barbir, C. T. Ng, D. Teague","doi":"10.1109/SSST.1990.138162","DOIUrl":"https://doi.org/10.1109/SSST.1990.138162","url":null,"abstract":"The gray-level co-occurrence (GLC) method is a powerful technique that computes several GLC matrices on subregions of an image to measure its textural qualities. The method is not suitable for real-time image analysis and pattern recognition because of its high compute time. The authors propose a systolic array and a parallel architecture for evaluating the algorithm in an optimum time. Novel features of the structures include the minimization of intermediate I/O operations and the use of current existing hardware devices. The architectures are time optimal and are suitable for algorithm partitioning.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134202754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel implementation of analytic data fusion
P. B. Davis, J. Spears, M. Abidi
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138209
A description is given of an uncertainty-based, parallel data fusion approach that has been developed and tested. The fusion algorithm is based on the interaction of two constraints: the principle of knowledge-source corroboration, which tends to maximize the final belief in a given proposition (often modeled by a probability density function or fuzzy membership distribution) if either knowledge source supports the occurrence of the proposition; and the principle of belief enhancement/withdrawal, which adjusts the belief of one knowledge source according to the belief of a second knowledge source by maximizing the similarity between the two source outputs. The two principles are combined by maximizing a positive linear combination of the two constraints, related through a fusion function that is to be determined. The method was implemented on an NCUBE hypercube parallel computer.
{"title":"Parallel implementation of analytic data fusion","authors":"P. B. Davis, J. Spears, M. Abidi","doi":"10.1109/SSST.1990.138209","DOIUrl":"https://doi.org/10.1109/SSST.1990.138209","url":null,"abstract":"A description is given of an uncertainty and parallel data fusion approach that has been developed and tested. This fusion algorithm is based on the interaction of two constraints: the principle of knowledge source corroboration, which tends to maximize the final belief in a given proposition (often modeled by a probability density function or fuzzy membership distribution) if either of the knowledge sources supports the occurrence of the proposition; and the principle of belief enhancement/withdrawal which adjusts the belief of one knowledge source according to the belief of a second knowledge source by maximizing the similarity between the two source outputs. These two principles are combined by maximizing a positive linear combination of these two constraints related by a fusion function, to be determined. The implementation of this method was performed on an NCUBE hypercube parallel computer.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132400748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical programming for the transputer
B. Abbott, C. Biegl, R. Souder, T. Bapty, J. Sztipanovits
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138118
The Multigraph programming environment provides a very high-level programmer interface for the development of parallel and real-time processing systems. It is specifically targeted at large systems that need to integrate knowledge-based synthesis techniques with standard numerical techniques. The result is a graphical editing environment in which the user models the structure of the desired computation. Symbolic techniques are then used to translate this model into a large-grain data-flow graph. A description is given of the concepts and use of the Multigraph programming environment on a tightly coupled parallel processing platform, the INMOS transputer.
{"title":"Graphical programming for the transputer","authors":"B. Abbott, C. Biegl, R. Souder, T. Bapty, J. Sztipanovits","doi":"10.1109/SSST.1990.138118","DOIUrl":"https://doi.org/10.1109/SSST.1990.138118","url":null,"abstract":"The Multigraph programming environment provides a very-high-level programmer interface for the development of parallel and real-time processing systems. It is specifically targeted for large systems wishing to integrate a knowledge-based synthesis technique with standard numerical techniques. The result is a graphical editing environment where the user models the structure of the desired computation. Subsequently, symbolic techniques are used to translate this model to a large-grain data-flow graph. A description is given of the concepts and use the Multigraph programming environment on a tightly coupled parallel processing platform, the INMOS transputer.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131711694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On hypercube reliability
S. Rai
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138163
Techniques for generating approximate measures of terminal and network reliability in a hypercube architecture are described. First, the author considers the total number of s-t paths of cardinality H(s,t) and H(s,t)+2, where H(s,t) is the Hamming distance between source s and terminal t, and derives a bound on terminal reliability. Various theorems that help arrive at the solution are stated and proved. Second, using the concept of the degree matrix for B_n, the author presents a method to obtain the total number of spanning trees in B_n and hence an approximate measure of network reliability.
{"title":"On hypercube reliability","authors":"S. Rai","doi":"10.1109/SSST.1990.138163","DOIUrl":"https://doi.org/10.1109/SSST.1990.138163","url":null,"abstract":"Techniques for generating approximate measures for terminal and network reliability in a hypercube architecture are described. First, the author considers the total number of s-t paths of cardinability H(s,t) and H(s,t)+2, where H(s,t) represents the Hamming distance between source s and terminal t, and generates a bound on terminal reliability. Various theorems which help arrive at the solution are stated and proved. Second, utilizing the concept of degree matrix (for B/sub n/), the author presents a method to obtain the total number of spanning trees in B/sub n/ and hence an approximate measure for network reliability.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"670 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132366768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VLSI implementation of moment invariants for automated inspection
G. A. Armstrong, M. L. Simpson, D. Bouldin
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138197
The design of a VLSI ASIC (application-specific integrated circuit) for use in automated inspection is described. The inspection scheme uses M. K. Hu's (1962) and S. Maitra's (1979) moment-invariant algorithms. A prototype design was generated that resolves the long delay time of the multiplier by custom-designing adder cells based on the Manchester carry chain. The prototype ASIC is currently being fabricated in 2.0-µm CMOS technology and has been simulated at 20 MHz. The final ASICs will be used in parallel at the board level to achieve the 230 MOPS necessary to compute moment invariants in real time on 512x512-pixel images with 256 gray levels.
Timing margin examination using laser probing technique
H.K. Brown, G. Fuller, M. S. Clamme
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138176
A laser probing procedure has been developed to examine the timing margin of signal paths in complex CMOS devices. In the procedure, current injected at one of a logic gate's transistor drains increases the propagation delay of that gate. This occurs because the increased drain current reduces the rate of charge transfer between the logic gate and its output load. Using an indirect measurement scheme, a curve depicting laser-induced propagation delay as a function of illumination is generated experimentally. This curve is then analyzed to determine whether the examined signal path has critical timing.
{"title":"Timing margin examination using laser probing technique","authors":"H.K. Brown, G. Fuller, M. S. Clamme","doi":"10.1109/SSST.1990.138176","DOIUrl":"https://doi.org/10.1109/SSST.1990.138176","url":null,"abstract":"A laser probing procedure has been developed to examine the timing margin of signal paths in complex CMOS devices. In the procedure, injected current at one of the logic gate's transistor drains increases the propagation delay of the logic gate. This occurs because increased current at the transistor drain decreases the rate of charge transfer between the logic gate and its output load. By use of an indirect measurement scheme, a curve depicting laser-induced propagation delay as a function of illumination is experimentally generated. This curve is then analyzed to determine whether or not the examined signal path has critical timing.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116080183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurement of small capacitances using phase measurement
S. Natarajan, B.K. Herman
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138111
A method for measuring small capacitance values without expensive test setups is presented. The method uses phase measurement with standard laboratory equipment. In the process of measuring the capacitance, the loss factor of the capacitor is also determined. The method lends itself to measuring a wide range of capacitances over a wide frequency range.
{"title":"Measurement of small capacitances using phase measurement","authors":"S. Natarajan, B.K. Herman","doi":"10.1109/SSST.1990.138111","DOIUrl":"https://doi.org/10.1109/SSST.1990.138111","url":null,"abstract":"A method for measuring small capacitance values without expensive test setups is presented. This method utilizes phase measurement with standard laboratory equipment. In the process of measuring the capacitance, the loss factor of the capacitance is also determined. This method lends itself to measuring a wide range of capacitances over a wide frequency range.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"427 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115657856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesis of minimum roundoff noise structures for extended state-space digital filter implementations
B. Bomar, L. M. Smith
Pub Date: 1990-03-11. DOI: 10.1109/SSST.1990.138203
An algorithm is developed for minimizing roundoff noise in extended state-space (e-state) realizations of recursive digital filters in which the order of the e-state equation is 2. It is shown that previous efforts to minimize roundoff noise in e-state structures have not provided a global minimum. The algorithm applies an unconstrained transformation matrix to an arbitrary starting state-space structure to produce an intermediate structure; a second matrix then transforms the intermediate structure to e-state form. A conjugate-gradient optimization scheme determines the coefficients of the first matrix that minimize the roundoff noise gain of the e-state structure produced by the second transformation. A numerical example illustrates that orders-of-magnitude improvement over previous results can be achieved with this approach.
{"title":"Synthesis of minimum roundoff noise structures for extended state-space digital filter implementations","authors":"B. Bomar, L. M. Smith","doi":"10.1109/SSST.1990.138203","DOIUrl":"https://doi.org/10.1109/SSST.1990.138203","url":null,"abstract":"An algorithm for minimizing roundoff noise in extended state-space (e-state) realizations of recursive digital filters, where the order of the e-state equation is 2, is developed. It is shown that previous efforts to minimize roundoff noise in e-state structures have not provided a global minimum. The algorithm presented applies an unconstrained transformation matrix to an arbitrary starting state-space structure to produce an intermediate structure. A second matrix transforms the intermediate structure to e-state form. A conjugate-gradient optimization scheme is used to determine the coefficients of the first matrix that minimize the roundoff noise gain of the e-state structure produced by the second transformation. A numerical example illustrates that orders-of-magnitude improvement over previous results can be achieved with this approach.<<ETX>>","PeriodicalId":201543,"journal":{"name":"[1990] Proceedings. The Twenty-Second Southeastern Symposium on System Theory","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121970407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}