Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145450
W. Luk
The authors consider the use of a nonstandard interpretation to analyze parametrized circuit descriptions, in particular for array-based architectures. Various metrics are employed to characterize the performance trade-offs of generic designs. The objective is to facilitate the comparison of feasible design alternatives at an early stage of development. The research centers on techniques for extracting various performance attributes, such as critical path and latency, from a single generic design representation. The features of this approach include uniformity, modularity, reusability, flexibility, and computerized support.
{"title":"Analysing parametrised designs by non-standard interpretation","authors":"W. Luk","doi":"10.1109/ASAP.1990.145450","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145450","url":null,"abstract":"The authors consider the use of a nonstandard interpretation to analyze parametrized circuit descriptions, in particular for array based architectures. Various metrics are employed to characterize the performance tradeoffs for generic designs. The objective is to facilitate the comparison of feasible design alternatives at an early stage of development. The research centers on techniques for extracting various performance attributes, such as critical path and latency, from a single generic design representation. The features of this approach include-uniformity, modularity, reusability, flexibility, and computerized support.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125548419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145490
I. Reed, M. Shih, T. Truong, E. Hendon, D. Tufts
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical fast Fourier transform (FFT) in terms of accuracy, complexity, and speed. Theorems developed previously for the AFT algorithm are used to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance over certain recent AFT algorithms. A computationally balanced AFT algorithm for Fourier analysis and signal processing is developed. This algorithm does not require complex multiplications. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25% compared with the direct method. This efficient AFT algorithm is shown to be identical to Bruns' original AFT algorithm.
{"title":"A VLSI architecture for simplified arithmetic Fourier transform algorithm","authors":"I. Reed, M. Shih, T. Truong, E. Hendon, D. Tufts","doi":"10.1109/ASAP.1990.145490","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145490","url":null,"abstract":"The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical fast Fourier transform (FFT) in terms of accuracy, complexity and speed. Theorems developed previously for the AFT algorithm are used to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A computationally balanced AFT algorithm for Fourier analysis and signal processing is developed. This algorithm does not require complex multiplications. A VLSI architecture is suggested for this amplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25% over that used by the direct method. This efficient AFT algorithm is shown to be identical to Brun's original AFT algorithm.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
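The abstract names the AFT's key property (Fourier analysis built from number theory rather than trigonometric multiplications) but not its mechanics. As background, the core identity can be sketched as follows: for a zero-mean even signal of known bandwidth, the n-point sample average collects every n-th Fourier coefficient, and Möbius inversion recovers each coefficient individually. This is a minimal sketch of the underlying idea only; Bruns' alternating-average form and the paper's balanced/butterfly refinements are not shown.

```python
from math import cos, pi

def mobius(n):
    """Mobius function mu(n) by trial factorisation."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor -> mu = 0
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def aft_coeffs(f, N):
    """Fourier cosine coefficients a_1..a_N of a zero-mean even
    signal f(t) = sum_k a_k cos(2*pi*k*t), bandlimited to k <= N.
    The n-point average S(n) = (1/n) sum_m f(m/n) equals the sum
    of a_{j*n} over j, which Mobius inversion undoes.  Only
    additions and scalings by 1/n are needed: no multiplications
    by trigonometric constants."""
    def S(n):
        return sum(f(m / n) for m in range(n)) / n
    return [sum(mobius(j) * S(j * n) for j in range(1, N // n + 1))
            for n in range(1, N + 1)]

# Example signal with a_1 = 1.0 and a_3 = 0.5, all others zero.
f = lambda t: cos(2 * pi * t) + 0.5 * cos(6 * pi * t)
coeffs = aft_coeffs(f, 4)       # recovers a_1..a_4
```

Note the sampling requirement this exposes: S(j*n) needs samples at fractions m/(j*n), which is why practical AFT hardware relies on dense sampling or interpolation.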
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145472
Keping Chen, P. Danielsson, Anders Åström
The PASIC prototype chip contains 256*256 photosensors, a linear array of 256 A/D converters, two 256*8-bit shift registers, 256 bit-serial processors, and a 256*256-bit dynamic RAM. It appears to be a viable architecture for low-level vision processing. The processors operate in SIMD mode at 20 MHz. To avoid high-speed transfer of analog data, an A/D converter in the form of a linear array of comparators is used. The architecture of the processing part conforms to the row-parallel output from the A/D converters. A simple but efficient processor, well suited to the special VLSI constraints of the sensor, was designed. The pitch in the present version of PASIC is 30 µm, and it was possible to fit the A/D converter circuitry, the shift register, the ALU, and the memory into this narrow slot. A key factor is the unified structure achieved by extending the memory data bus to all other units within the same column. The versatility of the chip is shown using three algorithms: edge detection, shading correction, and histogram-based thresholding. Each is executed in approximately 10 ms.
{"title":"PASIC. A sensor/processor array for computer vision","authors":"Keping Chen, P. Danielsson, Anders Åström","doi":"10.1109/ASAP.1990.145472","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145472","url":null,"abstract":"The PASIC prototype chip contains 256*256 photosensors, a linear array of 256 A/D converters, two 256 8-bit shift registers, 256 bit-serial processors, and a 256*256 bit dynamic RAM. It appears to be a viable architecture for low-level vision processing. The processors operate in SIMD model at 20 MHz. To avoid high speed transfer of analog data, an A/D converter in the form of a linear array of comparators is used. The architecture of the processing part conforms to the row parallel output from the A/D-converters. A simple but efficient processor excellently suited to the special VLSI constraints of the sensor was designed. The pitch in the present version of PASIC is 30 mu m and it was possible to fit the A/D-converter circuitry, the shift register, the ALU, and the memory into this narrow slot. A key factor is the unified structure achieved by extending the memory data bus to all other units within the same column. The versatility of the chip is shown using three algorithms: edge detection, shading correction, and histogram-based thresholding. Each is executed in approximately 10 ms.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124882403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
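The abstract does not say which histogram-based thresholding method PASIC runs. One standard candidate is Otsu's method, which picks the grey level maximising the between-class variance of the histogram; the sequential sketch below illustrates the technique only and should not be read as PASIC's actual algorithm.

```python
def otsu_threshold(hist):
    """Otsu's method: scan candidate thresholds t and return the
    one maximising the between-class variance w0*w1*(m0 - m1)^2,
    where w0/w1 are the pixel counts and m0/m1 the mean grey
    levels of the two classes split at t."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0
    for t, h in enumerate(hist):
        w0 += h                       # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0               # pixels above t
        if w1 == 0:
            break
        s0 += t * h
        m0 = s0 / w0                  # class means
        m1 = (total_sum - s0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal 8-level histogram: dark peak at level 1, bright peak at 6.
hist = [5, 40, 8, 0, 0, 7, 50, 4]
t = otsu_threshold(hist)              # lands between the two peaks
```

On a column-parallel SIMD array like PASIC, the histogram itself would be accumulated across the 256 column processors before a scan like this picks the threshold.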
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145445
Chen-Mie Wu, R. Owens, M. J. Irwin
The authors present a new template-matching algorithm with good recognition performance. However, this new algorithm exhibits a complex, four-dimensional wavefront architecture. Thus, for VLSI implementation, reduced architectures with fewer connections and processors need to be derived. For this purpose, the authors develop a systematic reduction methodology, consisting of seven steps, for manually mapping wavefront computations from high dimensions to low dimensions. Based on this methodology, the authors derive several two-dimensional architectures for the new template-matching algorithm which are suitable for VLSI implementation, and have simulated one of the architectures on the Intel iPSC/2 hypercube machine.
{"title":"Mapping high-dimension wavefront computations to silicon","authors":"Chen-Mie Wu, R. Owens, M. J. Irwin","doi":"10.1109/ASAP.1990.145445","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145445","url":null,"abstract":"The authors present a new template-matching algorithm with good recognition performance. However, this new algorithm exhibits a complex, four-dimensional, wavefront architecture. Thus, for VLSI implementation, reduced architectures with fewer connections and processors need to be derived. For this purpose, the authors develop a systematic reduction methodology to manually map wavefront computations from high-dimension to low-dimension. This methodology consists of seven steps. Based on this methodology, the authors derive several two-dimensional architectures which are suitable for VLSI implementation for the new template-matching algorithm and have simulated one of the architectures by using the Intel Hypercube Machine iPSC/2.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126429259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145498
D. Pradhan
The author provides an overview of the key features of De Bruijn graph-based VLSI architectures. The advantages of De Bruijn architectures over others, such as the cube and the shuffle-exchange, are discussed, as are the important differences between De Bruijn interconnects and other interconnects. The evolution of the De Bruijn interconnect is described. The FFT architecture and the Viterbi decoder for convolutional codes are examined in detail. The issues of routing and fault tolerance are addressed.
{"title":"Application specific VLSI architectures based on De Bruijn graphs","authors":"D. Pradhan","doi":"10.1109/ASAP.1990.145498","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145498","url":null,"abstract":"The author provides an overview of various key features of De Bruijn graph-based VLSI architectures. The advantages of De Bruijn architectures over such other architectures as cube and shuffle-exchange are discussed. Important differences between De Bruijn interconnects and others are also described. The evolution of the De Bruijn interconnect is described. The FFT architecture and the Viterbi decoder for convolutional codes are examined in detail. The issues of routing and fault tolerance are addressed.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129085702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
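For readers unfamiliar with the interconnect: in the binary De Bruijn graph B(2, n), node u (an n-bit label) has edges to (2u) mod 2^n and (2u+1) mod 2^n, i.e. the label shifted left with a new bit shifted in. This gives constant degree with logarithmic diameter, the combination behind the FFT and Viterbi mappings. A small sketch that checks both properties for n = 3:

```python
from collections import deque

def de_bruijn_graph(n):
    """Directed binary De Bruijn graph B(2, n): node u connects to
    (2u) mod 2^n and (2u+1) mod 2^n (shift in one new bit)."""
    size = 1 << n
    return {u: [(2 * u) % size, (2 * u + 1) % size] for u in range(size)}

def diameter(adj):
    """Largest BFS eccentricity over all source nodes."""
    worst = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

g = de_bruijn_graph(3)                       # 8 nodes
deg_ok = all(len(vs) == 2 for vs in g.values())  # constant out-degree 2
d = diameter(g)                              # diameter equals n = 3
```

Compared with the shuffle-exchange (degree 3, longer routes) and the hypercube (degree n), the fixed degree of 2 with an n-step worst-case route is what makes De Bruijn interconnects attractive for VLSI layout.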
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145467
N. Morgan, J. Beck, P. Kohn, J. Bilmes, E. Allman, J. Beer
The authors have designed and implemented a ring array processor, the RAP, for fast implementation of layered neural network algorithms. The RAP is a multi-DSP system targeted at continuous speech recognition using connectionist algorithms. Four boards, each with four Texas Instruments TMS320C30 DSPs, serve as an array processor for a 68020-based host running a real-time operating system. The overall system is controlled from a Sun workstation via Ethernet. Each board includes 16 MB of dynamic memory (expandable to 64 MB) and 1 MB of fast static RAM. Theoretical peak performance is 128 MFLOPS per board, and test runs with the first working board show a sustained throughput of roughly one-third to one-half of this for algorithms of interest. Software development is aided by a Sun workstation-based command interpreter, tools from the standard C environment, and a library of matrix and vector routines.
{"title":"The RAP: a ring array processor for layered network calculations","authors":"N. Morgan, J. Beck, P. Kohn, J. Bilmes, E. Allman, J. Beer","doi":"10.1109/ASAP.1990.145467","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145467","url":null,"abstract":"The authors have designed and implemented a ring array processor, RAP, for fast implementation of layered neural network algorithms. The RAP is a multi-DSP system targeted at continuous speech recognition using connectionist algorithms. Four boards, each with four Texas Instruments, TMS 320C30 DSPs, serve as an array processor for a 68020-based host running a real-time operating system. The overall system is controlled from a Sun workstation via the Ethernet. Each board includes 16 MB of dynamic memory (expandable to 64 MB) and 1 MB of fast static RAM. Theoretical peak performance is 128 MFLOPS/board, and test runs with the first working board show a sustained throughput of roughly one-third to one-half of this for algorithms of interest. Software development is aided by a Sun workstation-based command interpreter, tools from the standard C environment and a library of matrix and vector routines.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133474074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
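The abstract does not detail how a layer's weights are split across the ring. One common scheme for ring multiprocessors (an illustrative assumption here, not necessarily the RAP's exact partitioning) gives each PE a block of weight rows and rotates activation slices around the ring, so every PE sees the full input without a global bus. A sequential simulation of the idea:

```python
def ring_matvec(W, x, P):
    """Simulate y = W @ x on P processing elements in a ring.
    PE p owns the rows of W in block p and starts with input
    slice p; at each of P steps every PE multiplies its rows
    against the slice it currently holds, then the slices rotate
    one position around the ring."""
    n = len(W)
    assert n % P == 0 and len(x) % P == 0
    rb, cb = n // P, len(x) // P          # row / column block sizes
    slices = [x[p * cb:(p + 1) * cb] for p in range(P)]
    y = [0.0] * n
    for step in range(P):
        for p in range(P):
            src = (p + step) % P          # which slice PE p holds now
            xs = slices[src]
            for i in range(rb):
                row = W[p * rb + i]
                y[p * rb + i] += sum(row[src * cb + j] * xs[j]
                                     for j in range(cb))
    return y

W = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
x = [1, 0, 2, 1]
y = ring_matvec(W, x, P=2)                # matches a direct W @ x
```

Each step overlaps compute with a neighbour-to-neighbour transfer, which is why a ring suffices for the dense matrix-vector products that dominate layered network calculations.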
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145485
R. Lea
ASP (associative string processor) modules are highly versatile parallel processing building blocks for the simple construction of application-specific second-generation massively parallel processors (MPPs). The author discusses the ASP module philosophy, demonstrates how ASP modules can satisfy the market, algorithmic, architectural, and engineering requirements of application-specific MPPs, and reports on current progress in the development of ASP technology. A case example indicates that 1 TOPS/ft³, 1 GOPS/W, and 1 MOPS/$ are reasonable forecast figures of merit for the cost effectiveness of second-generation MPPs built with WSI ASP modules. Comparison with first-generation MPP implementations reveals an advantage of two to three orders of magnitude in favor of the ASP modules.
{"title":"ASP modules: building-blocks for application-specific massively parallel processors","authors":"R. Lea","doi":"10.1109/ASAP.1990.145485","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145485","url":null,"abstract":"ASP (associative string processor) modules comprise highly-versatile parallel processing building-blocks for the simple construction of application-specific second-generation massively parallel processors (MPPs). The author discusses ASP module philosophy, demonstrates how ASP modules can satisfy the market, algorithmic, architectural, and engineering requirements of application-specific MPPs, and reports on current progress in the development of ASP technology. A case example indicates that 1 TOPS/ft/sup 3/, 1 GOPS/W, and 1 MOPS/$ can be reasonably forecast figures-of-merit for the cost effectiveness of second-generation MPPs built with WSI ASP modules. Comparison with first-generation MPP implementations reveals a 2-3 orders-of-magnitude advantage in favor of the ASP modules.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126089721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145502
A. Hiraiwa, M. Fujita, S. Kurosu, S. Arisawa, M. Inoue
The authors present a mesh systolic array, GCN (giga connection), as a fast simulator of artificial neural networks (ANNs). The processor element (PE) of the GCN is composed of Intel's i860 RISC processor, a large local memory, and high-bandwidth first-in first-out (FIFO) devices. The mapping algorithm of an ANN onto the GCN, called net-data partition, is discussed, and the multilayer feedforward network and the Kohonen feature map are mapped onto the GCN using this algorithm. Another form of parallelism, usable for a stochastic ANN such as the Boltzmann machine, is also discussed. The performance of the GCN is evaluated by software simulation, and the authors achieve over 1 gigaconnection per second using 128 PEs.
{"title":"Implementation of ANN on RISC processor array","authors":"A. Hiraiwa, M. Fujita, S. Kurosu, S. Arisawa, M. Inoue","doi":"10.1109/ASAP.1990.145502","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145502","url":null,"abstract":"The authors present a mesh systolic array, GCN (giga connection), for a fast simulator of artificial neural networks (ANNs). The processor element (PE) of the GCN is composed of the RISC processor i-860 designed by Intel Corp., a large scale local memory, and high bandwidth first-in first-out devices. The mapping algorithm of the ANN onto the GCN, called the net-data partition, is discussed, and the multilayer feedforward network and Kohenen feature map are mapped onto the GCN by using this algorithm. Another parallelism that can be used for a stochastic ANN like the Boltzmann machine is also discussed. The performance of the GCN is evaluated by software simulation and the authors achieve over 1 gigaconnection per second using 128 PEs.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129870807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145444
J. Nelson, Arifur Rahman, E. McQuade
A systolic implementation of a Reed-Solomon decoder is presented which, with minor modification, is suitable for BCH and Goppa codes. The operations involved in decoding such codes were analyzed, and the results are described. Systolic array architectures are derived for the various steps, including syndrome calculation, key-equation solution, and error evaluation. Since the throughput of the decoder is effectively determined by the speed of the multipliers, various multiplier architectures are discussed briefly. The architectures presented improve upon previous designs. The result is highly regular and modular, and thus well suited to VLSI implementation.
{"title":"Systolic architectures for decoding Reed-Solomon codes","authors":"J. Nelson, Arifur Rahman, E. McQuade","doi":"10.1109/ASAP.1990.145444","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145444","url":null,"abstract":"A systolic implementation of a Reed-Solomon decoder is presented which with minor modification is suitable for BCH and Goppa codes. The various operations involved in decoding such codes were analyzed and the results are described. Systolic array architectures are derived for the various steps including the syndrome calculation, key equation solution and error evaluation. Since the throughput of the decoder is effectively determined by the speed of the multipliers, various multiplier architectures are discussed briefly. The architectures presented improve upon previous designs. The result is highly regular and modular, and thus it is more suitable for VLSI implementation.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130955107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
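Of the decoding steps listed, syndrome calculation is the easiest to make concrete: syndrome S_i is simply the received polynomial evaluated at the i-th root of the generator, and it is zero for an uncorrupted codeword. A hedged sketch over GF(2^4) with primitive polynomial x^4 + x + 1; the field size, the double-error-correcting parameters, and the non-systematic m(x)*g(x) encoding are illustrative choices, not the paper's design.

```python
# GF(2^4) arithmetic via log/antilog tables, primitive poly x^4 + x + 1.
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a          # duplicate range avoids mod 15
    LOG[a] = i
    a <<= 1
    if a & 0b10000:
        a ^= PRIM

def gf_mul(x, y):
    return 0 if x == 0 or y == 0 else EXP[LOG[x] + LOG[y]]

def poly_eval(p, x):
    """Evaluate polynomial p (lowest-degree coefficient first) at x."""
    acc = 0
    for c in reversed(p):             # Horner's rule; '+' in GF(2^m) is XOR
        acc = gf_mul(acc, x) ^ c
    return acc

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

# Generator for a double-error-correcting RS code:
# g(x) = (x - a)(x - a^2)(x - a^3)(x - a^4); note -a^i == a^i over GF(2).
g = [1]
for i in range(1, 5):
    g = poly_mul(g, [EXP[i], 1])

msg = [3, 0, 7, 11]                   # arbitrary message symbols
code = poly_mul(msg, g)               # non-systematic encoding c = m * g

# Syndromes S_1..S_4: zero for a codeword, nonzero once corrupted.
syndromes = [poly_eval(code, EXP[i]) for i in range(1, 5)]
corrupted = list(code)
corrupted[2] ^= 5                     # inject a single symbol error
bad = [poly_eval(corrupted, EXP[i]) for i in range(1, 5)]
```

In a systolic realization, each Horner step above becomes one multiply-accumulate cell, which is why the abstract singles out multiplier speed as the throughput bottleneck.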
Pub Date: 1990-09-05 · DOI: 10.1109/ASAP.1990.145458
D. Lattard, B. Faure, G. Mazaré
The authors present two applications of a specific cellular architecture: emulation of recall and learning for feedforward neural networks, and parallel image reconstruction. The architecture is based on a two-dimensional array of asynchronous processing elements, the cells, which communicate among themselves by message transfers. Each cell includes a rotating routing part, which carries messages through the array, and a processing part dedicated to a particular application. The processing part must be redesigned for each application, but this specificity yields very fast computation and low complexity, and the architecture can handle algorithms that are not regular enough for SIMD machines. The cellular architecture is described, the feedforward neural network accelerator is introduced, the learning scheme is discussed, and timing figures, evaluated by computer simulation, are given. The image reconstruction problem, its parallelization, results of both functional and behavioral simulations, the realization of the circuit, and some test results are presented.
{"title":"Massively parallel architecture: application to neural net emulation and image reconstruction","authors":"D. Lattard, B. Faure, G. Mazaré","doi":"10.1109/ASAP.1990.145458","DOIUrl":"https://doi.org/10.1109/ASAP.1990.145458","url":null,"abstract":"The authors present two applications of a specific cellular architecture: emulation of the recall and learning for feedforward neural networks and parallel image reconstruction. This architecture is based on a bidimensional array of asynchronous processing elements, the cells, which can communicate between themselves by message transfers. Each cell includes a rotating routing part ensuring the message transportation through the array and a processing part dedicated to a particular application. The specificity of the processing part demands that it be redesigned for each application but leads to very fast computing and low complexity. This architecture can process algorithms not regular enough for SIMD machines. The cellular architecture is described, the feedforward neural network accelerator is introduced, the learning is discussed, and some time performances, evaluated by computer simulation, are given. The image reconstruction problem, its parallelization, some results of both functional and behavioral simulations, the realization of the circuit, and some test results are presented.<<ETX>>","PeriodicalId":438078,"journal":{"name":"[1990] Proceedings of the International Conference on Application Specific Array Processors","volume":"386 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125247579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}