Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853803
Ghislain Takam Tchendjou, Rshdee Alhakim, E. Simeu
This paper presents a novel methodology for objective image quality assessment (IQA) based on fuzzy logic (FL). The main purpose is to automatically assess image quality in agreement with human visual perception. The attributes used (quality metrics) and evaluation criteria (human-rated mean opinion scores, MOS) are extracted from the TID2013 image quality database. The fuzzy model design starts by selecting the most independent attributes, applying Pearson's correlation approach and seeking the metrics most correlated with the corresponding MOS. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is then applied to construct an objective fuzzy model able to efficiently predict image quality correlated with the subjective MOS. Different fuzzy models are produced by modifying certain ANFIS configurations, and the ANFIS model that provides high prediction accuracy and stability is selected while taking its implementation complexity into account. The overall architecture of the selected FL model consists of four input metrics, two bell-shaped membership functions associated with each input metric, two fuzzy if-then rules, two linear combination equations, and one output giving the image quality score. Finally, the performance of the proposed fuzzy model is compared with IQA models produced by other machine learning methods; the simulation results demonstrate that the fuzzy logic model behaves highly stably and shows the best agreement with human visual perception.
Title: "Fuzzy logic modeling for objective image quality assessment" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 98-105
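The selected architecture (four inputs, two bell-shaped membership functions per input, two if-then rules, two linear output equations) is a first-order Takagi-Sugeno model. A minimal sketch of that inference scheme, with illustrative membership parameters and rule coefficients that are not the paper's fitted values:

```python
import numpy as np

def bell_mf(x, a, b, c):
    # generalized bell membership function: 1 / (1 + |(x - c)/a|^(2b))
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sugeno_predict(x, rules):
    """First-order Sugeno inference.

    rules: list of (mfs, coeffs, bias), where mfs holds one (a, b, c)
    bell-parameter triple per input. Each rule's firing strength is the
    product of its per-input memberships; the output is the
    strength-weighted average of the linear rule outputs.
    """
    x = np.asarray(x, dtype=float)
    num = den = 0.0
    for mfs, coeffs, bias in rules:
        w = np.prod([bell_mf(xi, a, b, c) for xi, (a, b, c) in zip(x, mfs)])
        num += w * (float(np.dot(coeffs, x)) + bias)
        den += w
    return num / den

# two rules over four inputs, as in the selected model (toy parameters)
rules = [([(2.0, 2.0, 0.0)] * 4, [0.0, 0.0, 0.0, 0.0], 5.0),
         ([(2.0, 2.0, 1.0)] * 4, [0.0, 0.0, 0.0, 0.0], 5.0)]
```

In ANFIS training, both the bell parameters and the linear coefficients would be fitted against the MOS targets; here they are fixed constants for illustration only.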
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853804
M. Biglari-Abhari
Using hardware architectures to improve performance and energy efficiency has been a key factor for application-specific optimisations. The latest Field Programmable Gate Arrays (FPGAs) can not only be used as reconfigurable hardware platforms; they also provide hard-core processors and other hard IP blocks on the same chip to implement multiprocessor systems-on-chip, which can be tuned to the target application's characteristics. In this session, the first two papers present the challenges and optimisations involved in using FPGA-based hardware architectures for wireless communication systems. In addition, an investigation of crosstalk effects on the energy consumption of the Network-on-Chip, the main interconnection network in multiprocessor systems-on-chip, is presented.
Title: "Session 4: Advanced hardware architectures" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), p. 106
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853800
Benjamin Tan, M. Biglari-Abhari, Z. Salcic
Embedded systems are becoming increasingly complex as designers integrate different functionalities into a single application for execution on heterogeneous hardware platforms. In this work, we propose a system-level security approach that provides isolation of tasks without the need to trust a central authority at run-time. We discuss security requirements found in complex embedded systems that use heterogeneous execution platforms, and by regulating memory access we create mechanisms that allow safe use of shared IP with direct memory access, as well as shared libraries. We also present a prototype Isolation Unit that checks memory transactions and allows dynamic configuration of permissions.
Title: "A system-level security approach for heterogeneous MPSoCs" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 74-81
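The memory-regulation idea behind such an Isolation Unit can be sketched as a table-driven transaction check: each bus master holds a set of (base, limit, rights) regions, every transaction is checked against them, and the table can be reconfigured at run-time. The class and method names below are hypothetical, not the paper's interface:

```python
READ, WRITE = 1, 2  # access-right bit flags

class IsolationUnit:
    """Toy model of a permission-checking unit on the memory path."""

    def __init__(self):
        self.table = {}  # master id -> list of (base, limit, rights)

    def grant(self, master, base, limit, rights):
        # dynamic configuration: add an allowed region for a master
        self.table.setdefault(master, []).append((base, limit, rights))

    def revoke(self, master):
        # dynamic configuration: strip all of a master's permissions
        self.table.pop(master, None)

    def check(self, master, addr, access):
        # a transaction passes only if it lands inside a region that
        # grants the requested access type to this master
        return any(base <= addr <= limit and (rights & access)
                   for base, limit, rights in self.table.get(master, ()))
```

A hardware realisation would perform the same comparison in parallel over a small region table on every bus transaction; the point of the sketch is only the policy, not the micro-architecture.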
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853789
E. Juárez
This session should appeal to researchers interested in the latest implementations of the HEVC standard on embedded platforms. The four papers cope with performance, energy, and complexity issues. Two of them deal with the scalable extension from different points of view: one presents an efficient parallel architecture, while the other analyses the trade-off between energy consumption and quality. Encoding complexity estimates are discussed in the third paper. The last paper returns to the energy efficiency issue using a task-based programming model.
Title: "Session 1: HEVC in embedded systems" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), p. 1
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853790
Ronan Parois, W. Hamidouche, E. Mora, M. Raulet, O. Déforges
The High Efficiency Video Coding (HEVC) standard meets new video-quality demands such as Ultra High Definition (UHD). Its scalable extension (SHVC) allows simultaneous encoding of different representations of a video, organised in layers. Thanks to inter-layer predictions, SHVC provides bit-rate savings compared to the equivalent HEVC simulcast encoding. SHVC therefore seems a promising solution for both broadcast and storage applications. This paper proposes a multi-layer architecture of pipelined software HEVC encoders with two main settings: a live setting for real-time encoding and a file setting for higher-fidelity encoding. The proposed architecture provides a good trade-off between coding rate and coding efficiency, achieving real-time performance on 1080p60 and 1600p30 sequences with 2× spatial scalability. Moreover, experimental results show more than 26× and 300× speed-ups for the file and live settings, respectively, with respect to the scalable reference software (SHM) in an intra-only configuration.
Title: "Efficient parallel architecture of an intra-only scalable multi-layer HEVC encoder" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 11-17
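The benefit of pipelining the layer encoders can be illustrated with a toy throughput model: in steady state a pipeline emits one frame per slowest-stage interval, whereas a sequential encoder needs the sum of all stage times. The per-frame stage times below are invented for illustration and are not measurements from the paper:

```python
def pipeline_frame_time(stage_times_ms):
    # steady state: one frame completes every max-stage interval
    return max(stage_times_ms)

def sequential_frame_time(stage_times_ms):
    # without pipelining, every frame traverses all stages back-to-back
    return sum(stage_times_ms)

# hypothetical per-frame times (ms) for base-layer, enhancement-layer,
# and bitstream-multiplexing stages of a two-layer encoder
stages = [12.0, 16.0, 9.0]
speedup = sequential_frame_time(stages) / pipeline_frame_time(stages)
```

In this toy, the pipelined encoder sustains one frame every 16 ms (just inside the 16.7 ms budget of 60 fps), while the sequential version would need 37 ms per frame; the real encoder additionally parallelises within each layer.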
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853822
Trevor Gee, P. Delmas, Sylvain Joly, Valentin Baron, R. Ababou, J. Nezan
This work describes a lightweight dedicated system capable of generating, in real time, a sequence of depth-maps computed from image streams acquired from a synchronized pair of GoPro HERO 3+ cameras. The envisioned purpose is to capture depth-maps from mid-sized drones for computer vision applications (e.g. surveillance and management of ecosystems). The implementation is of modular design, consisting of a dedicated camera synchronisation box, a fast lookup-based rectification system, a block-matching-based dense correspondence finder that uses dynamic programming, and a simple disparity-to-depth conversion module. The final output is transmitted to a server via a Wi-Fi or 4G LTE cellular Internet connection for further processing. The complete pipeline is implemented on an Android tablet. The main novelty is the system's ability to operate on small portable devices while retaining reasonable quality and real-time performance for outdoor applications. Our experimental results in estuary, forestry, and dairy farming environments support this claim.
Title: "A dedicated lightweight binocular stereo system for real-time depth-map generation" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 215-221
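Under the standard rectified pinhole stereo model, a disparity map converts to depth via Z = f·B/d. A minimal sketch of brute-force SAD block matching on one scanline plus that conversion; the block size, search range, and camera parameters are illustrative, not the system's (the paper's matcher additionally uses dynamic programming):

```python
def best_disparity(left_row, right_row, x, block=3, max_d=16):
    """Pick the disparity minimising sum-of-absolute-differences (SAD)
    between a block around x in the left row and shifted candidates in
    the right row (rectified images, so the search is 1-D)."""
    half = block // 2
    ref = left_row[x - half : x + half + 1]
    best, best_cost = 0, float("inf")
    for d in range(0, min(max_d, x - half) + 1):
        cand = right_row[x - d - half : x - d + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # rectified stereo relation: Z = f * B / d
    if disparity_px <= 0:
        return float("inf")  # no match / point at infinity
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 800 px focal length and 10 cm baseline, a 4 px disparity corresponds to a 20 m range, which is the kind of working distance relevant to drone-based surveys.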
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853812
D. Madroñal, R. Lazcano, H. Fabelo, S. Ortega, G. Callicó, E. Juárez, C. Sanz
In this paper, a study of the parallel exploitation of a Support Vector Machine (SVM) classifier with a linear kernel running on a Massively Parallel Processor Array (MPPA) platform is presented. This platform comprises 256 cores working in parallel, grouped in 16 clusters. The main objective of the research has been to develop an optimal implementation of the SVM classifier on an MPPA platform while analyzing the architectural bottlenecks of the hyperspectral image classifier. Experimenting with medical images, the parallelization of the SVM classification has been conducted using three strategies: i) single- and multi-core processing, ii) single- and multi-cluster analysis, and iii) single- and double-buffer execution. As a result, an average core processing speedup of 11.8 has been achieved when parallelizing the SVM classification process in a single cluster. However, since data communication accounts for 34.7% of the total execution time in the sequential case, the total speedup is upper-bounded at 2.9. Using a double-buffer methodology, a total speedup of 2.84 has been achieved on a single cluster. Finally, the feasibility of a portable version of a linear SVM has been demonstrated.
Title: "Hyperspectral image classification using a parallel implementation of the linear SVM on a Massively Parallel Processor Array (MPPA) platform" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 154-160
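The 2.9 upper bound follows directly from Amdahl's law: if 34.7% of the sequential time is non-overlapped communication, no amount of compute parallelism can push the total speedup past 1/0.347 ≈ 2.88. A small check of that arithmetic, also combining the bound with the measured 11.8× core speedup:

```python
def amdahl_speedup(serial_fraction, parallel_speedup=float("inf")):
    # Amdahl's law: overall speedup when a fraction s of the work is
    # serial and the rest is accelerated by parallel_speedup
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / parallel_speedup)

bound = amdahl_speedup(0.347)          # communication-only limit, ~2.88
achieved = amdahl_speedup(0.347, 11.8) # with the measured 11.8x compute speedup
```

The double-buffer result of 2.84 sits close to the 2.88 ceiling precisely because double buffering overlaps communication with computation, removing most of the serial fraction's cost.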
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853801
Lin Li, Tiziana Fanni, T. Viitanen, Renjie Xie, F. Palumbo, L. Raffo, H. Huttunen, J. Takala, S. Bhattacharyya
Dataflow modeling techniques facilitate many aspects of design exploration and optimization for signal processing systems, such as efficient scheduling, memory management, and task synchronization. The lightweight dataflow (LWDF) programming methodology provides an abstract programming model that supports dataflow-based design and implementation of signal processing hardware and software components and systems. Previous work on LWDF techniques has emphasized their application to DSP software implementation. In this paper, we present new extensions of the LWDF methodology for effective integration with hardware description languages (HDLs), and we apply these extensions to develop efficient methods for low power DSP hardware implementation. Through a case study of a deep neural network application for vehicle classification, we demonstrate our proposed LWDF-based hardware design methodology, and its effectiveness in low power implementation of complex signal processing systems.
Title: "Low power design methodology for signal processing systems using lightweight dataflow techniques" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 82-89
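In the LWDF style, an actor exposes an enable function that tests its firing conditions (token counts on its edges) and an invoke function that performs exactly one firing. A toy Python rendering of that contract with a naive scheduler; the real methodology targets C and, in this paper, HDL implementations, and the names here are illustrative:

```python
from collections import deque

class Fifo:
    """Dataflow edge: a FIFO channel carrying tokens between actors."""
    def __init__(self):
        self.q = deque()
    def population(self):
        return len(self.q)
    def write(self, token):
        self.q.append(token)
    def read(self):
        return self.q.popleft()

class AddActor:
    """Toy LWDF-style actor: consumes one token from each input edge
    per firing and produces their sum on the output edge."""
    def __init__(self, in_a, in_b, out):
        self.in_a, self.in_b, self.out = in_a, in_b, out
    def enable(self):
        # firing condition: at least one token on every input
        return self.in_a.population() >= 1 and self.in_b.population() >= 1
    def invoke(self):
        # exactly one firing; callers check enable() first
        self.out.write(self.in_a.read() + self.in_b.read())

def run_schedule(actors, max_sweeps=100):
    # simplest possible scheduler: sweep, firing any enabled actor,
    # until a full sweep fires nothing
    for _ in range(max_sweeps):
        fired = False
        for actor in actors:
            if actor.enable():
                actor.invoke()
                fired = True
        if not fired:
            break
```

Separating enable from invoke is what lets the same actor description be retargeted: a software runtime calls the pair directly, while a hardware mapping turns the enable predicate into handshake logic around the firing datapath.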
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853798
Lucana Santos, A. Gomez, Pedro Hernandez-Fernandez, R. Sarmiento
In this paper, we perform the Electronic System Level (ESL) modelling and verification of two lossless compression standard algorithms for space applications using the SystemC language. In particular, we present the architectures and SystemC descriptions of the CCSDS-121 universal lossless compressor and the CCSDS-123 lossless compressor for hyperspectral and multispectral images. Both algorithms were specifically designed to operate on board satellites, and they can be utilized as independent standalone compressors as well as jointly; in the latter case, the CCSDS-121 performs the entropy coding stage of the CCSDS-123 compressor. The computational capabilities of the hardware available on a satellite are limited, and hence it is necessary to design hardware architectures that execute the algorithms efficiently in terms of throughput, resource utilization, and power consumption. On-board compression algorithms are usually implemented on ASICs or FPGAs that are tolerant to solar radiation. The main objective of this work is to describe models of the compressors in SystemC that enable the generation of specifications for a subsequent implementation phase, in which the algorithms will be described in a hardware description language (VHDL) that can be efficiently mapped onto space-qualified FPGAs. With the SystemC models, we perform an exploration of the design space, refining the architecture and retrieving information about the performance limits of the cores, storage requirements, data dependencies, and prospective hardware requirements of the later FPGA implementation. The described models also comprise connections to shared communication buses using transaction-level modelling (TLM), allowing their inclusion in an embedded system model that may include a software co-processor as well as other processing cores. Additionally, the models are verified by creating SystemC testbenches that can be reused to verify the IP cores once described in VHDL.
Title: "SystemC modelling of lossless compression IP cores for space applications" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), pp. 65-72
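The division of labour, prediction in CCSDS-123 followed by entropy coding in CCSDS-121, can be illustrated with a deliberately simplified toy: a previous-sample predictor feeding a Rice coder. This is not the standards' actual predictor or coder (CCSDS-123 uses an adaptive 3-D predictor, and the real entropy stage is considerably richer), only the structural idea of chaining the two:

```python
def predict_residuals(samples):
    # toy previous-sample predictor standing in for the prediction
    # stage: residual = sample - prediction, starting from 0
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def zigzag(r):
    # map a signed residual to a non-negative index for entropy coding
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(values, k=2):
    """Toy Rice coder standing in for the entropy-coding stage:
    each value is emitted as a unary quotient, a stop bit, and a
    k-bit binary remainder."""
    bits = ""
    for v in values:
        u = zigzag(v)
        q, rem = u >> k, u & ((1 << k) - 1)
        bits += "1" * q + "0" + format(rem, f"0{k}b")
    return bits
```

An ESL model of the joint configuration would place each stage in its own module and stream residual tokens between them over a TLM channel, which is exactly the kind of connection the SystemC models in the paper expose.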
Pub Date: 2016-10-01 | DOI: 10.1109/DASIP.2016.7853808
P. Langlois
This session comprises four papers that, although all related to image processing, offer a great variety of algorithms, application domains, and implementation targets. The algorithms include matrix processing, optical flow, the computation of image features with the Histogram of Oriented Gradients, and hyperspectral imaging. The implementation targets include Intel processors, ASIPs, FPGAs, and a Massively Parallel Processor Array. The session should thus appeal to all researchers interested in implementing computationally intensive image processing algorithms on a variety of platforms.
Title: "Session 5: Image processing on multicore plateforms" | 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), p. 129