
Latest publications in Autonomous Vehicles and Machines

End-to-end evaluation of practical video analytics systems for face detection and recognition
Pub Date : 2023-01-16 DOI: 10.2352/EI.2023.35.16.AVM-111
Praneet Singh, E. Delp, A. Reibman
Practical video analytics systems deployed in bandwidth-constrained environments like autonomous vehicles perform computer vision tasks such as face detection and recognition. In an end-to-end face analytics system, inputs are first compressed using popular video codecs like HEVC and then passed on to modules that perform face detection, alignment, and recognition sequentially. Typically, the modules of these systems are evaluated independently using task-specific imbalanced datasets that can distort performance estimates. In this paper, we perform a thorough end-to-end evaluation of a face analytics system using a driving-specific dataset, which enables meaningful interpretation. We demonstrate how independent task evaluations, dataset imbalances, and inconsistent annotations can lead to incorrect system performance estimates. We propose strategies to create balanced evaluation subsets of our dataset and to make its annotations consistent across multiple analytics tasks and scenarios. We then evaluate end-to-end system performance sequentially to account for task interdependencies. Our experiments show that our approach provides consistent, accurate, and interpretable estimates of the system's performance, which is critical for real-world applications.
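The sequential chain the abstract describes (compress with a codec, then detect, align, and recognize) can be sketched as a minimal pipeline. Every function body below is a hypothetical placeholder rather than the authors' implementation; the point is only that evaluating the chained output, instead of each module in isolation, lets upstream errors propagate into the final estimate.

```python
# Minimal sketch of a sequential end-to-end face analytics evaluation.
# All module bodies are hypothetical placeholders.

def compress(frame):
    """Stand-in for an HEVC encode/decode round trip (lossy in reality)."""
    return frame

def detect_faces(frame):
    """Stand-in detector: returns a list of (x, y, w, h) boxes."""
    return [(0, 0, 32, 32)]

def align(frame, box):
    """Stand-in landmark-based alignment: returns a face crop."""
    x, y, w, h = box
    return ("crop", x, y, w, h)

def recognize(face_crop):
    """Stand-in recognizer: returns an identity label."""
    return "id_0"

def end_to_end(frame):
    """Chain the modules so each stage sees the previous stage's output."""
    decoded = compress(frame)
    return [recognize(align(decoded, box)) for box in detect_faces(decoded)]
```

Scoring `end_to_end` against ground truth, rather than each stand-in module against its own task dataset, is the evaluation style the paper argues for.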
Citations: 0
tRANSAC: Dynamic feature accumulation across time for stable online RANSAC model estimation in automotive applications
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-110
Shimiao Li, Yang Song, Ruijiang Luo, Zhongyang Huang, Chengming Liu
RANdom SAmple Consensus (RANSAC) is widely used in computer vision and automotive-related applications. It is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers. In computer vision, such observed data is usually a set of features (such as feature points or line segments) extracted from images. In automotive-related applications, RANSAC can be used to estimate the lane vanishing point, camera view angles, the ground plane, etc. In such applications, the changing content of the road scene makes stable online model estimation difficult. In this paper, we propose a framework called tRANSAC that dynamically accumulates features across time so that online RANSAC model estimation can be performed stably. Feature accumulation across time is done dynamically: when RANSAC tends to perform robustly and stably, accumulated features are discarded quickly so that fewer redundant features are used for RANSAC estimation; when RANSAC tends to perform poorly, accumulated features are discarded slowly so that more features can be used for a better RANSAC estimate. Experimental results on a road scene dataset for vanishing point and camera angle estimation show that the proposed tRANSAC method gives more stable and accurate estimates than the baseline RANSAC method.
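The adaptive discard rule can be illustrated with a small sketch: a feature buffer whose retention rate depends on how well the current RANSAC fit is doing. The inlier ratio is used here as a hypothetical proxy for "performing robustly", and all thresholds and retention fractions are made-up illustrative values, not numbers from the paper.

```python
# Sketch of time-accumulated features with a performance-adaptive decay.
# Thresholds and retention fractions are illustrative assumptions.

def update_buffer(buffer, new_features, inlier_ratio,
                  fast_keep=0.2, slow_keep=0.8, good_fit=0.7):
    """Keep a fraction of old features depending on RANSAC health.

    When the fit is good (high inlier ratio), discard old features
    quickly so fewer redundant ones feed the next estimate; when the
    fit is poor, retain more of them to stabilize the next fit.
    """
    keep = fast_keep if inlier_ratio >= good_fit else slow_keep
    n_keep = int(len(buffer) * keep)
    kept = buffer[len(buffer) - n_keep:] if n_keep > 0 else []
    return kept + list(new_features)
```

Calling `update_buffer` once per frame, with the previous fit's inlier ratio, gives the "fast discard when stable, slow discard when poor" behaviour described above.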
Citations: 0
The influence of image capture and processing on MTF for end of line test and validation
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-126
B. Deegan, Dara Molloy, Jordan Cahill, J. Horgan, Enda Ward, E. Jones, M. Glavin
Citations: 0
Using simulation to quantify the performance of automotive perception systems
Pub Date : 2023-01-16 DOI: 10.48550/arXiv.2303.00983
Zhenyi Liu, Devesh Shah, Alireza Rahimpour, D. Upadhyay, J. Farrell, B. Wandell
The design and evaluation of complex systems can benefit from a software simulation, sometimes called a digital twin. The simulation can be used to characterize system performance or to test it under conditions that are difficult to measure (e.g., nighttime for automotive perception systems). We describe the image systems simulation software tools that we use to evaluate the performance of imaging systems for object (automobile) detection. We describe experiments with 13 different cameras spanning a variety of optics and pixel sizes. To measure the impact of camera spatial resolution, we designed a collection of driving scenes containing cars at many different distances. We quantified system performance by measuring average precision, and we report a trend relating system resolution to object detection performance. We also quantified the large performance degradation under nighttime conditions, compared to daytime, for all cameras and a COCO pre-trained network.
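Average precision, the metric used above to quantify detection performance, can be computed from confidence-ranked detections as in this simplified sketch (all-point integration of the precision-recall curve; real evaluations such as COCO's additionally match detections to ground truth at IoU thresholds).

```python
# Simplified average precision over a ranked list of detections.
# Each detection is (confidence, is_true_positive); matching to ground
# truth is assumed to have been done already.

def average_precision(scored_hits, n_positives):
    """AP as the sum of precision at each recall step.

    scored_hits: list of (score, is_true_positive) pairs.
    n_positives: number of ground-truth objects (recall denominator).
    """
    hits = sorted(scored_hits, key=lambda p: -p[0])  # rank by confidence
    tp = fp = 0
    ap = 0.0
    for _score, is_tp in hits:
        if is_tp:
            tp += 1
            # precision at this point, weighted by the recall increment
            ap += (tp / (tp + fp)) / n_positives
        else:
            fp += 1
    return ap
```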
Citations: 0
Comprehensive stray light (flare) testing: Lessons learned
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-127
Jackson S. Knappen
Citations: 0
Design of an automotive platform for computer vision research
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-119
Dominik Schörkhuber, R. Popp, Oleksandr Chistov, Fabian Windbacher, Michael Hödlmoser, M. Gelautz
The goal of our work is to design an automotive platform for AD/ADAS data acquisition, with a view to subsequent application in behaviour analysis of vulnerable road users. We present a novel data capture platform mounted on a Mercedes GLC vehicle. The car is equipped with an array of sensors and recording hardware, including multiple RGB cameras, lidar, GPS, and an IMU. For future research on human behaviour analysis in traffic scenes, we compile two kinds of data recordings. Firstly, we design a range of artificial test cases, which we then record on a safety-regulated proving ground with stunt performers to capture rare traffic events in a predictable and structured way. Secondly, we record data on public streets in Vienna, Austria, showing unconstrained pedestrian behaviour in an urban setting, while also meeting European General Data Protection Regulation (GDPR) requirements. We describe the overall framework, including the planning phase, data acquisition, and ground-truth annotation.
Citations: 0
MTF as a performance indicator for AI algorithms?
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-125
Patrick Müller, Alexander Braun
Citations: 1
Orchestration of co-operative and adaptive multi-core deep learning engines
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-112
Mihir Mody, Kumar Desappan, P. Swami, David Smith, Shyam Jagannathan, Kevin Lavery, Gregory Shultz, Jason Jones
Automated driving functions, like highway driving and parking assist, are increasingly being deployed in high-end cars, with the goal of realizing self-driving cars using deep learning (DL) techniques such as convolutional neural networks (CNNs) and Transformers. DL-based algorithms are used in many integral modules of Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems: camera-based perception, driver monitoring, driving policy, and radar and lidar perception are a few examples built using DL algorithms in such systems. These real-time DL applications require huge compute, up to 250 TOPS, to realize them on an edge device. To meet the needs of such applications efficiently in terms of cost and power, silicon vendors provide complex SoCs with multiple DL engines. These SoCs also come with the system resources, such as L2/L3 on-chip memory, a high-speed DDR interface, and a PMIC, needed to feed data and power so that the DL engines compute efficiently; these system resources would otherwise scale linearly with the number of DL engines in the system. This paper proposes solutions that optimize these system resources for a cost- and power-efficient design: (1) co-operative and adaptive asynchronous scheduling of DL engines to optimize peak resource usage across multiple vectors such as memory size, throughput, and power/current; (2) orchestration of co-operative and adaptive multi-core DL engines to achieve synchronous execution and maximum utilization of all resources.
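Idea (1), flattening peak resource usage by staggering when engines run, can be sketched as a greedy search over start offsets for periodic load profiles. The profiles, period, and greedy strategy are illustrative assumptions, not the scheduler described in the paper.

```python
# Greedy stagger of periodic per-engine load profiles (e.g. DDR
# bandwidth per tick) to minimize the combined peak. Profiles and
# the strategy are illustrative assumptions.

def stagger(engines, period):
    """Pick a start offset per engine that greedily minimizes the
    running peak of the summed periodic profiles.

    engines: list of per-tick usage profiles, each of length `period`.
    Returns (offsets, combined_profile).
    """
    combined = [0] * period
    offsets = []
    for profile in engines:
        best_off, best_peak = 0, float("inf")
        for off in range(period):
            trial = [combined[t] + profile[(t - off) % period]
                     for t in range(period)]
            if max(trial) < best_peak:
                best_off, best_peak = off, max(trial)
        offsets.append(best_off)
        combined = [combined[t] + profile[(t - best_off) % period]
                    for t in range(period)]
    return offsets, combined
```

With two engines that each burst in one tick of a four-tick period, the second engine's burst is shifted into an idle tick, halving the combined peak versus launching both together.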
Citations: 0
Simulating motion blur and exposure time and evaluating its effect on image quality
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-117
Hao-Xiang Lin, B. Deegan, J. Horgan, Enda Ward, Patrick Denny, Ciarán Eising, M. Glavin, E. Jones
Citations: 0
OpTIFlow - An optimized end-to-end dataflow for accelerating deep learning workloads on heterogeneous SoCs
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-113
Shyam Jagannathan, Vijay Pothukuchi, Jesse Villarreal, Kumar Desappan, Manu Mathew, Rahul Ravikumar, Aniket Limaye, Mihir Mody, P. Swami, Piyali Goswami, Carlos Rodriguez, Emmanuel Madrigal, Marco Herrera
Citations: 0