
Proceedings of the Second International Conference on AI-ML Systems: Latest Publications

System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3565236
Indranil Sur, Zachary A. Daniels, Abrar Rahman, Kamil Faber, Gianmarco J. Gallardo, Tyler L. Hayes, Cameron Taylor, Mustafa Burak Gurbuz, James Smith, Sahana P Joshi, N. Japkowicz, Michael Baron, Z. Kira, Christopher Kanan, Roberto Corizzo, Ajay Divakaran, M. Piacentino, Jesse Hostetler, Aswin Raghavan
As artificial and robotic systems are increasingly deployed and relied upon in real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically changing environments, becoming lifelong learning machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model’s capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state of the art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires progress at the systems level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing a different aspect of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study demonstrating how multiple independently developed LL components can be integrated into a single realized system. We also introduce an evaluation environment to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of StarCraft 2 minigames and allows for a fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
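The abstract's component-integration idea can be sketched in a few lines. All class and method names below are hypothetical illustrations of a plug-in component API in the spirit of L2RLCF, not the paper's actual interface:

```python
class LifelongComponent:
    """Hypothetical base interface: each lifelong-learning component
    hooks into shared lifecycle events of the agent loop."""
    def on_task_start(self, task_id): pass
    def on_step(self, transition): pass
    def on_task_end(self, task_id): pass

class ReplayBuffer(LifelongComponent):
    """Example component: stores transitions across tasks to mitigate
    catastrophic forgetting (capacity-bounded FIFO)."""
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.storage = []
    def on_step(self, transition):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # evict the oldest transition
        self.storage.append(transition)

class ComponentRegistry:
    """Unified system: broadcasts each lifecycle event to every
    registered component, so components stay independently developed."""
    def __init__(self):
        self.components = []
    def register(self, component):
        self.components.append(component)
    def on_step(self, transition):
        for c in self.components:
            c.on_step(transition)
```

A caller would register each independently developed component once and drive the whole set through the registry, which is the kind of standardization the framework argues for.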
Citations: 2
Link-Adaptation for Improved Quality-of-Service in V2V Communication using Reinforcement Learning
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564122
Serene Banerjee, Joy Bose, Sleeba Paul Puthepurakel, Pratyush Kiran Uppuluri, Subhadip Bandyopadhyay, Y. S. K. Reddy, Ranjani H. G.
For autonomous driving, safer travel, and fleet management, Vehicle-to-Vehicle (V2V) communication protocols are an emerging area of research and development. State-of-the-art techniques use machine learning (ML) and reinforcement learning (RL) to adapt modulation and coding rates as the vehicle moves. However, channel state estimates are often incorrect and rapidly changing in a V2V scenario. We propose a combination of input features, including (a) sensor inputs from other parameters in the vehicle, such as speed and global positioning system (GPS) readings, (b) estimates of interference and load for each of the vehicles, and (c) channel state estimates, to find the optimal rate that would maximize Quality-of-Service. Our model uses an ensemble of RL agents to predict trends in the input parameters and to find the inter-dependencies of these input parameters. An RL agent then utilizes these inputs to find the best modulation and coding rate as the vehicle moves. We demonstrate our results through prototype experiments using real data collected from customer networks.
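The core loop of rate selection under uncertain channel feedback can be illustrated with a toy epsilon-greedy bandit that picks a modulation-and-coding scheme (MCS) index from observed throughput. This is a simplified stand-in for the paper's RL-agent ensemble, not its actual algorithm:

```python
import random

class MCSAgent:
    """Toy epsilon-greedy agent choosing among n_mcs modulation-and-coding
    schemes; a hypothetical simplification of the paper's RL approach."""
    def __init__(self, n_mcs=4, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_mcs
        self.values = [0.0] * n_mcs  # running mean throughput per MCS

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best MCS.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, mcs, throughput):
        # Incremental running-mean update for the chosen MCS.
        self.counts[mcs] += 1
        self.values[mcs] += (throughput - self.values[mcs]) / self.counts[mcs]
```

In the paper's setting the reward signal would come from measured Quality-of-Service rather than a single throughput number, and the state would include the sensor and interference features listed in the abstract.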
Citations: 0
Performance Evaluation of gcForest inferencing on multi-core CPU and FPGA
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564797
P. Manavar, Sharyu Vijay Mukhekar, M. Nambiar
Decision forests have proved to be useful in machine learning tasks. gcForest is a model that leverages ensembles of decision forests for classification. It combines several decision forests with additional properties and a layered architecture, and has been shown to give results competitive with convolutional neural networks. This paper analyzes the performance of a gcForest model trained on the MNIST digit classification data set on a multi-core CPU-based system. Using a performance-model-based approach, it also presents an analysis of performance on a well-endowed FPGA accelerator card for the same model. It is concluded that the multi-core CPU system can deliver more throughput than the FPGA under a batched workload, while the FPGA offers lower latency for a single inference. We also analyze the scalability of the gcForest model on the multi-core server system and, with the help of experiments and models, uncover ways to improve the scalability.
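The throughput-versus-latency trade-off the paper reports follows from a simple batching model: batching amortizes fixed overhead, raising throughput, while a single inference pays only per-item cost. The function below is a generic illustration of that relationship; the coefficients are placeholders, not the paper's measurements:

```python
def throughput(batch_size, fixed_overhead_ms, per_item_ms):
    """Items processed per second under a simple linear batch-latency
    model: latency(b) = fixed_overhead + b * per_item. Illustrative
    only; real CPU/FPGA coefficients come from measurement."""
    latency_ms = fixed_overhead_ms + batch_size * per_item_ms
    return 1000.0 * batch_size / latency_ms
```

Under this model, a device with high fixed overhead but low per-item cost (like a batch-friendly CPU pipeline) overtakes a low-overhead device once the batch is large enough, matching the paper's qualitative conclusion.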
Citations: 0
Modeling Email Server I/O Events As Multi-temporal Point Processes
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564129
Vinayaka Kamath, Evan Sinclair, Damon Gilkerson, V. Padmanabhan, Sreangsu Acharyya
We model the read workload experienced by an email server as a superposition of reads performed by different software clients at non-deterministic times, each modeled as a dependent point process. The probability of a read event occurring on an email is affected by, among other factors, the age of the email and the time of the recipient’s day. Unlike the more commonly encountered variants of point processes (the one-dimensional temporal, or the multi-dimensional spatial or spatio-temporal), the dependence between the different temporal axes, age and time of day, is captured by a point process defined over a non-Euclidean manifold. The model captures the diverse patterns exhibited by the different clients: for example, the influence of the age of an email, the time of the user’s day, recent reads by the same or different clients, and whether the client is controlled directly by the user, is a software agent acting semi-autonomously on the user’s behalf, or is a server-side batch job that attempts to avoid adverse impact on the user’s latency experience. We show how estimating this point process can be mapped to a Poisson regression, thereby saving the time to implement custom model-training software.
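The final claim, that estimating the point process reduces to a Poisson regression, can be illustrated with a minimal numpy sketch: event counts y are modeled as Poisson with rate exp(X @ w), and w is fit by plain gradient ascent on the log-likelihood. This is a generic estimator on synthetic data, not the paper's implementation:

```python
import numpy as np

def fit_poisson_regression(X, y, lr=0.05, steps=5000):
    """Fit counts y ~ Poisson(exp(X @ w)) by gradient ascent on the
    Poisson log-likelihood sum(y * (X @ w) - exp(X @ w))."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        lam = np.exp(X @ w)
        grad = X.T @ (y - lam)      # gradient of the log-likelihood
        w += lr * grad / len(y)     # averaged step for stability
    return w
```

With features such as email age and time of day stacked into X, the fitted w plays the role of the intensity parameters of the point process, which is what makes off-the-shelf regression tooling sufficient.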
Citations: 0
Ensembling Deep Learning And CIELAB Color Space Model for Fire Detection from UAV images
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564130
Yash Jain, Vishu Saxena, Sparsh Mittal
Wildfires can cause significant damage to forests and endanger wildlife. Detecting these forest fires at the initial stages helps the authorities prevent them from spreading further. In this paper, we first propose a novel technique, termed the CIELAB-color technique, which detects fire based on the color of the fire in the CIELAB color space. We train state-of-the-art CNNs to detect fire. Since deep learning (CNNs) and image processing have complementary strengths, we combine them in an ensemble architecture. It uses two CNNs and the CIELAB-color technique and then performs majority voting to decide the final fire/no-fire prediction. We finally propose a chain-of-classifiers technique which first tests an image using the CIELAB-color technique; if the image is flagged as no-fire, it further checks the image using a CNN. This technique has a smaller model size than the ensemble technique. On the FLAME dataset, the ensemble technique provides 93.32% accuracy, outperforming both previous works ( accuracy) and the individual use of either the CNNs or the CIELAB-color technique. The source code can be obtained from https://github.com/CandleLabAI/FireDetection.
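The two decision rules described, majority voting over three detectors and the chain of classifiers, can be sketched directly. The `color_rule` and `cnn` arguments below are caller-supplied callables standing in for the paper's actual models:

```python
def majority_vote(predictions):
    """Final fire/no-fire decision over an odd number of detectors
    (e.g. two CNNs plus the CIELAB-color rule): fire wins if a
    strict majority of predictions say fire."""
    return sum(bool(p) for p in predictions) > len(predictions) / 2

def chain_of_classifiers(image, color_rule, cnn):
    """Sketch of the chain: the cheap CIELAB color rule runs first;
    only images it flags as no-fire are re-checked by a CNN."""
    if color_rule(image):
        return True        # color rule already detects fire
    return cnn(image)      # otherwise defer to the CNN
```

The chain trades a little accuracy for model size: the CNN weights are needed only for the subset of images the color rule passes through, which is why the paper reports a smaller footprint than the full ensemble.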
Citations: 1
An hardware accelerator design of Mobile-Net model on FPGA
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564124
Sanjaya M V, M. Rao
Domain-specific hardware architectures and hardware accelerators have been a vital part of modern system design. Especially for math-intensive applications involving tasks related to machine perception, incorporating hardware accelerators that work in tandem with general-purpose micro-processors can prove to be energy efficient in both server and edge scenarios. FPGAs, due to their reconfigurability, make it possible to have customized hardware designed as per the computational and memory requirements specific to an application. This work proposes an optimized, low-latency hardware accelerator implementation of the Mobile-net V2 CNN on an FPGA. This paper presents an implementation of Mobile-net-V2 inference on a Xilinx Ultrascale+ MPSOC platform using solely half-precision floating-point arithmetic for both the parameters and activations of the network. The proposed implementation is also optimized by merging all batch-norm layers with their preceding convolutional layers. For applications which cannot compromise on the performance of the algorithm for execution speed and efficiency, an optimized floating-point inference is proposed. The current implementation offers an overall performance improvement of at least 20X with moderate resource utilization and minimal variance in inference latency, compared to performing inference on the processor alone, with almost no degradation in model accuracy.
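The batch-norm merging step the abstract mentions is the standard folding identity: a BN layer applying gamma * (y - mean) / sqrt(var + eps) + beta after a convolution y = w * x + b can be absorbed into the convolution's weights and bias. A numpy sketch of that folding (a textbook transformation, not the authors' code):

```python
import numpy as np

def fold_batchnorm(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding convolution so that
    inference runs a single fused layer. conv_w has shape
    (out_ch, in_ch, kh, kw); gamma, beta, mean, var have shape (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)
    w_folded = conv_w * scale[:, None, None, None]  # scale each output channel
    b_folded = (conv_b - mean) * scale + beta
    return w_folded, b_folded
```

Since the folded layer computes exactly the same function as conv followed by BN, the merge reduces memory traffic and layer count on the FPGA without changing the model's outputs.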
Citations: 1
Address Location Correction System for Q-commerce
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564800
Y. Reddy, Sumanth Sadu, A. Ganesan, Jose Mathew
Hyperlocal e-commerce companies in India deliver food and groceries in around 20-40 minutes, and more recently, some companies focus on sub-ten-minute delivery targets. Such "instant" delivery platforms, referred to as quick (q)-commerce, onboard GPS locations of customer addresses along with their text addresses to enable Delivery Partners (DPs) to navigate to customer locations seamlessly. Inaccurate GPS locations lead to broken delivery-time promises and order cancellations, because the DPs may not be able to find the address easily or may not even navigate close to the actual address. As a first step towards correcting these inaccurate locations, in this work we design a classifier that identifies, from the text address, whether the captured GPS location is incorrect. The classifier is trained in a self-supervised manner. We propose two strategies to generate the training set: one based on location perturbation using Gaussian noise, and another based on swapping pairs of addresses in a dataset generated with accurate address locations. An ensemble of the outputs of models trained on these two datasets gives 84.5% precision and 49% recall in a large Indian city on our internal test set.
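The two self-supervised labeling strategies can be sketched concretely: positives keep the true (lat, lon), while negatives either jitter the location with Gaussian noise or borrow another address's location. The record layout and noise scale below are assumptions for illustration, not the paper's pipeline:

```python
import random

def make_training_pairs(records, noise_std=0.005, seed=42):
    """Generate self-supervised labels for location-correctness.
    records: list of (address_text, lat, lon), with at least 2 entries.
    Returns (address_text, lat, lon, label) tuples, label 0 = correct
    location, 1 = incorrect location."""
    rng = random.Random(seed)
    pairs = []
    for i, (text, lat, lon) in enumerate(records):
        pairs.append((text, lat, lon, 0))  # true location: positive example
        if rng.random() < 0.5:
            # Strategy (a): Gaussian perturbation of the true location.
            pairs.append((text, lat + rng.gauss(0, noise_std),
                          lon + rng.gauss(0, noise_std), 1))
        else:
            # Strategy (b): swap in a different record's location.
            j = rng.randrange(len(records) - 1)
            if j >= i:
                j += 1  # ensure j != i so the label is genuinely wrong
            pairs.append((text, records[j][1], records[j][2], 1))
    return pairs
```

Training one model per strategy on such pairs and ensembling their outputs is what the abstract reports as reaching 84.5% precision at 49% recall.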
Citations: 0
A Hybrid Planning System for Smart Charging of Electric Fleets
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564125
Kshitij Garg, A. Narayanan, P. Misra, Arunchandar Vasan, Vivek Bandhu, Debarupa Das
Electric vehicle (EV) fleets are well suited for last-mile deliveries from both sustainability and operational-cost perspectives. To ensure economic parity with non-EV options, even captive chargers for EV fleets need to be managed intelligently. Specifically, the EVs need to be adequately charged for their entire delivery runs while handling reduced time flexibility between runs, a limited number of chargers, and deviations from the planned schedule. Existing works either solve smaller instances of this problem optimally, or larger instances with significant sub-optimality. In addition, they typically consider either day-ahead or real-time planning in isolation. We complement existing works with a hybrid approach that first identifies a day-ahead plan for assigning EVs to chargers, and then uses online replanning to handle any deviations in real time. For the day-ahead planning, we use a learning agent (LA) that learns to assign EVs to chargers over several problem instances. Because the agent solves a given instance during its testing phase, it scales in problem size with limited sub-optimality. For the online replanning, we use a greedy heuristic that dynamically refines the day-ahead plan to handle delays in EV arrivals. We evaluate our approach using representative datasets. As baselines for the LA, we use an exact mixed-integer linear program (MILP) for small problem instances and a greedy heuristic for large ones. As baselines for the replanning, we use no-planning and no-replanning. Our experiments show that the LA performs 8.5-14% better than the greedy heuristic in large problem instances, while being reasonably close (< 22%) to the optimal in smaller instances. For online replanning, our approach performs about 7-20% better than no-planning and no-replanning for a range of delay profiles.
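The kind of greedy heuristic used as a baseline and for replanning can be sketched as an interval-assignment rule: process EVs by departure time and give each one the first charger that frees up before its arrival. This is a minimal illustrative version, not the paper's exact heuristic, and it ignores charge levels and travel times:

```python
def greedy_charger_plan(evs, n_chargers):
    """Greedy day-ahead assignment sketch.
    evs: list of (ev_id, arrival_time, departure_time).
    Returns {ev_id: charger_index}; EVs with no free charger in their
    window are left unassigned (and would trigger replanning)."""
    free_at = [0.0] * n_chargers       # time each charger becomes free
    plan = {}
    for ev_id, arrival, departure in sorted(evs, key=lambda e: e[2]):
        for c in range(n_chargers):
            if free_at[c] <= arrival:  # charger idle when the EV arrives
                free_at[c] = departure  # occupied until the EV departs
                plan[ev_id] = c
                break
    return plan
```

Online replanning in this sketch amounts to re-running the same rule with updated arrival times whenever an EV is delayed, which mirrors the hybrid structure the abstract describes.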
{"title":"A Hybrid Planning System for Smart Charging of Electric Fleets","authors":"Kshitij Garg, A. Narayanan, P. Misra, Arunchandar Vasan, Vivek Bandhu, Debarupa Das","doi":"10.1145/3564121.3564125","DOIUrl":"https://doi.org/10.1145/3564121.3564125","url":null,"abstract":"Electric vehicle (EV) fleets are well suited for last-mile deliveries both from sustainability and operational cost perspectives. To ensure economic parity with non-EV options, even captive chargers for EV fleets need to be managed intelligently. Specifically, the EVs needs to be adequately charged for their entire delivery runs while handling reduced time flexibility between runs; limited number of chargers; and deviations from the planned schedule. Existing works either solve smaller instances of this problem optimally, or larger instances with significant sub-optimality. In addition, they typically consider either day-ahead or real-time planning in isolation. We complement existing works with a hybrid approach that first identifies a day-ahead plan for assigning EVs to chargers; and then uses online replanning to handle any deviations in real-time. For the day-ahead planning, we use a learning agent (LA) that learns to assign EVs to chargers over several problem instances. Because the agent solves a given instance during its testing phase, it achieves scale in problem size with limited sub-optimality. For the online replanning, we use a greedy heuristic that dynamically refines the day-ahead plan to handle delays in EV arrivals. We evaluate our approach using representative datasets. As baselines for the LA, we use an exact mixed-integer linear program (MILP) (greedy heuristic) for small (large) problem instances. As baselines for the replanning, we use no-planning and no-replanning. Our experiments show that LA performs better (8.5-14%) than greedy heuristic in large problem instances, while being reasonably close (< 22%) to the optimal in smaller instances. 
For online replanning, our approach performs about 7-20% better than no-planning and no-replanning for a range of delay profiles.","PeriodicalId":166150,"journal":{"name":"Proceedings of the Second International Conference on AI-ML Systems","volume":"80 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114131555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
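The online-replanning step can be pictured as a small greedy routine: serve the most urgent EV first (earliest departure) on the charger that frees up soonest. This is only a minimal sketch of that idea — the EV fields, charger bookkeeping, and time units here are assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class EV:
    ev_id: str
    arrival: float       # hour the EV returns from its current run
    departure: float     # hour it must leave for its next run
    charge_hours: float  # charging time needed before departure

def greedy_replan(evs, charger_free_at):
    """Greedily reassign delayed EVs to chargers.

    EVs are served in order of earliest departure (most urgent first);
    each is placed on the charger that frees up soonest. Returns a list
    of (ev_id, charger, start, end) tuples; an EV whose earliest slot
    cannot finish before its departure is marked infeasible with None.
    """
    plan = []
    for ev in sorted(evs, key=lambda e: e.departure):
        charger = min(charger_free_at, key=charger_free_at.get)
        start = max(ev.arrival, charger_free_at[charger])
        end = start + ev.charge_hours
        if end <= ev.departure:
            charger_free_at[charger] = end
            plan.append((ev.ev_id, charger, start, end))
        else:
            plan.append((ev.ev_id, None, None, None))  # cannot serve in time
    return plan
```

With one charger and three EVs, the routine schedules them back-to-back in deadline order; a delayed arrival simply pushes `start` later, which is how a day-ahead plan would be refined in place.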
Citations: 0
Acceleration-aware, Retraining-free Evolutionary Pruning for Automated Fitment of Deep Learning Models on Edge Devices
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564133
Jeet Dutta, Swarnava Dey, Arijit Mukherjee, Arpan Pal
Deep Learning architectures used in computer vision, natural language and speech processing, unsupervised clustering, etc. have become highly complex and application-specific in recent times. Despite existing automated feature-engineering techniques, building such complex models still requires extensive domain knowledge or a huge infrastructure for employing techniques such as Neural Architecture Search (NAS). Further, many industrial applications need on-premises decision-making close to sensors, making deployment of deep learning models on edge devices a desirable and often necessary option. Rather than designing application-specific Deep Learning models from scratch, transforming already-built models can achieve faster time to market and reduced cost. In this work, we present an efficient retraining-free model compression method that searches for the best hyper-parameters to reduce model size and latency without losing any accuracy. Moreover, our proposed method takes into account any drop in accuracy due to hardware acceleration when a Deep Neural Network is executed on accelerator hardware.
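To illustrate the kind of search such a method performs, here is a toy evolutionary loop over per-layer pruning ratios. Everything in it is an assumption for illustration: the linear "sensitivity" model stands in for accuracy actually measured on accelerator hardware, and none of the names or parameters come from the paper:

```python
import random

def evolve_pruning_ratios(layer_sizes, sensitivity, acc_budget,
                          pop=20, gens=50, seed=0):
    """Search per-layer pruning ratios with a simple evolutionary loop.

    Fitness = remaining parameter count; candidates whose estimated
    accuracy drop (sum of sensitivity * ratio, a toy stand-in for a
    hardware measurement) exceeds acc_budget are rejected outright,
    so no retraining step is ever needed.
    """
    rng = random.Random(seed)
    n = len(layer_sizes)

    def acc_drop(ratios):
        return sum(s * r for s, r in zip(sensitivity, ratios))

    def remaining(ratios):
        return sum(sz * (1 - r) for sz, r in zip(layer_sizes, ratios))

    # Seed a feasible population; fall back to "no pruning" if none qualify.
    population = [[rng.uniform(0, 0.5) for _ in range(n)] for _ in range(pop)]
    population = [r for r in population if acc_drop(r) <= acc_budget] or [[0.0] * n]

    for _ in range(gens):
        parent = min(population, key=remaining)          # most-pruned feasible
        child = [min(0.95, max(0.0, r + rng.gauss(0, 0.05))) for r in parent]
        if acc_drop(child) <= acc_budget:                # accuracy constraint
            population.append(child)
        population = sorted(population, key=remaining)[:pop]

    best = min(population, key=remaining)
    return best, remaining(best), acc_drop(best)
```

Because every candidate is filtered against the accuracy budget rather than fine-tuned back to health, the loop mirrors the retraining-free flavor of the search, at toy scale.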
Citations: 2
How Provenance helps Quality Assurance Activities in AI/ML Systems
Pub Date : 2022-10-12 DOI: 10.1145/3564121.3564801
Takao Nakagawa, Kenichiro Narita, Kyoung-Sook Kim
Quality assurance is required for the wide use of artificial intelligence (AI) systems in industry and society, including mission-critical areas such as the medical or disaster-management domains. However, quality evaluation methods for machine learning (ML) components, especially deep neural networks, have not yet been established. In addition, various metrics are applied by evaluators with different quality requirements and testing environments, from data collection to experimentation to deployment. In this paper, we propose a quality provenance model, AIQPROV, to record who evaluated quality, when, from which viewpoint, and how the evaluation was used. The AIQPROV model focuses on human activities, addressing how provenance applies to the field of quality assurance, where human intervention is required. Moreover, we present an extension of the W3C PROV framework and construct a database that stores provenance information across the quality-assurance lifecycle, validating our model with 11 use cases.
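To give a flavor of what such a provenance record looks like, here is a small PROV-DM-style document built in plain Python. The core vocabulary (entity / activity / agent and the used / wasGeneratedBy / wasAssociatedWith relations) is standard W3C PROV, but the quality-specific attribute names (`aiq:viewpoint`, `ml:metric`) are hypothetical stand-ins, not the actual AIQPROV schema:

```python
def record_quality_evaluation(model_id, evaluator, viewpoint,
                              metric, value, timestamp):
    """Build a PROV-style provenance document for one quality evaluation.

    Records who evaluated (agent), what was evaluated (entity: the model),
    what the evaluation produced (entity: a metric report), and when
    (activity), linked by standard PROV-DM relations.
    """
    activity_id = f"eval:{model_id}:{metric}"
    report_id = f"report:{model_id}:{metric}"
    return {
        "entity": {
            model_id: {"prov:type": "ml:Model"},
            report_id: {"ml:metric": metric, "ml:value": value,
                        "aiq:viewpoint": viewpoint},  # hypothetical attributes
        },
        "activity": {
            activity_id: {"prov:startTime": timestamp},
        },
        "agent": {evaluator: {"prov:type": "prov:Person"}},
        "used": [(activity_id, model_id)],
        "wasGeneratedBy": [(report_id, activity_id)],
        "wasAssociatedWith": [(activity_id, evaluator)],
    }
```

A quality report generated this way can later answer exactly the questions the abstract raises: which evaluator, from which viewpoint, produced which metric, against which model version.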
Citations: 0
Journal
Proceedings of the Second International Conference on AI-ML Systems