
Latest Publications in ACM Transactions on Cyber-Physical Systems

sat2Map: Reconstructing 3D Building Roof from 2D Satellite Images
Pub Date: 2024-02-13 DOI: 10.1145/3648006
Yoones Rezaei, Stephen Lee
Three-dimensional (3D) urban models have gained interest because of their applications in many use cases, such as disaster management, energy management, and solar potential analysis. However, generating these 3D representations of buildings requires lidar data, which is usually expensive to collect. Consequently, lidar data are not frequently updated and are not widely available for many regions in the US. As such, 3D models based on these lidar data are either outdated or limited to those locations where the data are available. In contrast, satellite images are freely available and frequently updated. We propose sat2Map, a novel deep learning-based approach that predicts building roof geometries and heights directly from a single 2D satellite image. Our method first uses sat2pc to predict the point cloud by integrating two distinct loss functions, Chamfer Distance and Earth Mover’s Distance, resulting in a 3D point cloud output that balances overall structure and finer details. Additionally, we introduce sat2height, a height estimation model that estimates the height of the predicted point cloud to generate the final 3D building structure for a given location. We extensively evaluate our model on a building roof dataset and conduct ablation studies to analyze its performance. Our results demonstrate that sat2Map consistently outperforms existing baseline methods by at least 18.6%. Furthermore, we show that our refinement module significantly improves the overall performance, yielding more accurate and fine-grained 3D outputs. Our sat2height model demonstrates high accuracy in predicting height parameters with a low error rate. Our evaluation results also show that we can estimate building heights with a median mean absolute error of less than 30 cm while still preserving the overall structure of the building.
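The loss design described above, combining Chamfer Distance and Earth Mover’s Distance, can be pictured with a short sketch. The snippet below is a minimal illustration rather than the authors’ sat2pc implementation; the weighting factor lambda_emd and the equal-point-count assumption for the EMD matching are assumptions made here for clarity.

```python
# Minimal sketch of a combined Chamfer + EMD point-cloud loss (illustrative only).
import torch
from scipy.optimize import linear_sum_assignment

def pairwise_sq_dists(a, b):
    """Squared Euclidean distances between rows of a (N, 3) and b (M, 3) -> (N, M)."""
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance: mean nearest-neighbour distance in both directions."""
    d = pairwise_sq_dists(pred, gt)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def earth_movers_distance(pred, gt):
    """EMD as the cost of an optimal one-to-one matching (assumes equal point counts)."""
    cost = pairwise_sq_dists(pred, gt).sqrt().detach().cpu().numpy()
    rows, cols = linear_sum_assignment(cost)           # Hungarian assignment
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    return torch.norm(pred[rows] - gt[cols], dim=1).mean()

def combined_point_cloud_loss(pred, gt, lambda_emd=0.5):
    """Weighted sum: Chamfer favours fine detail, EMD enforces overall structure."""
    return chamfer_distance(pred, gt) + lambda_emd * earth_movers_distance(pred, gt)
```

The abstract’s claim that the two terms balance structure against detail follows from their definitions: Chamfer only matches each point to its nearest neighbour, while EMD forces a one-to-one correspondence between the two clouds.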
Citations: 0
Memory-based Distribution Shift Detection for Learning Enabled Cyber-Physical Systems with Statistical Guarantees
Pub Date: 2024-02-06 DOI: 10.1145/3643892
Yahan Yang, Ramneet Kaur, Souradeep Dutta, Insup Lee
Incorporating learning-based components into current state-of-the-art cyber-physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the ability to revolutionize autonomous systems, medicine, and other safety-critical domains, because it would allow system designers to use high-dimensional outputs from sensors such as cameras and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult-to-handle scenarios trace back to the software components that process such sensor inputs. The ability to handle such high-dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the safety issues also stem from deep neural networks themselves. The pitfalls arise from possible over-fitting and a lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible, but achieving meaningful coverage is impossible. This naturally leads to the following question: Is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should also be computationally efficient, because OOD detectors are often executed as frequently as the sensors are sampled. Our aim in this paper is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples that are representative enough to cover most of the in-distribution data. The similarity with respect to such samples serves as a measure of familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt the conformal anomaly detection framework to capture distribution shifts with a guaranteed false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the CARLA simulator with image inputs and an autonomous racing car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from the in-distribution setting can potentially lead to unsafe behavior. It should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble of predictable behavior. An added benefit of our memory-based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it recommends a potential fix for the situation as well. In other competing approaches, such feedback is difficult to obtain due to reliance on techniques which use variational autoencoders.
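To make the memory-bank-plus-conformal idea concrete, here is a minimal sketch (not the authors’ code) that flags an input as out-of-distribution when its distance to the nearest cached sample has a small conformal p-value. The Euclidean distance, the feature representation, and the way the memory bank is populated are placeholder assumptions rather than the paper’s sensor-specific choices.

```python
# Sketch of memory-bank OOD detection with an inductive conformal threshold.
# Assumption: inputs are already feature vectors; distance is plain Euclidean.
import numpy as np

class MemoryBankOODDetector:
    def __init__(self, memory, calibration, epsilon=0.05):
        """memory: (M, d) representative in-distribution samples (the memory bank);
        calibration: (C, d) held-out in-distribution samples;
        epsilon: target false alarm rate."""
        self.memory = np.asarray(memory)
        self.epsilon = epsilon
        # Nonconformity score = distance to the nearest memory entry ("unfamiliarity").
        self.cal_scores = np.sort([self._score(x) for x in np.asarray(calibration)])

    def _score(self, x):
        return np.min(np.linalg.norm(self.memory - x, axis=1))

    def is_ood(self, x):
        """Flag x as OOD when its conformal p-value falls below epsilon."""
        s = self._score(np.asarray(x))
        # p-value: fraction of calibration scores at least as extreme as s.
        p = (np.sum(self.cal_scores >= s) + 1) / (len(self.cal_scores) + 1)
        return p < self.epsilon
```

Because the calibration scores come from held-out in-distribution data, the chance of falsely flagging an in-distribution input is bounded by roughly epsilon, which is the kind of statistical guarantee the abstract refers to.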
Citations: 0
A Collaborative Visual Sensing System for Precise Quality Inspection at Manufacturing Lines
Pub Date: 2024-01-26 DOI: 10.1145/3643136
Jiale Chen, Duc Van Le, Rui Tan, Daren Ho
Visual sensing has been widely adopted for quality inspection in production processes. This paper presents the design and implementation of a smart collaborative camera system, called BubCam, for automated quality inspection of manufactured ink bags in Hewlett-Packard (HP) Inc.’s factories. Specifically, BubCam estimates the volume of air bubbles in an ink bag, which may affect the printing quality. The design of BubCam faces challenges due to dynamic ambient light reflection, the motion blur effect, and data labeling difficulty. As a starting point, we design a single-camera system which leverages various deep learning (DL)-based image segmentation and depth fusion techniques. New data labeling and training approaches are proposed to utilize prior knowledge of the production system for training the segmentation model with a small dataset. Then, we design a multi-camera system which additionally deploys multiple wireless cameras to achieve better accuracy through multi-view sensing. To reduce the power consumption of the wireless cameras, we formulate a configuration adaptation problem and develop single-agent and multi-agent deep reinforcement learning (DRL)-based solutions that adjust each wireless camera’s operation mode and frame rate in response to changes in the presence of air bubbles and light reflection. The multi-agent DRL approach aims to reduce the retraining costs during the production line reconfiguration process by only retraining the DRL agents for the newly added cameras and the existing cameras with changed positions. Extensive evaluation on a lab testbed and a real factory trial shows that BubCam outperforms six baseline solutions, including the current manual inspection and existing bubble detection and camera configuration adaptation approaches. In particular, BubCam achieves a 1.3x accuracy improvement and a 300x latency reduction compared with the manual inspection approach.
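As a rough illustration of the configuration-adaptation loop described above, the sketch below shows a per-camera agent choosing an operation mode and frame rate from observed bubble and reflection conditions. It uses tabular Q-learning as a stand-in for the paper’s single- and multi-agent DRL solutions; the state, action, and reward definitions are assumptions for illustration only.

```python
# Toy stand-in for a per-camera configuration-adaptation agent. The paper uses
# deep reinforcement learning; this sketch uses tabular Q-learning over a
# hypothetical discrete state (bubble present, reflection present) and action
# (operation mode, frame rate) space, purely to illustrate the control loop.
import random
from collections import defaultdict

MODES = ["sleep", "active"]
FRAME_RATES = [5, 15, 30]                      # frames per second (illustrative values)
ACTIONS = [(m, f) for m in MODES for f in FRAME_RATES]

class CameraConfigAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)            # Q[(state, action)] -> estimated return
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:         # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

# The reward would trade off inspection accuracy against camera energy use, e.g.
# reward = accuracy_term - energy_cost(mode, frame_rate); that shaping is an
# assumption here, not taken from the paper.
```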
Citations: 0
Experimentation and Implementation of BFT++ Cyber-attack Resilience Mechanism for Cyber Physical Systems
Pub Date: 2024-01-19 DOI: 10.1145/3639570
David R. Keppler, M. F. Karim, Matthew Mickelson, J. S. Mertoguno
Cyber-physical systems (CPS) are used in various safety-critical domains such as robotics, industrial manufacturing systems, and power systems. Faults and cyber attacks have been shown to cause safety violations, which can damage the system and endanger human lives. Traditional resiliency techniques fall short of protecting against cyber threats. In this paper, we show how to extend resiliency to cyber resiliency for CPS using a specific combination of diversification, redundancy, and the physical inertia of the system.
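The combination of diversification, redundancy, and physical inertia mentioned in the abstract can be pictured with a small sketch. The snippet below is an illustrative reading of that pattern, not the BFT++ mechanism itself: diverse redundant controller replicas vote on each control output, and a replica that diverges is restored from a known-good checkpoint while the plant’s physical inertia rides through the brief recovery window. All names, interfaces, and thresholds here are assumptions.

```python
# Illustrative redundancy-plus-restore pattern (not the BFT++ implementation).
from statistics import median

class ProportionalReplica:
    """Toy 'diverse' replica: a proportional controller that can checkpoint itself."""
    def __init__(self, gain):
        self.gain = gain
        self._checkpoint = gain

    def compute(self, error):
        return self.gain * error

    def restore_from_checkpoint(self):
        self.gain = self._checkpoint           # recover a known-good state

class RedundantController:
    def __init__(self, replicas, tolerance=1e-3):
        self.replicas = replicas               # diversified implementations of one control law
        self.tolerance = tolerance

    def step(self, error):
        outputs = [r.compute(error) for r in self.replicas]
        agreed = median(outputs)               # median vote masks a corrupted replica
        for r, out in zip(self.replicas, outputs):
            if abs(out - agreed) > self.tolerance:
                # Divergence suggests a fault or compromise: restore that replica.
                # The plant's inertia tolerates the brief window while it catches up.
                r.restore_from_checkpoint()
        return agreed
```

In practice the replicas would be independently diversified builds, so that a single exploit is unlikely to compromise all of them at once.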
Citations: 0