
Computer-Aided Civil and Infrastructure Engineering: Latest Publications

Corrigendum to “Deep spatial-temporal embedding for vehicle trajectory validation and refinement”
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-08 · DOI: 10.1111/mice.13303
Zhang, T. T., Jin, P. J., Piccoli, B., & Sartipi, M. (2024). Deep spatial-temporal embedding for vehicle trajectory validation and refinement. Computer-Aided Civil and Infrastructure Engineering, 39, 1597−1615. https://doi.org/10.1111/mice.13160

In the “Methodology” section, Equation (2) was printed incorrectly, with the ℓ2-norm bars missing from the denominator. The correct equation should have been written as

$$S(e_i, e_j) = \frac{1}{2}\left(1 + \frac{e_i^{T} \ast e_j}{\|e_i\|_2 \, \|e_j\|_2}\right)$$

Also in the “Methodology” section, Equation (3),

$$\mathcal{L}_{ST} = \|\hat{Y} - Y\|_2 = \frac{1}{H \ast W \ast N}\sum_{i=1}^{N}\sum_{j=1}^{H \ast W}\left(\hat{s}_{i,j} - s_{i,j}\right),$$

was incorrect.
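As a quick numerical check of the corrected Equation (2), a minimal Python sketch (the example vectors are illustrative, not from the paper):

```python
import numpy as np

def embedding_similarity(e_i: np.ndarray, e_j: np.ndarray) -> float:
    """Corrected Equation (2): cosine similarity rescaled to the range [0, 1]."""
    cos = float(e_i @ e_j) / (np.linalg.norm(e_i, 2) * np.linalg.norm(e_j, 2))
    return 0.5 * (1.0 + cos)

# Identical vectors give 1.0, opposite vectors give 0.0.
e_i = np.array([1.0, 2.0, 3.0])
print(embedding_similarity(e_i, e_i))   # 1.0
print(embedding_similarity(e_i, -e_i))  # 0.0
```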
{"title":"Corrigendum to “Deep spatial-temporal embedding for vehicle trajectory validation and refinement”","authors":"","doi":"10.1111/mice.13303","DOIUrl":"10.1111/mice.13303","url":null,"abstract":"&lt;p&gt;Zhang, T. T., Jin, P. J., Piccoli, B., &amp; Sartipi, M. (2024). Deep spatial-temporal embedding for vehicle trajectory validation and refinement. Computer-Aided Civil and Infrastructure Engineering, 39, 1597−1615. https://doi.org/10.1111/mice.13160&lt;/p&gt;&lt;p&gt;In the “Methodology” section, Equation (2) “&lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mi&gt;S&lt;/mi&gt;\u0000 &lt;mspace&gt;&lt;/mspace&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mo&gt;(&lt;/mo&gt;\u0000 &lt;mrow&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;i&lt;/mi&gt;\u0000 &lt;/msub&gt;\u0000 &lt;mo&gt;,&lt;/mo&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;j&lt;/mi&gt;\u0000 &lt;/msub&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;mo&gt;)&lt;/mo&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;mo&gt;=&lt;/mo&gt;\u0000 &lt;mfrac&gt;\u0000 &lt;mn&gt;1&lt;/mn&gt;\u0000 &lt;mn&gt;2&lt;/mn&gt;\u0000 &lt;/mfrac&gt;\u0000 &lt;mspace&gt;&lt;/mspace&gt;\u0000 &lt;mfenced&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mn&gt;1&lt;/mn&gt;\u0000 &lt;mo&gt;+&lt;/mo&gt;\u0000 &lt;mfrac&gt;\u0000 &lt;mrow&gt;\u0000 &lt;msubsup&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;i&lt;/mi&gt;\u0000 &lt;mi&gt;T&lt;/mi&gt;\u0000 &lt;/msubsup&gt;\u0000 &lt;mo&gt;∗&lt;/mo&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;j&lt;/mi&gt;\u0000 &lt;/msub&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;mrow&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;i&lt;/mi&gt;\u0000 &lt;/msub&gt;\u0000 &lt;mn&gt;2&lt;/mn&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;j&lt;/mi&gt;\u0000 &lt;/msub&gt;\u0000 &lt;mn&gt;2&lt;/mn&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;/mfrac&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;/mfenced&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;annotation&gt;$S ( {{{e}_i},{{e}_j}} ) = frac{1}{2} left( {1 + frac{{e_i^T{mathrm{*}}{{e}_j}}}{{{{e}_i}2{{{mathrm{e}}}_{mathrm{j}}}2}}} right)$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt;” was incorrect. The correct equation should have been written as “&lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mi&gt;S&lt;/mi&gt;\u0000 &lt;mspace&gt;&lt;/mspace&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mo&gt;(&lt;/mo&gt;\u0000 ","PeriodicalId":156,"journal":{"name":"Computer-Aided Civil and Infrastructure Engineering","volume":"39 16","pages":"2553"},"PeriodicalIF":8.5,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.13303","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141561375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing green splits in high-dimensional traffic signal control with trust region Bayesian optimization
IF 11.775 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-08 · DOI: 10.1111/mice.13293
Yunhai Gong, Shaopeng Zhong, Shengchuan Zhao, Feng Xiao, Wenwen Wang, Yu Jiang
Centralized traffic signal control has long been a challenging, high-dimensional optimization problem. This study establishes a simulation-based optimization framework and develops a novel optimization algorithm based on trust region Bayesian optimization (TuRBO), which can efficiently obtain an approximate optimal solution to the high-dimensional traffic signal control problem. A local Gaussian process (GP), a trust region, and Thompson sampling are employed in TuRBO and contribute considerably to its computational speed, solution quality, and scalability. Empirical studies are carried out using data from Mudanjiang and Chengdu, China. The performance of TuRBO is compared with that of Bayesian optimization (BO), a genetic algorithm, and random sampling. The results show that TuRBO converges the fastest because of its ability to balance exploration and exploitation through the trust region and Thompson sampling. Meanwhile, because TuRBO enables more efficient exploitation through the local GP, its solution quality significantly outperforms that of the other methods. The average waiting time achieved by TuRBO was 2.84% lower than that achieved by BO. Finally, the method has been successfully extended to a large network with a 233-dimensional search space and 122 signalized intersections, demonstrating that the developed methodology can handle high-dimensional traffic signal control effectively in real-world applications.
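A minimal sketch of the trust-region mechanism described above, with random sampling standing in for the local GP and Thompson sampling; the objective, bounds, and expand/shrink factors are placeholder assumptions rather than the authors' implementation:

```python
import numpy as np

def trust_region_search(objective, dim, n_iter=60, seed=0):
    """Toy trust-region search in the spirit of TuRBO: sample candidates
    inside a box around the incumbent, expand the box after an
    improvement, and shrink it otherwise."""
    rng = np.random.default_rng(seed)
    center = rng.uniform(0.0, 1.0, dim)       # e.g., normalized green splits
    best_val = objective(center)
    length = 0.4                               # trust-region edge length
    for _ in range(n_iter):
        cand = np.clip(center + rng.uniform(-0.5, 0.5, dim) * length, 0.0, 1.0)
        val = objective(cand)
        if val < best_val:                     # success: move and expand
            center, best_val = cand, val
            length = min(1.5 * length, 1.0)
        else:                                  # failure: shrink
            length = max(0.8 * length, 1e-3)
    return center, best_val

# Placeholder objective standing in for the simulated average waiting time.
waiting_time = lambda x: float(np.sum((x - 0.3) ** 2))
print(trust_region_search(waiting_time, dim=8))
```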
Citations: 0
Automatic generation of architecture drawings from point clouds
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-07 · DOI: 10.1111/mice.13302
Fengyu Zhang, Qingzhao Kong, Cheng Yuan, Peizhen Li

Traditional methods for producing architectural drawings require extensive manual labor. This paper proposes an automated method for generating a comprehensive set of three-view drawings, including the standardized labeling of doors and annotation of dimensions and areas. The output drawings are software-readable and editable, and the method is applicable to intricate structures with non-orthogonal or curved walls. To fully validate the accuracy of the proposed method, two distinct building scenarios were selected for experimentation.
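As a hedged illustration of the underlying geometry rather than the paper's pipeline, a point cloud can be rasterized onto three orthogonal planes to obtain the raw occupancy behind plan and elevation views; the resolution and the random cloud below are assumptions:

```python
import numpy as np

def three_view_projections(points: np.ndarray, resolution: float = 0.02):
    """Rasterize an (N, 3) point cloud into three occupancy images:
    plan (x-y), front elevation (x-z), and side elevation (y-z)."""
    views = {}
    for name, (a, b) in {"plan": (0, 1), "front": (0, 2), "side": (1, 2)}.items():
        coords = points[:, [a, b]]
        idx = np.floor((coords - coords.min(axis=0)) / resolution).astype(int)
        img = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
        img[idx[:, 0], idx[:, 1]] = 255       # mark occupied cells
        views[name] = img
    return views

# Example with a random cloud; a real cloud would come from a laser scan.
cloud = np.random.rand(10_000, 3) * np.array([10.0, 8.0, 3.0])
for name, img in three_view_projections(cloud).items():
    print(name, img.shape)
```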

Citations: 0
Ego-planning-guided multi-graph convolutional network for heterogeneous agent trajectory prediction
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-07 · DOI: 10.1111/mice.13301
Zihao Sheng, Zilin Huang, Sikai Chen

Accurate prediction of the future trajectories of traffic agents is a critical aspect of autonomous vehicle navigation. However, most existing approaches focus on predicting trajectories from a static roadside perspective, ignoring the influence of autonomous vehicles’ future plans on neighboring traffic agents. To address this challenge, this paper introduces EPG-MGCN, an ego-planning-guided multi-graph convolutional network. EPG-MGCN leverages graph convolutional networks and ego-planning guidance to predict the trajectories of heterogeneous traffic agents near the ego vehicle. The model captures interactions through multiple graph topologies from four distinct perspectives: distance, visibility, ego planning, and category. Additionally, it encodes the ego vehicle's planning information via the planning graph and a planning-guided prediction module. The model is evaluated on three challenging trajectory datasets: ApolloScape, nuScenes, and next generation simulation (NGSIM). Comparative evaluations against mainstream methods demonstrate its superior predictive capabilities and inference speed.
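A minimal sketch of the multi-graph idea, with one adjacency matrix per interaction view fused by a shared graph convolution; the thresholds, feature sizes, and the two example views are illustrative, and this is not the EPG-MGCN architecture:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def multi_graph_conv(X, adjacencies, weights):
    """One layer that averages graph convolutions over several graphs
    (e.g., distance, visibility, ego-planning, and category views)."""
    outputs = [normalized_adjacency(A) @ X @ W for A, W in zip(adjacencies, weights)]
    return np.maximum(np.mean(outputs, axis=0), 0.0)   # ReLU

n_agents, f_in, f_out = 6, 4, 8
X = np.random.rand(n_agents, f_in)                     # per-agent features
positions = np.random.rand(n_agents, 2) * 30.0
dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
A_distance = (dist < 15.0).astype(float)               # distance view
A_category = np.ones((n_agents, n_agents))             # same-category placeholder
weights = [np.random.rand(f_in, f_out) * 0.1 for _ in range(2)]
print(multi_graph_conv(X, [A_distance, A_category], weights).shape)  # (6, 8)
```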

Citations: 0
Cover Image, Volume 39, Issue 14
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-03 · DOI: 10.1111/mice.13296

The cover image is based on the Research Article Geoacoustic and geophysical data-driven seafloor sediment classification through machine learning algorithms with property-centered oversampling techniques by Junghee Park et al., https://doi.org/10.1111/mice.13126.

Citations: 0
Cover Image, Volume 39, Issue 14
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-03 · DOI: 10.1111/mice.13297

The cover image is based on the Research Article Urban risk assessment model to quantify earthquake-induced elevator passenger entrapment with population heatmap by Donglian Gu et al., https://doi.org/10.1111/mice.13287.

Citations: 0
Automated signal-based evaluation of dynamic cone resistance via machine learning for subsurface characterization
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-07-01 · DOI: 10.1111/mice.13294
Samuel Olamide Aregbesola, Yong-Hoon Byun

Dynamic cone resistance (DCR) is a recently introduced soil resistance index that has the unit of stress. It is determined from the dynamic response at the tip of an instrumented dynamic cone penetrometer. However, DCR evaluation is generally a manual, time-consuming, and error-prone process. Thus, this study investigates the feasibility of determining DCR using a stacked ensemble (SE) machine learning (ML) model that utilizes signals obtained from dynamic cone penetration testing. Two ML experiments revealed that using only force signals provides more accurate predictions of DCR. In addition, the SE technique outperformed the base learning algorithms in both cases. Overall, the findings suggest that ML techniques can be used to automate the analysis of DCR with force and acceleration signals.
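A hedged sketch of a stacked-ensemble regressor on force-signal features using scikit-learn; the base learners, synthetic features, and target are placeholders, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder features extracted from force signals (e.g., peak, impulse, duration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)   # synthetic DCR target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),          # meta-learner combines the base predictions
)
stack.fit(X_train, y_train)
print("R^2 on held-out data:", stack.score(X_test, y_test))
```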

Citations: 0
Computing-efficient video analytics for nighttime traffic sensing
IF 8.5 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-06-27 · DOI: 10.1111/mice.13295
Igor Lashkov, Runze Yuan, Guohui Zhang

The training workflow of neural networks can be quite complex, potentially time-consuming, and may require specific hardware to meet operational needs. This study presents a novel analytical video-based approach for vehicle tracking and vehicle volume estimation at nighttime using a monocular traffic surveillance camera installed over the road. To build this approach, we employ computer vision-based algorithms to detect vehicle objects and to perform vehicle tracking and counting in a predefined detection zone. To address low-illumination conditions, we adapt and employ image noise reduction techniques, image binary conversion, image projective transformation, and a set of heuristic reasoning rules to extract the headlights of each vehicle, pair those belonging to the same vehicle, and track moving candidate vehicle objects continuously across a sequence of video frames. The robustness of the proposed method was tested in various scenarios and environmental conditions using a publicly available vehicle dataset as well as our own labeled video data.
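A minimal sketch of the headlight-extraction step described above, thresholding bright blobs in a frame with OpenCV and pairing blobs that lie at a similar image height; the threshold values and the synthetic frame are assumptions:

```python
import cv2
import numpy as np

def detect_headlight_pairs(frame_bgr, min_area=40, max_dy=10, max_dx=200):
    """Threshold bright regions, keep blob centroids, and pair blobs on
    roughly the same image row as candidate headlight pairs."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                  # reduce sensor noise
    _, binary = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            centers.append((x + w / 2, y + h / 2))
    pairs = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dx = abs(centers[i][0] - centers[j][0])
            dy = abs(centers[i][1] - centers[j][1])
            if dy <= max_dy and dx <= max_dx:                 # same-vehicle heuristic
                pairs.append((centers[i], centers[j]))
    return pairs

frame = np.zeros((480, 640, 3), dtype=np.uint8)               # stand-in for a video frame
cv2.circle(frame, (200, 300), 8, (255, 255, 255), -1)
cv2.circle(frame, (260, 302), 8, (255, 255, 255), -1)
print(detect_headlight_pairs(frame))
```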

Citations: 0
A rendering-based lightweight network for segmentation of high-resolution crack images
IF 11.775 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-06-24 · DOI: 10.1111/mice.13290
Honghu Chu, Diran Yu, Weiwei Chen, Jun Ma, Lu Deng
High‐resolution (HR) crack images provide detailed structural assessments crucial for maintenance planning. However, the discrete nature of feature extraction in mainstream deep learning algorithms and computational limitations hinder refined segmentation. This study introduces a rendering‐based lightweight crack segmentation network (RLCSN) designed to efficiently predict refined masks for HR crack images. The RLCSN combines a deep semantic feature extraction architecture—merging Transformer with a super‐resolution boundary‐guided branch—to reduce environmental noise and preserve crack edge details. It also incorporates customized point‐wise refined rendering for training and inference, focusing computational resources on critical areas, and an efficient sparse training method to ensure efficient inference on commercial mobile computing platforms. Each RLCSN's components are validated through ablation studies and field tests, demonstrating its capability to enable unmanned aerial vehicle‐based inspections to detect cracks as narrow as 0.15 mm from a distance of 3 m, thereby enhancing inspection safety and efficiency.
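A hedged sketch of point-wise refined rendering in the PointRend spirit, not the RLCSN implementation: the most uncertain pixels of a coarse crack probability map are recomputed with a finer predictor, so compute is spent only where it matters; both predictors here are stand-ins:

```python
import numpy as np

def refine_uncertain_points(coarse_prob, fine_predictor, k=500):
    """Replace the k most uncertain pixels (probability closest to 0.5)
    of a coarse mask with predictions from a finer model."""
    uncertainty = -np.abs(coarse_prob - 0.5)                  # higher = less certain
    flat_idx = np.argsort(uncertainty.ravel())[-k:]           # k most uncertain pixels
    rows, cols = np.unravel_index(flat_idx, coarse_prob.shape)
    refined = coarse_prob.copy()
    refined[rows, cols] = fine_predictor(rows, cols)          # refine only these points
    return refined

coarse = np.random.rand(256, 256)                             # stand-in coarse output
fine = lambda r, c: (np.sin(0.1 * r) * np.cos(0.1 * c) > 0).astype(float)
print(refine_uncertain_points(coarse, fine).shape)            # (256, 256)
```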
Citations: 0
Modeling of spatially embedded networks via regional spatial graph convolutional networks
IF 11.775 · Tier 1, Engineering & Technology · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-06-20 · DOI: 10.1111/mice.13286
Xudong Fan, Jürgen Hackl
Efficient representation of complex infrastructure systems is crucial for system-level management tasks, such as edge prediction, component classification, and decision-making. However, the complex interactions between the infrastructure systems and their spatial environments increased the complexity of network representation learning. This study introduces a novel geometric-based multimodal deep learning model for spatially embedded network representation learning, namely the regional spatial graph convolutional network (RSGCN). The developed RSGCN model simultaneously learns from the node's multimodal spatial features. To evaluate the network representation performance, the introduced RSGCN model is used to embed different infrastructure networks into latent spaces and then reconstruct the networks. A synthetic network dataset, a California Highway Network, and a New Jersey Power Network were used as testbeds. The performance of the developed model is compared with two other state-of-the-art geometric deep learning models, GraphSAGE and Spatial Graph Convolutional Network. The results demonstrate the importance of considering regional information and the effectiveness of using novel graph convolutional neural networks for a more accurate representation of complex infrastructure systems.
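A minimal sketch of feeding multimodal node features (network attributes concatenated with regional spatial statistics) through one graph convolution; the layer, features, and sizes are assumptions, not the RSGCN architecture:

```python
import numpy as np

def gcn_layer(A, X, W):
    """Single graph convolution with symmetric normalization and ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ X @ W, 0.0)

# Spatially embedded network: e.g., pipes or road segments as nodes.
n_nodes = 8
A = (np.random.rand(n_nodes, n_nodes) > 0.7).astype(float)
A = np.maximum(A, A.T)                                     # undirected adjacency
node_attrs = np.random.rand(n_nodes, 3)                    # e.g., diameter, age, material code
regional_feats = np.random.rand(n_nodes, 5)                # e.g., elevation/land-use stats around each node
X = np.concatenate([node_attrs, regional_feats], axis=1)   # multimodal input
W = np.random.rand(X.shape[1], 16) * 0.1
embeddings = gcn_layer(A, X, W)
print(embeddings.shape)                                     # (8, 16)
```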
Citations: 0