A Lightweight Pre-Crash Occupant Injury Prediction Model Distills Knowledge From Its Post-Crash Counterpart.

IF 1.7 · CAS Zone 4 (Medicine) · Q4 (Biophysics) · Journal of Biomechanical Engineering-Transactions of the ASME · Pub Date: 2024-03-01 · DOI: 10.1115/1.4063033
Qingfan Wang, Ruiyang Li, Shi Shang, Qing Zhou, Bingbing Nie
{"title":"A Lightweight Pre-Crash Occupant Injury Prediction Model Distills Knowledge From Its Post-Crash Counterpart.","authors":"Qingfan Wang, Ruiyang Li, Shi Shang, Qing Zhou, Bingbing Nie","doi":"10.1115/1.4063033","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate occupant injury prediction in near-collision scenarios is vital in guiding intelligent vehicles to find the optimal collision condition with minimal injury risks. Existing studies focused on boosting prediction performance by introducing deep-learning models but encountered computational burdens due to the inherent high model complexity. To better balance these two traditionally contradictory factors, this study proposed a training method for pre-crash injury prediction models, namely, knowledge distillation (KD)-based training. This method was inspired by the idea of knowledge distillation, an emerging model compression method. Technically, we first trained a high-accuracy injury prediction model using informative post-crash sequence inputs (i.e., vehicle crash pulses) and a relatively complex network architecture as an experienced \"teacher\". Following this, a lightweight pre-crash injury prediction model (\"student\") learned both from the ground truth in output layers (i.e., conventional prediction loss) and its teacher in intermediate layers (i.e., distillation loss). In such a step-by-step teaching framework, the pre-crash model significantly improved the prediction accuracy of occupant's head abbreviated injury scale (AIS) (i.e., from 77.2% to 83.2%) without sacrificing computational efficiency. Multiple validation experiments proved the effectiveness of the proposed KD-based training framework. This study is expected to provide reference to balancing prediction accuracy and computational efficiency of pre-crash injury prediction models, promoting the further safety improvement of next-generation intelligent vehicles.</p>","PeriodicalId":54871,"journal":{"name":"Journal of Biomechanical Engineering-Transactions of the Asme","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Biomechanical Engineering-Transactions of the Asme","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1115/1.4063033","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"BIOPHYSICS","Score":null,"Total":0}
Citations: 0

Abstract

Accurate occupant injury prediction in near-collision scenarios is vital for guiding intelligent vehicles toward the collision condition with the lowest injury risk. Existing studies have focused on boosting prediction performance by introducing deep-learning models, but they encounter computational burdens due to the inherently high model complexity. To better balance these two traditionally contradictory factors, this study proposed a training method for pre-crash injury prediction models, namely knowledge distillation (KD)-based training. The method was inspired by knowledge distillation, an emerging model compression technique. Technically, we first trained a high-accuracy injury prediction model, using informative post-crash sequence inputs (i.e., vehicle crash pulses) and a relatively complex network architecture, to serve as an experienced "teacher". A lightweight pre-crash injury prediction model (the "student") then learned both from the ground truth at the output layer (conventional prediction loss) and from its teacher at intermediate layers (distillation loss). Within this step-by-step teaching framework, the pre-crash model significantly improved the prediction accuracy of the occupant's head Abbreviated Injury Scale (AIS), from 77.2% to 83.2%, without sacrificing computational efficiency. Multiple validation experiments demonstrated the effectiveness of the proposed KD-based training framework. This study is expected to provide a reference for balancing the prediction accuracy and computational efficiency of pre-crash injury prediction models, promoting further safety improvement of next-generation intelligent vehicles.
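The training scheme described in the abstract (a frozen, high-capacity post-crash "teacher" supervising a lightweight pre-crash "student" through a conventional prediction loss at the output layer plus a distillation loss at intermediate layers) can be sketched as follows. This is a minimal illustrative sketch in PyTorch; the network architectures, input dimensions, number of AIS classes, and loss weighting are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of KD-based training as described in the abstract (PyTorch).
# All architectures, tensor shapes, and hyperparameters are illustrative
# assumptions; the paper does not publish its exact networks or settings.
import torch
import torch.nn as nn

class TeacherNet(nn.Module):
    """Hypothetical high-capacity model fed with post-crash crash pulses."""
    def __init__(self, feat_dim=128, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, pulse):              # pulse: (B, 1, pulse_length)
        feat = self.encoder(pulse)         # intermediate feature to distill
        return self.head(feat), feat

class StudentNet(nn.Module):
    """Hypothetical lightweight model fed with pre-crash features."""
    def __init__(self, in_dim=10, feat_dim=128, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                  # x: (B, in_dim)
        feat = self.encoder(x)
        return self.head(feat), feat

def kd_training_step(student, teacher, pre_crash_x, post_crash_pulse,
                     ais_label, optimizer, alpha=0.5):
    """One student update: ground-truth loss + feature-level distillation loss."""
    teacher.eval()
    with torch.no_grad():                  # teacher is frozen during distillation
        _, t_feat = teacher(post_crash_pulse)
    s_logits, s_feat = student(pre_crash_x)
    pred_loss = nn.functional.cross_entropy(s_logits, ais_label)   # output layer
    distill_loss = nn.functional.mse_loss(s_feat, t_feat)          # intermediate layer
    loss = pred_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the teacher sees the post-crash pulse while the student sees only pre-crash quantities; the student's feature dimension is deliberately matched to the teacher's so that the intermediate-layer MSE term is well defined. The relative weight `alpha` between the two losses is a tunable assumption.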

Source journal
CiteScore: 3.40
Self-citation rate: 5.90%
Articles published: 169
Review time: 4-8 weeks
Journal scope: Artificial Organs and Prostheses; Bioinstrumentation and Measurements; Bioheat Transfer; Biomaterials; Biomechanics; Bioprocess Engineering; Cellular Mechanics; Design and Control of Biological Systems; Physiological Systems.
Latest articles in this journal
Image-Based Estimation of Left Ventricular Myocardial Stiffness.
Phenomenological Muscle Constitutive Model With Actin-Titin Binding for Simulating Active Stretching.
Regulatory Role of Collagen XI in the Establishment of Mechanical Properties of Tendons and Ligaments in Mice Is Tissue Dependent.
Study of the Mechanism of Perceived Rotational Acceleration of a Bionic Semicircular Canal on the Basis of the "Circular Geometry Hypothesis".
Walking Slope and Heavy Backpacks Affect Peak and Impulsive Lumbar Joint Contact Forces.