An intelligent detection method for keyframes in echocardiography
DOI:
Authors:
Affiliations: 1. School of Biomedical Engineering and Informatics, Nanjing Medical University; 2. Medical School of Nanjing University
Author biography:
Corresponding author:
CLC number:
Funding: Jiangsu Provincial Key Research and Development Program (BE2022828); Jiangsu Provincial Frontier Leading Technology Basic Research Special Fund (BK20222002)


An intelligent detection method for keyframes in echocardiography

Abstract:

Objective: To explore the feasibility of a deep learning ResNet+VST model for intelligent keyframe detection in echocardiography. Methods: A total of 663 dynamic images acquired by the Department of Ultrasound Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School (covering the three views most commonly used in clinical examination: apical two-chamber, apical three-chamber, and apical four-chamber) and 280 apical four-chamber dynamic echocardiography images from the public EchoNet-Dynamic dataset were selected to build the Nanjing Drum Tower Hospital dataset and the EchoNet-Dynamic-Tiny dataset, respectively. Within each category, images were split into training and test sets at a 4:1 ratio. The ResNet+VST model was trained and compared with several keyframe detection models to verify its advantages. Results: The ResNet+VST model detected end-diastolic and end-systolic frames more accurately. On the Nanjing Drum Tower Hospital dataset, its end-diastolic prediction frame differences on the apical two-chamber, three-chamber, and four-chamber views were 1.52±1.09, 1.62±1.43, and 1.27±1.17, respectively, and its end-systolic frame differences were 1.56±1.16, 1.62±1.43, and 1.45±1.38; on the EchoNet-Dynamic-Tiny dataset, its end-diastolic frame difference on the apical four-chamber view was 1.62±1.26 and its end-systolic frame difference was 1.71±1.18, better than the results reported in existing studies. The model also showed good real-time performance: on the two datasets, the average inference time for a 16-frame ultrasound clip on an RTX 3090 Ti GPU was 21 ms and 10 ms, respectively, faster than related work that performs temporal modeling with long short-term memory (LSTM) units, and basically meeting the demands of real-time clinical processing. Conclusion: The ResNet+VST model proposed in this study outperforms existing approaches in both the accuracy and the speed of echocardiographic keyframe detection. In principle, the model can be generalized to any ultrasound view, and it has the potential to help sonographers improve diagnostic efficiency.
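The frame differences reported above are, as commonly defined in keyframe-detection work, the absolute differences (in frames) between predicted and ground-truth end-diastolic (ED) or end-systolic (ES) frame indices, summarized as mean ± standard deviation over a test set. A minimal sketch of that computation follows; the function and variable names are illustrative, not taken from the paper.

```python
# Hedged sketch: mean ± std of |predicted - ground-truth| keyframe index,
# the metric the abstract reports (e.g. 1.52 ± 1.09 for apical two-chamber ED).
# Names (frame_difference_stats, pred_ed, true_ed) are assumptions for illustration.
import statistics

def frame_difference_stats(pred_frames, true_frames):
    """Return (mean, population std) of the per-clip frame differences."""
    diffs = [abs(p - t) for p, t in zip(pred_frames, true_frames)]
    return statistics.mean(diffs), statistics.pstdev(diffs)

# Made-up ED frame indices for four hypothetical test clips:
pred_ed = [12, 30, 7, 45]
true_ed = [11, 32, 7, 44]
mean_fd, std_fd = frame_difference_stats(pred_ed, true_ed)
print(f"ED frame difference: {mean_fd:.2f} ± {std_fd:.2f}")  # 1.00 ± 0.71
```

The same statistic would be computed separately per view (apical two-, three-, four-chamber) and per keyframe type (ED, ES), which is how the abstract's six paired values arise.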

History
  • Received: 2023-08-08
  • Revised: 2023-11-30
  • Accepted: 2023-12-28
  • Published online:
  • Published: