PT J
AU Yang, Yanwu
Ye, Chenfei
Guo, Xutao
Wu, Tao
Xiang, Yang
Ma, Ting
TI Mapping Multi-Modal Brain Connectome for Brain Disorder Diagnosis via
Cross-Modal Mutual Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 108
EP 121
DI 10.1109/TMI.2023.3294967
DT Article
PD JAN 2024
PY 2024
AB Recently, the study of the multi-modal brain connectome has grown
tremendously and has facilitated the diagnosis of brain disorders. In
this paradigm, functional and structural networks, e.g., functional
and structural connectivity derived from fMRI and DTI, interact with
each other but are not necessarily linearly related. Accordingly,
leveraging such complementary information for brain connectome
analysis remains a great challenge. Recently, Graph Neural Networks
(GNNs) have been widely applied to the fusion of multi-modal brain
connectomes. However, most existing GNN methods fail to couple
inter-modal relationships. In this regard, we propose a Cross-modal
Graph Neural Network (Cross-GNN) that captures inter-modal
dependencies through dynamic graph learning and mutual learning.
Specifically, the inter-modal representations are attentively coupled
into a compositional space for reasoning about inter-modal
dependencies. Additionally, we investigate mutual learning in explicit
and implicit ways: (1) cross-modal representations are obtained by
explicit cross-embedding based on the inter-modal correspondence
matrix, and (2) we propose a cross-modal distillation method to
implicitly regularize latent representations with cross-modal semantic
contexts. We carry out statistical analysis on the attentively learned
correspondence matrices to evaluate inter-modal relationships for
associating disease biomarkers. Our extensive experiments on three
datasets demonstrate the superiority of the proposed method for
disease diagnosis, with promising prediction performance and
multi-modal connectome biomarker localization.
RI Ye, Chenfei/K-9207-2016; Xiang, Yang/ABI-7171-2020; Wu, Tao/S-4505-2019; Ma, Ting/N-3320-2014; yanwu, yang/; Xiang, Yang/
OI Ma, Ting/0000-0001-8819-6228; yanwu, yang/0000-0002-7547-4580; Xiang,
Yang/0000-0003-1395-6805
TC 7
ZA 0
ZS 0
ZR 0
ZB 2
Z8 0
Z9 7
C1 Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen
518055, Peoples R China
C1 Peng Cheng Lab, Shenzhen 518066, Guangdong, Peoples R China
C1 Harbin Inst Technol Shenzhen, Int Res Inst Artificial Intelligence,
Shenzhen 518055, Peoples R China
C1 Capital Med Univ, Beijing Tiantan Hosp, Ctr Movement Disorders, Dept
Neurol, Beijing 100054, Peoples R China
C1 China Natl Clin Res Ctr Neurol Dis, Beijing 100070, Peoples R China
C1 Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Guangdong Prov
Key Lab Aerosp Commun & Networking, Shenzhen 518055, Peoples R China
C1 Harbin Inst Technol Shenzhen, Int Res Inst Artificial Intelligence,
Shenzhen 518055, Peoples R China
C3 China Natl Clin Res Ctr Neurol Dis
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600024
PM 37440391
ER
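The explicit cross-embedding step described in the abstract above can
be illustrated with a minimal sketch. This is not the authors'
implementation; the single softmax-normalized correspondence matrix,
the projection matrices, and all dimensions are illustrative
assumptions:

    import torch
    import torch.nn.functional as F

    def cross_modal_coupling(h_f, h_s, w_f, w_s):
        # h_f, h_s: (N, d) node embeddings for the functional and
        # structural modalities; w_f, w_s: (d, k) projections into a
        # shared compositional space.
        z_f, z_s = h_f @ w_f, h_s @ w_s
        corr = F.softmax(z_f @ z_s.t(), dim=-1)  # (N, N) soft correspondence
        h_f_cross = corr @ h_s       # cross-embed structural -> functional
        h_s_cross = corr.t() @ h_f   # cross-embed functional -> structural
        return h_f_cross, h_s_cross, corr

    N, d, k = 90, 64, 32             # e.g., 90 brain regions
    h_f, h_s = torch.randn(N, d), torch.randn(N, d)
    w_f, w_s = torch.randn(d, k), torch.randn(d, k)
    _, _, corr = cross_modal_coupling(h_f, h_s, w_f, w_s)

The learned corr matrix is the object on which the abstract's
statistical analysis of inter-modal relationships would operate.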
PT J
AU Yue, Zheng
Jiang, Jiayao
Hou, Wenguang
Zhou, Quan
David Spence, J
Fenster, Aaron
Qiu, Wu
Ding, Mingyue
TI Prior-knowledge Embedded U-Net based Fully Automatic Vessel Wall Volume
Measurement of the Carotid Artery in 3D Ultrasound Image.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3457245
DT Journal Article
PD 2024-Sep-10
PY 2024
AB The vessel-wall-volume (VWV) measured from three-dimensional (3D)
carotid artery (CA) ultrasound (US) images can help to assess carotid
atherosclerosis and manage patients at risk for stroke. Manual
involvement in the measurement is subjective and requires well-trained
operators, and fully automatic measurement tools are not yet
available. Therefore, we propose a fully automatic VWV measurement
framework (Auto-VWV) that uses a CA prior-knowledge embedded U-Net
(CAP-UNet) to measure the VWV from 3D CA US images without manual
intervention. The Auto-VWV framework is designed to improve the
consistency of repeated VWV measurements, resulting in the first fully
automatic framework for VWV measurement. CAP-UNet is developed to
improve segmentation accuracy on the whole CA; it is composed of a
U-Net type backbone and three additional prior-knowledge learning
modules. Specifically, a continuity learning module is used to learn
the spatial continuity of the arteries in a sequence of image slices,
a voxel evolution learning module is designed to learn the evolution
of the artery in adjacent slices, and a topology learning module is
used to learn the unique topology of the carotid artery. On two 3D CA
US datasets, the CAP-UNet architecture achieved state-of-the-art
performance compared to eight competing models. Furthermore, the
CAP-UNet-based Auto-VWV achieved better accuracy and consistency than
Auto-VWV based on the competing models in simulated repeated
measurements. Finally, using 10 pairs of real repeatedly scanned
samples, Auto-VWV achieved better VWV measurement reproducibility than
intra- and inter-operator manual measurements.
RI Spence, J. David/K-6396-2013; Fenster, Aaron/; Qiu, Wu/
OI Spence, J. David/0000-0001-7478-1098; Fenster,
Aaron/0000-0003-3525-2788; Qiu, Wu/0000-0001-7827-8270
Z8 0
ZS 0
TC 0
ZA 0
ZR 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-13
UT MEDLINE:39255086
PM 39255086
ER
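For context, the quantity being automated in the record above is
straightforward to compute once lumen and outer-wall segmentations
exist. A minimal sketch, assuming boolean masks and a known voxel
spacing (names and values are illustrative):

    import numpy as np

    def vessel_wall_volume(outer_mask, lumen_mask, voxel_size_mm):
        # Boolean 3D masks from a segmentation network; spacing in mm.
        wall_voxels = np.logical_and(outer_mask, ~lumen_mask).sum()
        return wall_voxels * float(np.prod(voxel_size_mm))  # VWV in mm^3

    outer = np.zeros((64, 128, 128), bool); outer[20:40, 40:90, 40:90] = True
    lumen = np.zeros_like(outer);           lumen[22:38, 50:80, 50:80] = True
    print(vessel_wall_volume(outer, lumen, (0.5, 0.2, 0.2)))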
PT J
AU Yang, Xiaoyu
Xu, Lijian
Yu, Simon
Xia, Qing
Li, Hongsheng
Zhang, Shaoting
TI Segmentation and Vascular Vectorization for Coronary Artery by
Geometry-based Cascaded Neural Network.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3435714
DT Journal Article
PD 2024-Jul-30
PY 2024
AB Segmentation of the coronary artery is an important task for the
quantitative analysis of coronary computed tomography angiography
(CCTA) images and has been stimulated by advances in deep learning.
However, the complex structure of the coronary artery, with its tiny
and narrow branches, poses a great challenge. Coupled with the low
resolution and poor contrast of medical images, fragmentation of
segmented vessels frequently occurs in predictions. Therefore, a
geometry-based cascaded segmentation method is proposed for the
coronary artery, with the following innovations: 1) Integrating
geometric deformation networks, we design a cascaded network that
segments the coronary artery and vectorizes the results. The generated
coronary artery meshes are continuous and accurate, even for twisted
and intricate coronary structures, and free of fragmentation. 2)
Unlike mesh annotations generated from voxel-based labels by the
traditional marching cubes method, a finer vectorized mesh of the
coronary artery is reconstructed with regularized morphology. The
novel mesh annotation benefits the geometry-based segmentation
network, avoiding bifurcation adhesion and point cloud dispersion in
intricate branches. 3) A dataset named CCA-200 is collected,
consisting of 200 CCTA images with coronary artery disease. The ground
truth for the 200 cases consists of coronary internal-diameter
annotations by professional radiologists. Extensive experiments verify
our method on our collected CCA-200 dataset and the public ASOCA
dataset, with a Dice of 0.778 on CCA-200 and 0.895 on ASOCA, showing
superior results. In particular, our geometry-based model generates an
accurate, intact and smooth coronary artery, devoid of any
fragmentation of segmented vessels.
RI Yu, Simon/D-1046-2011
OI Yu, Simon/0000-0002-8715-5026
Z8 0
TC 0
ZR 0
ZA 0
ZS 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-08-01
UT MEDLINE:39078771
PM 39078771
ER
PT J
AU Li, Zirong
Chang, Dingyue
Zhang, Zhenxi
Luo, Fulin
Liu, Qiegen
Zhang, Jianjia
Yang, Guang
Wu, Weiwen
TI Dual-domain Collaborative Diffusion Sampling for Multi-Source Stationary
Computed Tomography Reconstruction.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3420411
DT Journal Article
PD 2024-Jun-28
PY 2024
AB Multi-source stationary CT, in which both the detector and the
X-ray sources are fixed, represents a novel imaging system with high
temporal resolution that has garnered significant interest. Limited
space within the system restricts the number of X-ray sources, leading
to sparse-view CT imaging challenges. Recent diffusion models for
reconstructing sparse-view CT have generally focused on either the
sinogram or the image domain. Sinogram-centric models effectively
estimate missing projections but may introduce artifacts, lacking
mechanisms to ensure image correctness. Conversely, image-domain
models, while capturing detailed image features, often struggle with
complex data distributions, leading to inaccuracies in projections.
Addressing these issues, the Dual-domain Collaborative Diffusion
Sampling (DCDS) model integrates sinogram- and image-domain diffusion
processes for enhanced sparse-view reconstruction. This model combines
the strengths of both domains in an optimized mathematical framework.
A collaborative diffusion mechanism underpins this model, improving
sinogram recovery and image generative capabilities. This mechanism
facilitates feedback-driven image generation from the sinogram domain
and uses image-domain results to complete missing projections.
Optimization of the DCDS model is further achieved through an
alternating direction iteration method, focusing on data consistency
updates. Extensive testing, including numerical simulations, real
phantoms, and clinical cardiac datasets, demonstrates the DCDS model's
effectiveness. It consistently outperforms various state-of-the-art
benchmarks, delivering exceptional reconstruction quality and precise
sinograms.
RI Zhang, Jianjia/; Yang, Guang/S-5032-2016; Wu, Weiwen/R-9596-2018; Li, Zirong/KBQ-0125-2024
OI Zhang, Jianjia/0000-0001-5048-5606; Yang, Guang/0000-0001-7344-7733; Wu,
Weiwen/0000-0002-8295-5104; Li, Zirong/0000-0003-4005-4292
ZS 0
ZA 0
ZR 0
ZB 0
TC 1
Z8 0
Z9 1
EI 1558-254X
DA 2024-07-02
UT MEDLINE:38941197
PM 38941197
ER
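A heavily simplified sketch of the dual-domain alternation the
abstract above describes, with identity stubs standing in for the
learned sinogram- and image-domain diffusion steps (the stubs, the
angle grid, and the data-consistency rule are all assumptions;
skimage's radon/iradon merely provide projection operators):

    import numpy as np
    from skimage.transform import radon, iradon

    sino_step = lambda s: s   # stub for the sinogram-domain diffusion step
    img_step = lambda x: x    # stub for the image-domain diffusion step

    theta_full = np.linspace(0., 180., 180, endpoint=False)
    measured = theta_full[::6]                      # sparse views
    x_true = np.zeros((128, 128)); x_true[40:90, 40:90] = 1.0
    y = radon(x_true, theta=measured)               # measured sinogram

    x = iradon(y, theta=measured, output_size=128)  # crude initialization
    idx = np.searchsorted(theta_full, measured)
    for _ in range(10):
        s = sino_step(radon(x, theta=theta_full))   # complete missing views
        s[:, idx] = y                               # data-consistency update
        x = img_step(iradon(s, theta=theta_full, output_size=128))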
PT J
AU Chen, Wenting
Liu, Jie
Chow, Tommy W S
Yuan, Yixuan
TI STAR-RL: Spatial-temporal Hierarchical Reinforcement Learning for
Interpretable Pathology Image Super-Resolution.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3419809
DT Journal Article
PD 2024-Jun-27
PY 2024
AB Pathology images are essential for accurately interpreting lesion
cells in cytopathology screening, but acquiring high-resolution
digital slides requires specialized equipment and long scanning times.
Although super-resolution (SR) techniques can alleviate this problem,
existing deep learning models recover pathology images in a black-box
manner, which can lead to unfaithful biological details and
misdiagnosis. Additionally, current methods allocate the same
computational resources to recover each pixel of a pathology image,
leading to sub-optimal recovery due to the large variation across
pathology images. In this paper, we propose the first hierarchical
reinforcement learning framework, named Spatial-Temporal hierARchical
Reinforcement Learning (STAR-RL), mainly to address the aforementioned
issues in the pathology image super-resolution problem. We reformulate
the SR problem as a Markov decision process of interpretable
operations and adopt a hierarchical recovery mechanism at the patch
level to avoid sub-optimal recovery. Specifically, a higher-level
spatial manager is proposed to pick out the most corrupted patch for
the lower-level patch worker. Moreover, a higher-level temporal
manager is introduced to evaluate the selected patch and determine
whether the optimization should be stopped early, thereby avoiding
over-processing. Under the guidance of the spatial-temporal managers,
the lower-level patch worker processes the selected patch with
pixel-wise interpretable actions at each time step. Experimental
results on medical images degraded by different kernels show the
effectiveness of STAR-RL. Furthermore, STAR-RL yields a large
improvement in tumor diagnosis and shows generalizability under
various degradations. The source code is to be released.
RI Chow, Tommy W S/; Chen, Wenting/; LIU, Jie/; Yuan, Yixuan/KSL-8440-2024
OI Chow, Tommy W S/0000-0001-7051-0434; Chen, Wenting/0000-0002-7457-9540;
LIU, Jie/0000-0002-1327-1315; Yuan, Yixuan/0000-0002-0853-6948
ZB 0
TC 0
ZR 0
ZA 0
Z8 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-06-29
UT MEDLINE:38935476
PM 38935476
ER
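The spatial manager's job above, picking the most corrupted patch for
the patch worker, reduces at inference time to an argmax over
patch-aggregated scores. A toy sketch (the error map and patch size
are assumptions; the paper learns this selection policy rather than
computing it directly from a known error map):

    import numpy as np

    def pick_most_corrupted_patch(error_map, patch=32):
        H, W = error_map.shape
        # Sum the per-pixel degradation estimate within each patch.
        scores = error_map.reshape(H // patch, patch,
                                   W // patch, patch).sum((1, 3))
        i, j = np.unravel_index(scores.argmax(), scores.shape)
        return i * patch, j * patch   # top-left corner of the chosen patch

    print(pick_most_corrupted_patch(np.random.rand(128, 128)))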
PT J
AU Chen, Zhihao
Niu, Chuang
Gao, Qi
Wang, Ge
Shan, Hongming
TI LIT-Former: Linking In-Plane and Through-Plane Transformers for
Simultaneous CT Image Denoising and Deblurring
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1880
EP 1894
DI 10.1109/TMI.2024.3351723
DT Article
PD MAY 2024
PY 2024
AB This paper studies 3D low-dose computed tomography (CT) imaging.
Although various deep learning methods have been developed in this
context, they typically focus on 2D images and perform low-dose
denoising and super-resolution deblurring separately. To date, little
work has been done on simultaneous in-plane denoising and
through-plane deblurring, which is important for obtaining
high-quality 3D CT images with lower radiation and faster imaging
speed. For this task, a straightforward method is to directly train an
end-to-end 3D network. However, it demands much more training data and
incurs expensive computational costs. Here, we propose to link
in-plane and through-plane transformers for simultaneous in-plane
denoising and through-plane deblurring, termed LIT-Former, which can
efficiently synergize the in-plane and through-plane sub-tasks for 3D
CT imaging and enjoys the advantages of both convolution and
transformer networks. LIT-Former has two novel designs: efficient
multi-head self-attention modules (eMSM) and efficient convolutional
feed-forward networks (eCFN). First, eMSM integrates in-plane 2D
self-attention and through-plane 1D self-attention to efficiently
capture the global interactions of 3D self-attention, the core unit of
transformer networks. Second, eCFN integrates 2D convolution and 1D
convolution to extract the local information of 3D convolution in the
same fashion. As a result, the proposed LIT-Former synergizes these
two sub-tasks, significantly reducing computational complexity
compared to 3D counterparts and enabling rapid convergence. Extensive
experimental results on simulated and clinical datasets demonstrate
superior performance over state-of-the-art models. The source code is
available at https://github.com/hao1635/LIT-Former.
RI Chen, Zhihao/GWC-3331-2022; Gao, Qi/KRQ-8680-2024; Shan, Hongming/AAG-3878-2019; Wang, Ge/AAH-8592-2020; Gao, Qi/; Wang, Ge/; Niu, Chuang/; Shan, Hongming/; Chen, Zhihao/
OI Gao, Qi/0009-0002-2106-8445; Wang, Ge/0000-0002-2656-7705; Niu,
Chuang/0000-0002-3310-7803; Shan, Hongming/0000-0002-0604-3197; Chen,
Zhihao/0000-0002-0133-0509
ZS 0
ZR 0
Z8 0
TC 2
ZB 0
ZA 0
Z9 2
C1 Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, Shanghai
200433, Peoples R China
C1 Rensselaer Polytech Inst, Biomed Imaging Ctr, Ctr Biotechnol &
Interdisciplinary Studies, Ctr Computat Innovat,Dept Biomed Engn, Troy,
NY 12180 USA
C1 Rensselaer Polytech Inst, Sch Engn, Troy, NY 12180 USA
C1 Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, MOE
Frontiers Ctr Brain Sci, Shanghai 200433, Peoples R China
C1 Fudan Univ, Key Lab Computat Neurosci & Brain Inspired Intelli, Shanghai
200433, Peoples R China
C1 Shanghai Ctr Brain Sci & Brain Inspired Technol, Shanghai 201210,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800024
PM 38194396
ER
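The eMSM idea above, factorizing 3D self-attention into in-plane 2D
attention plus through-plane 1D attention, can be sketched as follows.
This is a minimal stand-in under assumed dimensions, not the released
code; the use of plain nn.MultiheadAttention is an assumption:

    import torch
    import torch.nn as nn

    class EMSMSketch(nn.Module):
        def __init__(self, dim, heads=4):
            super().__init__()
            self.inplane = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.through = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                       # x: (B, D, H, W, C)
            B, D, H, W, C = x.shape
            t = x.reshape(B * D, H * W, C)          # 2D attention per slice
            t, _ = self.inplane(t, t, t)
            x = t.reshape(B, D, H, W, C)
            t = x.permute(0, 2, 3, 1, 4).reshape(B * H * W, D, C)
            t, _ = self.through(t, t, t)            # 1D attention across slices
            return t.reshape(B, H, W, D, C).permute(0, 3, 1, 2, 4)

    out = EMSMSketch(32)(torch.randn(1, 8, 16, 16, 32))

Attention then runs over H*W tokens plus D tokens instead of D*H*W
tokens jointly, which is the efficiency argument the abstract makes.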
PT J
AU Liu, Pan
Huang, Gao
Jing, Jing
Bian, Suyan
Cheng, Liuquan
Lu, Xin Yang
Rao, Chongyou
Liu, Yu
Hua, Yun
Wang, Yongjun
He, Kunlun
TI An Energy Matching Vessel Segmentation Framework in 3-D Medical Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1476
EP 1488
DI 10.1109/TMI.2023.3339204
DT Article
PD APR 2024
PY 2024
AB Accurate vascular segmentation from High Resolution 3-Dimensional (HR3D)
medical scans is crucial for clinicians to visualize complex vasculature
and diagnose related vascular diseases. However, a reliable and scalable
vessel segmentation framework for HR3D scans remains a challenge. In
this work, we propose a High-resolution Energy-matching Segmentation
(HrEmS) framework that utilizes deep learning to directly process the
entire HR3D scan and segment the vasculature to the finest level. The
HrEmS framework introduces two novel components. Firstly, it uses the
real-order total variation operator to construct a new loss function
that guides the segmentation network to obtain the correct topological
structure by matching the energy of the predicted segment to that of
the manual label. This differs from traditional loss functions such as
the Dice loss, which match pixels between the predicted segment and
the manual label. Secondly, a curvature-based weight-correction module
is developed, which directs the network to focus on crucial and complex
structural parts of the vasculature instead of the easy parts. The
proposed HrEmS framework was tested on three in-house multi-center
datasets and three public datasets, and demonstrated improved results in
comparison with the state-of-the-art methods using both
topology-relevant and volumetric-relevant metrics. Furthermore, a
double-blind assessment by three experienced radiologists on the
critical points of the clinical diagnostic processes provided additional
evidence of the superiority of the HrEmS framework.
RI He, Kunlun/IXN-1108-2023; bian, suyan/GZL-4911-2022; Yan, Jing/JFA-6705-2023; Liu, Pan/; Jing, Jing/KDN-4352-2024; rao, chongyou/; Huang, Gao/
OI Liu, Pan/0000-0002-7578-1556; Jing, Jing/0000-0001-9822-5758; rao,
chongyou/0000-0002-6405-0862; Huang, Gao/0000-0002-7251-0988
ZA 0
ZB 0
ZS 0
ZR 0
TC 1
Z8 0
Z9 1
C1 Chinese Peoples Liberat Army Gen Hosp, Med Big Data Res Ctr, Beijing
100853, Peoples R China
C1 Tsinghua Univ, Dept Automat, Beijing, Peoples R China
C1 Capital Med Univ, Natl Clin Res Ctr Neurol Dis, Beijing 100070, Peoples
R China
C1 Capital Med Univ, Beijing Tiantan Hosp, Dept Neurol, Beijing, Peoples R
China
C1 Chinese Peoples Liberat Army Gen Hosp, Med Ctr 2, Beijing, Peoples R
China
C1 Chinese Peoples Liberat Army Gen Hosp, Natl Clin Res Ctr Geriatr Dis,
Dept Cardiol, Beijing 100853, Peoples R China
C1 Chinese Peoples Liberat Army Gen Hosp, Med Ctr 6, Dept Radiol, Beijing
100853, Peoples R China
C1 Lakehead Univ, Dept Math Sci, Thunder Bay, ON P7E 5E1, Canada
SN 0278-0062
EI 1558-254X
DA 2024-07-12
UT WOS:001196733400003
PM 38048240
ER
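The energy-matching idea above can be illustrated with a first-order
total variation stand-in; the paper's real-order (fractional) TV
operator is not implemented here, so this is a simplified sketch only:

    import torch

    def tv_energy(x):                  # x: (B, 1, D, H, W) volume
        dz = (x[:, :, 1:] - x[:, :, :-1]).abs().sum()
        dy = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().sum()
        dx = (x[:, :, :, :, 1:] - x[:, :, :, :, :-1]).abs().sum()
        return dz + dy + dx

    def energy_matching_loss(pred, label):
        # Match the energy of the prediction to that of the label,
        # rather than matching pixels as the Dice loss does.
        return (tv_energy(pred) - tv_energy(label)).abs()

    pred = torch.rand(1, 1, 16, 64, 64, requires_grad=True)
    label = torch.rand(1, 1, 16, 64, 64)
    energy_matching_loss(pred, label).backward()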
PT J
AU Miao, Juzheng
Zhou, Si-Ping
Zhou, Guang-Quan
Wang, Kai-Ni
Yang, Meng
Zhou, Shoujun
Chen, Yang
TI SC-SSL: Self-Correcting Collaborative and Contrastive Co-Training Model
for Semi-Supervised Medical Image Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1347
EP 1364
DI 10.1109/TMI.2023.3336534
DT Article
PD APR 2024
PY 2024
AB Image segmentation achieves significant improvements with deep
neural networks on the premise of large-scale labeled training data,
which is laborious to obtain in medical imaging tasks. Recently,
semi-supervised learning (SSL) has shown great potential in medical
image segmentation. However, these SSL methods usually neglect the
influence of the quality of the learning target for unlabeled data.
Therefore, this study proposes a novel self-correcting co-training
scheme to learn a better target, one more similar to the ground-truth
labels, from collaborative network outputs. Our work has three
highlights. First, we advance learning target generation as a learning
task, improving the learning confidence for unannotated data with a
self-correcting module. Second, we impose a structure constraint to
further encourage shape similarity between the improved learning
target and the collaborative network outputs. Finally, we propose an
innovative pixel-wise contrastive learning loss to boost the
representation capacity under the guidance of the improved learning
target, thus exploring unlabeled data more efficiently with awareness
of the semantic context. We have extensively evaluated our method
against state-of-the-art semi-supervised approaches on four publicly
available datasets, including the ACDC dataset, M&Ms dataset,
Pancreas-CT dataset, and Task_07 CT dataset. The experimental results
with different labeled-data ratios show our proposed method's
superiority over other existing methods, demonstrating its
effectiveness in semi-supervised medical image segmentation.
RI Chen, Yu/Y-3292-2019; Miao, Juzheng/HDN-3261-2022; Zhou, Shoujun/; Wang, Kaini/; Zhou, Guangquan/P-2153-2015
OI Zhou, Shoujun/0000-0003-3232-6796; Wang, Kaini/0000-0002-4000-188X;
Zhou, Guangquan/0000-0002-6467-3592
ZA 0
ZS 0
TC 2
ZB 1
Z8 0
ZR 0
Z9 2
C1 Southeast Univ, Sch Biol Sci & Med Engn, Nanjing 211189, Peoples R China
C1 Southeast Univ, Jiangsu Key Lab Biomat & Devices, Nanjing 211189,
Peoples R China
C1 Chinese Acad Med Sci & Peking Union Med Coll, Dept Ultrasound, State Key
Lab Complex Severe & Rare Dis, Beijing 100006, Peoples R China
C1 Chinese Acad Med Sci & Peking Union Med Coll, Peking Union Med Coll
Hosp, Beijing 100006, Peoples R China
C1 Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R
China
C1 Southeast Univ, Sch Comp Sci & Engn, Lab Image Sci & Technol, Jiangsu
Prov Joint Int Res Lab Med Informat Proc,M, Nanjing 210096, Peoples R
China
C1 Southeast Univ, Key Lab New Generat Artificial Intelligence Techno,
Minist Educ, Nanjing 210096, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-06
UT WOS:001196733400021
PM 37995173
ER
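A minimal sketch of fusing two collaborative-network outputs into a
corrected learning target, as the abstract above describes. The
confidence-weighted fusion below is a simple stand-in for the paper's
learned self-correcting module; all names are illustrative:

    import torch
    import torch.nn.functional as F

    def co_training_target(logits_a, logits_b):
        p_a, p_b = logits_a.softmax(1), logits_b.softmax(1)
        c_a = p_a.max(1, keepdim=True).values  # per-pixel confidence, net A
        c_b = p_b.max(1, keepdim=True).values  # per-pixel confidence, net B
        w = c_a / (c_a + c_b + 1e-8)
        return (w * p_a + (1 - w) * p_b).detach()

    la, lb = torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64)
    target = co_training_target(la, lb)
    loss = F.kl_div(la.log_softmax(1), target, reduction="batchmean")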
PT J
AU Liu, Jicheng
Liu, Hui
Fu, Huazhu
Ye, Yu
Chen, Kun
Lu, Yu
Mao, Jianbo
Xu, Ronald X.
Sun, Mingzhai
TI Edge-Guided Contrastive Adaptation Network for Arteriovenous Nicking
Classification Using Synthetic Data
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1237
EP 1246
DI 10.1109/TMI.2023.3332168
DT Article
PD MAR 2024
PY 2024
AB Retinal arteriovenous nicking (AVN) manifests as a reduced venular
caliber of an arteriovenous crossing. AVNs are signs of many systemic
diseases, particularly cardiovascular disease. Studies have shown that
people with AVN are twice as likely to have a stroke. However, AVN
classification faces two challenges. One is the lack of data,
especially of AVNs compared to normal arteriovenous (AV) crossings.
The other is the significant intra-class variations and minute
inter-class differences. AVNs may look different in shape, scale,
pose, and color. On the other hand, an AVN may differ from a normal AV
crossing only by a slight thinning of the vein. To address these
challenges, we first develop a data synthesis method to generate AV
crossings, both normal and AVN. Second, to mitigate the domain
shift between the synthetic and real data, an edge-guided unsupervised
domain adaptation network is designed to guide the transfer of domain
invariant information. Third, a semantic contrastive learning branch
(SCLB) is introduced and a set of semantically related images, as a
semantic triplet, are input to the network simultaneously to guide the
network to focus on the subtle differences in venular width and to
ignore the differences in appearance. These strategies effectively
mitigate the lack of data, domain shift between synthetic and real data,
and significant intra- but minute inter-class differences. Extensive
experiments have been performed to demonstrate the outstanding
performance of the proposed method.
RI Sun, Mingzhai/B-4945-2012; Chen, Kun/; Fu, Huazhu/A-1411-2014
OI Chen, Kun/0000-0002-4121-1221; Fu, Huazhu/0000-0002-9702-5524
ZB 0
ZR 0
Z8 0
ZS 0
ZA 0
TC 0
Z9 0
C1 Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Jiangsu,
Peoples R China
C1 Univ Sci & Technol China, Sch Biomed Engn, Div Life Sci & Med, Hefei
230026, Anhui, Peoples R China
C1 Univ Sci & Technol China, Sch Engn Sci, Dept Precis Machinery & Precis
Instrumentat, Hefei 230026, Peoples R China
C1 Agcy Sci Technol & Res, Inst High Performance Comp, Singapore 138632,
Singapore
C1 Orbis Biotech Co Ltd, Hefei 231100, Peoples R China
C1 Hangzhou Med Coll, Affiliated Peoples Hosp, Zhejiang Prov Peoples Hosp,
Ctr Rehabil Med,Dept Ophthalmol, Hangzhou 310014, Zhejiang, Peoples R
China
C3 Orbis Biotech Co Ltd
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600009
PM 37956005
ER
PT J
AU Spieker, Veronika
Eichhorn, Hannah
Hammernik, Kerstin
Rueckert, Daniel
Preibisch, Christine
Karampinos, Dimitrios C.
Schnabel, Julia A.
TI Deep Learning for Retrospective Motion Correction in MRI: A
Comprehensive Review
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 846
EP 859
DI 10.1109/TMI.2023.3323215
DT Article
PD FEB 2024
PY 2024
AB Motion represents one of the major challenges in magnetic resonance
imaging (MRI). Since the MR signal is acquired in frequency space, any
motion of the imaged object leads to complex artefacts in the
reconstructed image in addition to other MR imaging artefacts. Deep
learning has been frequently proposed for motion correction at several
stages of the reconstruction process. The wide range of MR acquisition
sequences, anatomies and pathologies of interest, and motion patterns
(rigid vs. deformable and random vs. regular) makes a comprehensive
solution unlikely. To facilitate the transfer of ideas between different
applications, this review provides a detailed overview of proposed
methods for learning-based motion correction in MRI together with their
common challenges and potential. This review identifies differences and
synergies in underlying data usage, architectures, training and
evaluation strategies. We critically discuss general trends and outline
future directions, with the aim of enhancing interaction between
different application areas and research fields.
RI Schnabel, Julia/ACE-6863-2022; Preibisch, Christine/AAD-8851-2020; Rueckert, Daniel/C-4393-2008; Rueckert, Daniel/; Hammernik, Kerstin/; Eichhorn, Hannah/; Schnabel, Julia Anne/; Preibisch, Christine/
OI Rueckert, Daniel/0000-0002-5683-5889; Hammernik,
Kerstin/0000-0002-2734-1409; Eichhorn, Hannah/0000-0001-6980-9703;
Schnabel, Julia Anne/0000-0001-6107-3009; Preibisch,
Christine/0000-0003-4067-1928
TC 6
ZA 0
ZB 1
ZR 0
ZS 0
Z8 0
Z9 6
C1 Helmholtz Munich, Inst Machine Learning Biomed Imaging, D-85764 Munich,
Germany
C1 Tech Univ Munich, Sch Computat Informat & Technol, D-80333 Munich,
Germany
C1 Helmholtz Munich, Inst Machine Learning Biomed Imaging, D-85764 Munich,
Germany
C1 Imperial Coll London, Dept Comp, London SW7 2AZ, England
C1 Tech Univ Munich, Sch Med & Hlth, Klinikum Rechts Isar, D-80333 Munich,
Germany
C1 Tech Univ Munich, Sch Med & Hlth, Dept Neuroradiol, Klinikum Rechts
Isar, D-80333 Munich, Germany
C1 Tech Univ Munich, Sch Med & Hlth, Dept Diagnost & Intervent Radiol,
Klinikum rechts Isar, D-80333 Munich, Germany
C1 Kings Coll London, Sch Biomed Imaging & Imaging Sci, London WC2R 2LS,
England
C3 Helmholtz Munich
C3 Helmholtz Munich
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400015
PM 37831582
ER
PT J
AU Cui, Zhuo-Xu
Liu, Congcong
Fan, Xiaohong
Cao, Chentao
Cheng, Jing
Zhu, Qingyong
Liu, Yuanyuan
Jia, Sen
Wang, Haifeng
Zhu, Yanjie
Zhou, Yihang
Zhang, Jianping
Liu, Qiegen
Liang, Dong
TI Physics-Informed DeepMRI: k-Space Interpolation Meets Heat Diffusion.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3462988
DT Journal Article
PD 2024-Sep-18
PY 2024
AB Recently, diffusion models have shown considerable promise for MRI
reconstruction. However, extensive experimentation has revealed that
these models are prone to generating artifacts due to the inherent
randomness involved in generating images from pure noise. To achieve
more controlled image reconstruction, we reexamine the concept of
interpolatable physical priors in k-space data, focusing specifically on
the interpolation of high-frequency (HF) k-space data from low-frequency
(LF) k-space data. Broadly, this insight drives a shift in the
generation paradigm from random noise to a more deterministic approach
grounded in the existing LF k-space data. Building on this, we first
establish a relationship between the interpolation of HF k-space data
from LF k-space data and the reverse heat diffusion process, providing a
fundamental framework for designing diffusion models that generate
missing HF data. To further improve reconstruction accuracy, we
integrate a traditional physics-informed k-space interpolation model
into our diffusion framework as a data fidelity term. Experimental
validation using publicly available datasets demonstrates that our
approach significantly surpasses traditional k-space interpolation
methods, deep learning-based k-space interpolation techniques, and
conventional diffusion models, particularly in HF regions. Finally, we
assess the generalization performance of our model across various
out-of-distribution datasets. Our code is available at
https://github.com/ZhuoxuCui/Heat-Diffusion.
OI Cao, Chentao/0000-0002-3974-3413; Fan, Xiaohong/0000-0003-0265-7279;
Cui, Zhuo-Xu/0000-0001-9283-881X; Cheng, Jing/0000-0001-9098-8048
ZS 0
ZB 0
Z8 0
TC 0
ZA 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-09-21
UT MEDLINE:39292579
PM 39292579
ER
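The relationship the abstract above builds on is that heat diffusion
acts in k-space as multiplication by exp(-t|k|^2): running it forward
suppresses high frequencies, so reversing it amounts to interpolating
HF k-space from the LF content. A small numpy illustration (grid size
and diffusion time are arbitrary choices, not the paper's settings):

    import numpy as np

    x = np.zeros((128, 128)); x[48:80, 48:80] = 1.0
    k = np.fft.fftshift(np.fft.fft2(x))
    f = np.fft.fftshift(np.fft.fftfreq(128))
    ky, kx = np.meshgrid(f, f, indexing="ij")
    t = 50.0
    k_diffused = k * np.exp(-t * (kx**2 + ky**2))   # forward heat diffusion
    x_blurred = np.fft.ifft2(np.fft.ifftshift(k_diffused)).real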
PT J
AU Jiang, Yikun
Pei, Yuru
Xu, Tianmin
Yuan, Xiaoru
Zha, Hongbin
TI Towards Semantically-Consistent Deformable 2D-3D Registration for 3D
Craniofacial Structure Estimation from A Single-View Lateral
Cephalometric Radiograph.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3456251
DT Journal Article
PD 2024-Sep-09
PY 2024
AB Deep neural networks combined with statistical shape models have
enabled efficient deformable 2D-3D registration and recovery of 3D
anatomical structures from a single radiograph. However, the recovered
volumetric image tends to lack the volumetric fidelity of fine-grained
anatomical structures and explicit consideration of cross-dimensional
semantic correspondence. In this paper, we introduce a simple but
effective solution for semantically-consistent deformable 2D-3D
registration and detailed volumetric image recovery by inferring a
voxel-wise registration field between cone-beam computed tomography
and a single lateral cephalometric radiograph (LC). The key idea is to
refine the initial statistical model-based registration field with
craniofacial structural details and semantic consistency from the LC.
Specifically, our framework employs a self-supervised scheme to learn a
voxel-level refiner of registration fields to provide fine-grained
craniofacial structural details and volumetric fidelity. We also present
a weakly supervised semantic consistency measure for semantic
correspondence, relaxing the requirements for volumetric image
collection and annotation. Experiments show that our method
achieves deformable 2D-3D registration with performance gains over
state-of-the-art registration and radiograph-based volumetric
reconstruction methods. The source code is available at
https://github.com/Jyk-122/SC-DREG.
OI Yuan, Xiaoru/0000-0002-7233-980X
Z8 0
ZS 0
TC 0
ZR 0
ZB 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-09-11
UT MEDLINE:39250375
PM 39250375
ER
PT J
AU Zhang, Jianjia
Mao, Haiyang
Chang, Dingyue
Yu, Hengyong
Wu, Weiwen
Shen, Dinggang
TI Adaptive and Iterative Learning With Multi-Perspective Regularizations
for Metal Artifact Reduction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3354
EP 3365
DI 10.1109/TMI.2024.3395348
DT Article
PD SEP 2024
PY 2024
AB Metal artifact reduction (MAR) is important for clinical diagnosis
with CT images. Existing state-of-the-art deep learning methods
usually suppress metal artifacts in the sinogram domain, the image
domain, or both. However, their performance is limited by the inherent
characteristics of the two domains: errors introduced by local
manipulations in the sinogram domain propagate throughout the whole
image during backprojection and lead to serious secondary artifacts,
while in the image domain it is difficult to distinguish artifacts
from actual image features. To alleviate these limitations, this study
analyzes the desirable properties of the wavelet transform in depth
and proposes to perform MAR in the wavelet domain. First, the wavelet
transform yields components that retain spatial correspondence with
the image, thereby preventing the spread of local errors and avoiding
secondary artifacts. Second, the wavelet transform facilitates
identifying artifacts in the image, since metal artifacts are mainly
high-frequency signals. Exploiting these advantages of the wavelet
transform, this paper decomposes an image into multiple wavelet
components and introduces multi-perspective regularizations into the
proposed MAR model. To improve the transparency and validity of the
model, all modules in the proposed MAR model are designed to reflect
their mathematical meanings. In addition, an adaptive wavelet module
is utilized to enhance the flexibility of the model. To optimize the
model, an iterative algorithm is developed. Evaluations on both
synthetic and real clinical datasets consistently confirm the superior
performance of the proposed method over competing methods.
RI Wu, Weiwen/AAF-6920-2019; Chang, Dingyue/HLX-7426-2023; Wu, Weiwen/R-9596-2018; Zhang, Jianjia/
OI Wu, Weiwen/0000-0002-8295-5104; Zhang, Jianjia/0000-0001-5048-5606
ZR 0
ZB 0
ZS 0
TC 0
ZA 0
Z8 0
Z9 0
C1 Sun Yat Sen Univ, Sch Biomed Engn, Shenzhen Campus, Shenzhen 518100,
Guangdong, Peoples R China
C1 China Acad Engn Phys, Inst Mat, Mianyang 621700, Peoples R China
C1 Univ Massachusetts Lowell, Dept Elect & Comp Engn, Lowell, MA 01854 USA
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 ShanghaiTech Univ, State Key Lab Adv Med Mat & Devices, Shanghai 201210,
Peoples R China
C1 Shanghai United Imaging Intelligence Co Ltd, Shanghai 200230, Peoples R
China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 201210, Peoples R China
C3 Shanghai United Imaging Intelligence Co Ltd
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600022
PM 38687653
ER
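The two wavelet properties the abstract above relies on, spatial
correspondence of subbands with the image and the high-frequency
nature of metal artifacts, can be seen in a one-level decomposition.
A sketch only: pywt is an assumed dependency, and the 0.5 scaling is a
toy stand-in for the model's learned multi-perspective regularization:

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)                   # stand-in CT slice
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")        # LF + 3 HF subbands
    cH, cV, cD = 0.5 * cH, 0.5 * cV, 0.5 * cD        # act on HF subbands only
    img_rec = pywt.idwt2((cA, (cH, cV, cD)), "haar") # spatially local effect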
PT J
AU Kyung, Sunggu
Won, Jongjun
Pak, Seongyong
Kim, Sunwoo
Lee, Sangyoon
Park, Kanggil
Hong, Gil-Sun
Kim, Namkug
TI Generative Adversarial Network with Robust Discriminator Through
Multi-Task Learning for Low-Dose CT Denoising.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3449647
DT Journal Article
PD 2024-Aug-26
PY 2024
AB Reducing the radiation dose in computed tomography (CT) is vital to
decreasing secondary cancer risk. However, the use of low-dose CT
(LDCT) images is accompanied by increased noise that can negatively
impact diagnoses. Although numerous deep learning algorithms have been
developed for LDCT denoising, several challenges persist, including
the visual incongruence experienced by radiologists, unsatisfactory
performance across various metrics, and insufficient exploration of
the networks' robustness in other CT domains. To address such issues,
this study proposes three novel contributions. First, we propose a
generative adversarial network (GAN) with a discriminator made robust
through multi-task learning that simultaneously performs three vision
tasks: restoration, image-level decisions, and pixel-level decisions.
The more tasks the discriminator performs, the better the denoising
performance of the generator; that is, multi-task learning enables the
discriminator to provide more meaningful feedback to the generator.
Second, two regulatory mechanisms, restoration consistency (RC) and
non-difference suppression (NDS), are introduced to improve the
discriminator's representation capabilities. These mechanisms
eliminate irrelevant regions and compare the discriminator's results
on the input and the restoration, thus facilitating effective GAN
training. Lastly, we incorporate residual fast Fourier transform with
convolution (Res-FFT-Conv) blocks into the generator to utilize both
frequency and spatial representations. This approach provides mixed
receptive fields by using spatial (or local), spectral (or global),
and residual connections. Our model was evaluated using various pixel-
and feature-space metrics in two denoising tasks. Additionally, we
conducted visual scoring with radiologists. The results indicate
superior performance in both quantitative and qualitative measures
compared to state-of-the-art denoising techniques.
OI Lee, Sangyoon/0009-0005-8825-5175; Kim, Namkug/0000-0002-3438-2217
ZA 0
ZR 0
ZS 0
Z8 0
TC 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-08-29
UT MEDLINE:39186436
PM 39186436
ER
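A compact sketch of a discriminator with the three heads the abstract
above lists: restoration plus image-level and pixel-level decisions.
Layer sizes are arbitrary assumptions, and the RC/NDS mechanisms are
not shown:

    import torch
    import torch.nn as nn

    class MultiTaskDiscriminator(nn.Module):
        def __init__(self, ch=32):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2))
            self.restore = nn.Conv2d(ch, 1, 3, padding=1)  # restoration head
            self.pixel = nn.Conv2d(ch, 1, 1)               # pixel-level head
            self.image = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                       nn.Flatten(), nn.Linear(ch, 1))

        def forward(self, x):
            f = self.backbone(x)
            return self.restore(f), self.pixel(f), self.image(f)

    rest, pix, img = MultiTaskDiscriminator()(torch.randn(2, 1, 64, 64))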
PT J
AU Luo, Mengting
Zhou, Nan
Wang, Tao
He, Linchao
Wang, Wang
Chen, Hu
Liao, Peixi
Zhang, Yi
TI Bi-Constraints Diffusion: A Conditional Diffusion Model with Degradation
Guidance for Metal Artifact Reduction.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3442950
DT Journal Article
PD 2024-Aug-15
PY 2024
AB In recent years, score-based diffusion models have emerged as effective
tools for estimating score functions from empirical data distributions,
particularly in integrating implicit priors with inverse problems like
CT reconstruction. However, score-based diffusion models are rarely
explored in challenging tasks such as metal artifact reduction (MAR). In
this paper, we introduce the Bi-Constraints Diffusion Model for Metal
Artifact Reduction (BCDMAR), an innovative approach that enhances
iterative reconstruction with a conditional diffusion model for MAR.
This method employs a metal artifact degradation operator in place of
the traditional metal-excluded projection operator in the data-fidelity
term, thereby preserving structural details around metal regions.
However, score-based diffusion models tend to be susceptible to grayscale
shifts and unreliable structures, making it challenging to reach an
optimal solution. To address this, we utilize a precorrected image as a
prior constraint, guiding the generation of the score-based diffusion
model. By iteratively applying the score-based diffusion model and the
data-fidelity step in each sampling iteration, BCDMAR effectively
maintains reliable tissue representation around metal regions and
produces highly consistent structures in non-metal regions. Through
extensive experiments focused on metal artifact reduction tasks, BCDMAR
demonstrates superior performance over other state-of-the-art
unsupervised and supervised methods, both quantitatively and in terms of
visual results.
OI Wang, Tao/0000-0002-5311-3821
ZS 0
ZA 0
ZR 0
ZB 0
Z8 0
TC 0
Z9 0
EI 1558-254X
DA 2024-08-17
UT MEDLINE:39146167
PM 39146167
ER
PT J
AU Yan, Siyuan
Yu, Zhen
Liu, Chi
Ju, Lie
Mahapatra, Dwarikanath
Betz-Stablein, Brigid
Mar, Victoria
Janda, Monika
Soyer, Peter
Ge, Zongyuan
TI Prompt-driven Latent Domain Generalization for Medical Image
Classification.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3443119
DT Journal Article
PD 2024-Aug-13
PY 2024
AB Deep learning models for medical image analysis easily suffer from
distribution shifts caused by dataset artifact bias, camera
variations, differences between imaging stations, etc., leading to
unreliable diagnoses in real-world clinical settings. Domain
generalization (DG) methods, which aim to train models on multiple
domains so that they perform well on unseen domains, offer a promising
direction for solving this problem. However, existing DG methods
assume that the domain labels of each image are available and
accurate, which is typically feasible for only a limited number of
medical datasets. To address these challenges, we propose a unified DG
framework for medical image classification that does not rely on
domain labels, called Prompt-driven Latent Domain Generalization
(PLDG). PLDG consists of unsupervised domain discovery and prompt
learning. The framework first discovers pseudo domain labels by
clustering bias-associated style features, then leverages
collaborative domain prompts to guide a Vision Transformer to learn
knowledge from the discovered diverse domains. To facilitate
cross-domain knowledge learning between different prompts, we
introduce a domain prompt generator that enables knowledge sharing
between the domain prompts and a shared prompt. A domain mixup
strategy is additionally employed for more flexible decision margins
and to mitigate the risk of incorrect domain assignments. Extensive
experiments on three medical image classification tasks and one
debiasing task demonstrate that our method achieves comparable or even
superior performance to conventional DG algorithms without relying on
domain labels. Our code is publicly available at
https://github.com/SiyuanYan1/PLDG/tree/main.
OI yu, zhen/0000-0001-5210-2672; Ge, Zongyuan/0000-0002-5880-8673
ZR 0
ZA 0
TC 0
Z8 0
ZS 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-08-16
UT MEDLINE:39137089
PM 39137089
ER
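The pseudo-domain discovery step in PLDG above can be pictured with a short sketch: cluster style statistics of CNN features into pseudo domain labels. The feature layout and cluster count are assumptions for illustration, not the paper's exact pipeline.

import numpy as np
from sklearn.cluster import KMeans

def style_features(feat_maps):
    # Channel-wise mean and std of CNN feature maps (N, C, H, W); such
    # style statistics are commonly bias- or scanner-associated.
    mu = feat_maps.mean(axis=(2, 3))
    sigma = feat_maps.std(axis=(2, 3))
    return np.concatenate([mu, sigma], axis=1)

def discover_pseudo_domains(feat_maps, k=4):
    # Cluster style statistics into k pseudo domains; the labels can then
    # condition the collaborative domain prompts.
    return KMeans(n_clusters=k, n_init=10).fit_predict(style_features(feat_maps))

labels = discover_pseudo_domains(np.random.rand(32, 16, 8, 8), k=4)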
PT J
AU Wang, Yuyang
Liu, Xiaomo
Li, Liang
TI Metal Artifacts Reducing Method Based on Diffusion Model Using Intraoral
Optical Scanning Data for Dental Cone-beam CT.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3440009
DT Journal Article
PD 2024-Aug-07
PY 2024
AB In dental cone-beam computed tomography (CBCT), metal implants can cause
metal artifacts, affecting image quality and the final medical
diagnosis. To reduce the impact of metal artifacts, our proposed metal
artifacts reduction (MAR) method takes a novel approach by integrating
CBCT data with intraoral optical scanning data, utilizing information
from these two different modalities to correct metal artifacts in the
projection domain using a guided-diffusion model. The intraoral optical
scanning data provides a more accurate generation domain for the
diffusion model. We have proposed a multi-channel generation method in
the training and generation stage of the diffusion model, considering
the physical mechanism of CBCT, to ensure the consistency of the
diffusion model generation. In this paper, we present experimental
results that convincingly demonstrate the feasibility and efficacy of
our approach, which introduces intraoral optical scanning data into the
analysis and processing of projection domain data using the diffusion
model for the first time, and modifies the diffusion model to better
adapt to the physical model of CBCT.
OI Wang, Yuyang/0009-0008-3825-0300; , Liang/0000-0003-1148-1260
ZA 0
ZR 0
Z8 0
ZB 0
TC 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-08-10
UT MEDLINE:39110561
PM 39110561
ER
PT J
AU Jiang, Xiajun
Missel, Ryan
Toloubidokhti, Maryam
Gillette, Karli
Prassl, Anton J.
Plank, Gernot
Horacek, B. Milan
Sapp, John L.
Wang, Linwei
TI Hybrid Neural State-Space Modeling for Supervised and Unsupervised
Electrocardiographic Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2733
EP 2744
DI 10.1109/TMI.2024.3377094
DT Article
PD AUG 2024
PY 2024
AB State-space modeling (SSM) provides a general framework for many image
reconstruction tasks. Errors in a priori physiological knowledge of the
imaging physics can introduce inaccuracies into the solutions. Modern
deep-learning approaches show great promise but lack interpretability
and rely on large amounts of labeled data. In this paper, we present a
novel hybrid SSM framework for electrocardiographic imaging (ECGI) to
leverage the advantage of state-space formulations in data-driven
learning. We first leverage the physics-based forward operator to
supervise the learning. We then introduce neural modeling of the
transition function and the associated Bayesian filtering strategy. We
applied the hybrid SSM framework to reconstruct electrical activity on
the heart surface from body-surface potentials. In unsupervised settings
of both in-silico and in-vivo data without cardiac electrical activity
as the ground truth to supervise the learning, we demonstrated improved
ECGI performances of the hybrid SSM framework trained from a small
number of ECG observations in comparison to the fixed SSM. We further
demonstrated that, when in-silico simulation data becomes available,
mixed supervised and unsupervised training of the hybrid SSM achieved
further improvements of 40.6% and 45.6%, respectively, over traditional
ECGI baselines and supervised data-driven ECGI baselines in localizing
the origin of ventricular activations in real data.
RI Toloubidokhti, Maryam/JYQ-5628-2024; Jiang, Xiajun/ISV-6214-2023; Gillette, Karli/JMR-0126-2023; Plank, Gernot/C-9498-2011; Sapp, John/; Wang, Linwei/; Jiang, Xiajun/; Gillette, Karli/; Prassl, Anton J/
OI Plank, Gernot/0000-0002-7380-6908; Sapp, John/0000-0002-9602-2751; Wang,
Linwei/0000-0002-5678-2369; Jiang, Xiajun/0000-0003-1075-6736; Gillette,
Karli/0000-0002-0420-5375; Prassl, Anton J/0000-0002-1920-1377
ZS 0
ZR 0
ZA 0
TC 0
ZB 0
Z8 0
Z9 0
C1 Rochester Inst Technol, Golisano Coll Comp & Informat Sci, Rochester, NY
14623 USA
C1 Med Univ Graz, Gottfried Schatz Res Ctr, Graz 8010, Austria
C1 Dalhousie Univ, Sch Biomed Engn, Halifax, NS B3H 4R2, Canada
C1 Dalhousie Univ, QEII Hlth Sci Ctr, Halifax, NS, Canada
C1 Dalhousie Univ, Dept Med, Halifax, NS B3H 4R2, Canada
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200021
PM 38478452
ER
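A compact sketch of the predict/correct structure behind the hybrid SSM above: a learned neural transition plays the state-evolution role, and a fixed-gain correction stands in for the paper's Bayesian filtering strategy. The network shape, gain, and measurement operator H are illustrative assumptions.

import torch
import torch.nn as nn

class NeuralTransition(nn.Module):
    # Learned transition x_t = f(x_{t-1}) for the latent cardiac state.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, x):
        return x + self.net(x)  # residual transition

def filter_sequence(f, H, y_seq, x0, gain=0.5):
    # Predict with the neural transition, then correct toward each
    # body-surface measurement y_t (fixed gain instead of a full filter).
    x, states = x0, []
    for y in y_seq:
        x_pred = f(x)                            # predict
        innovation = y - H @ x_pred              # measurement residual
        x = x_pred + gain * (H.T @ innovation)   # correct
        states.append(x)
    return torch.stack(states)

f = NeuralTransition(dim=8)
states = filter_sequence(f, torch.randn(4, 8),
                         [torch.randn(4) for _ in range(10)], torch.zeros(8))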
PT J
AU Zeng, Qingjie
Xie, Yutong
Lu, Zilin
Lu, Mengkang
Zhang, Jingfeng
Zhou, Yuyin
Xia, Yong
TI Consistency-guided Differential Decoding for Enhancing Semi-supervised
Medical Image Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3429340
DT Journal Article
PD 2024-Aug-01
PY 2024
AB Semi-supervised learning (SSL) has been proven beneficial for mitigating
the issue of limited labeled data, especially on volumetric medical
image segmentation. Unlike previous SSL methods which focus on exploring
highly confident pseudo-labels or developing consistency regularization
schemes, our empirical findings suggest that differential decoder
features emerge naturally when two decoders strive to generate
consistent predictions. Based on the observation, we first analyze the
value of discrepancy in learning towards consistency, under both
pseudo-labeling and consistency regularization settings, and
subsequently propose a novel SSL method called LeFeD, which learns the
feature-level discrepancies obtained from two decoders, by feeding such
information as feedback signals to the encoder. The core design of LeFeD
is to enlarge the discrepancies by training differential decoders, and
then learn from the differential features iteratively. We evaluate LeFeD
against eight state-of-the-art (SOTA) methods on three public datasets.
Experiments show LeFeD surpasses competitors without any bells and
whistles, such as uncertainty estimation and strong constraints, as well
as setting a new state of the art for semi-supervised medical image
segmentation. Code has been released at
https://github.com/maxwell0027/LeFeD.
OI Xie, Yutong/0000-0002-6644-1250
Z8 0
ZB 0
TC 0
ZR 0
ZS 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-08-03
UT MEDLINE:39088492
PM 39088492
ER
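The discrepancy-as-feedback idea in LeFeD above can be sketched in a few lines: compute per-scale differences between the two decoders' features and inject them back into the encoder features. Treating the feedback as an additive map is one plausible reading, not the paper's exact mechanism; 2-D feature maps are assumed.

import torch
import torch.nn.functional as F

def decoder_discrepancy(feats_a, feats_b):
    # Per-scale feature discrepancies between two decoders trained to
    # produce consistent predictions.
    return [torch.abs(a - b) for a, b in zip(feats_a, feats_b)]

def feed_back(encoder_feats, discrepancies):
    # Resize each discrepancy map to its encoder scale and add it as a
    # feedback signal for the next iteration.
    out = []
    for e, d in zip(encoder_feats, discrepancies):
        d = F.interpolate(d, size=e.shape[2:], mode="bilinear",
                          align_corners=False)
        out.append(e + d.mean(dim=1, keepdim=True))
    return out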
PT J
AU Li, Jun
Su, Tongkun
Zhao, Baoliang
Lv, Faqin
Wang, Qiong
Navab, Nassir
Hu, Ying
Jiang, Zhongliang
TI Ultrasound Report Generation with Cross-Modality Feature Alignment via
Unsupervised Guidance.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3424978
DT Journal Article
PD 2024-Jul-16
PY 2024
AB Automatic report generation has arisen as a significant research area in
computer-aided diagnosis, aiming to alleviate the burden on clinicians
by generating reports automatically based on medical images. In this
work, we propose a novel framework for automatic ultrasound report
generation, leveraging a combination of unsupervised and supervised
learning methods to aid the report generation process. Our framework
incorporates unsupervised learning methods to extract potential
knowledge from ultrasound text reports, serving as the prior information
to guide the model in aligning visual and textual features, thereby
addressing the challenge of feature discrepancy. Additionally, we design
a global semantic comparison mechanism to enhance the performance of
generating more comprehensive and accurate medical reports. To enable
the implementation of ultrasound report generation, we constructed three
large-scale ultrasound image-text datasets from different organs for
training and validation purposes. Extensive evaluations with other
state-of-the-art approaches exhibit its superior performance across all
three datasets. Code and dataset are available at this link.
OI Li, Jun/0000-0002-0652-6041; Jiang, Zhongliang/0000-0001-7461-2200
TC 0
ZR 0
ZS 0
ZB 0
Z8 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-07-20
UT MEDLINE:39012730
PM 39012730
ER
PT J
AU Zhang, Dong
Liu, Xiujian
Wang, Anbang
Zhang, Hongwei
Yang, Guang
Zhang, Heye
Gao, Zhifan
TI Constraint-Aware Learning for Fractional Flow Reserve Pullback Curve
Estimation from Invasive Coronary Imaging.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3412935
DT Journal Article
PD 2024-Jun-11
PY 2024
AB Estimation of the fractional flow reserve (FFR) pullback curve from
invasive coronary imaging is important for the intraoperative guidance
of coronary intervention. Machine/deep learning has been proven
effective in FFR pullback curve estimation. However, the existing
methods suffer from inadequate incorporation of intrinsic geometry
associations and physics knowledge. In this paper, we propose a
constraint-aware learning framework to improve the estimation of the FFR
pullback curve from invasive coronary imaging. It incorporates both
geometrical and physical constraints to approximate the relationships
between the geometric structure and FFR values along the coronary artery
centerline. Our method also leverages the power of synthetic data in
model training to reduce the collection costs of clinical data.
Moreover, to bridge the domain gap between synthetic and real data
distributions when testing on real-world imaging data, we also employ a
diffusion-driven test-time data adaptation method that preserves the
knowledge learned in synthetic data. Specifically, this method learns a
diffusion model of the synthetic data distribution and then projects
real data to the synthetic data distribution at test time. Extensive
experimental studies on a synthetic dataset and a real-world dataset of
382 patients covering three imaging modalities have shown the better
performance of our method for FFR estimation of stenotic coronary
arteries, compared with other machine/deep learning-based FFR estimation
models and computational fluid dynamics-based model. The results also
provide high agreement and correlation between the FFR predictions of
our method and the invasively measured FFR values. The plausibility of
FFR predictions along the coronary artery centerline is also validated.
RI Gao, Zhifan/; Zhang, Dong/; Yang, Guang/S-5032-2016
OI Gao, Zhifan/0000-0002-1576-4439; Zhang, Dong/0000-0001-6975-7851; Yang,
Guang/0000-0001-7344-7733
ZA 0
Z8 0
ZR 0
ZB 1
TC 1
ZS 0
Z9 1
EI 1558-254X
DA 2024-06-13
UT MEDLINE:38861432
PM 38861432
ER
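The diffusion-driven test-time adaptation mentioned above can be pictured as an SDEdit-style projection: diffuse the real sample part-way toward noise, then denoise it with a model trained only on synthetic data so the output lands on the synthetic distribution. The noise schedule and the denoise_step callable are illustrative assumptions, not the paper's implementation.

import torch

@torch.no_grad()
def project_to_synthetic(x_real, denoise_step, t_star=300, T=1000):
    # Forward-diffuse x_real to noise level t_star, then run the reverse
    # steps of a synthetic-data diffusion model back to t = 0.
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    noise = torch.randn_like(x_real)
    x = alphas_bar[t_star].sqrt() * x_real \
        + (1 - alphas_bar[t_star]).sqrt() * noise
    for t in reversed(range(t_star)):
        x = denoise_step(x, t)  # one reverse step of the synthetic model
    return x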
PT J
AU Shao, Wei
Shi, Hang
Liu, Jianxin
Zuo, Yingli
Sun, Liang
Xia, Tiansong
Chen, Wanyuan
Wan, Peng
Sheng, Jianpeng
Zhu, Qi
Zhang, Daoqiang
TI Multi-Instance Multi-Task Learning for Joint Clinical Outcome and
Genomic Profile Predictions From the Histopathological Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2266
EP 2278
DI 10.1109/TMI.2024.3362852
DT Article
PD JUN 2024
PY 2024
AB With the remarkable success of digital histopathology and the deep
learning technology, many whole-slide pathological images (WSIs) based
deep learning models are designed to help pathologists diagnose human
cancers. Recently, rather than predicting categorical variables as in
cancer diagnosis, several deep learning studies are also proposed to
estimate the continuous variables such as the patients' survival or
their transcriptional profile. However, most of the existing studies
focus on conducting these prediction tasks separately, which overlooks
the useful intrinsic correlation among them that can boost the
prediction performance of each individual task. In addition, it is still
challenging to design WSI-based deep learning models, since a WSI is of
huge size but annotated with only a coarse label. In this study, we
propose a general multi-instance multi-task learning framework
(HistMIMT) for multi-purpose prediction from WSIs. Specifically, we
firstly propose a novel multi-instance learning module (TMICS)
considering both common and specific task information across different
tasks to generate bag representation for each individual task. Then, a
soft-mask based fusion module with channel attention (SFCA) is developed
to leverage useful information from the related tasks to help improve
the prediction performance on target task. We evaluate our method on
three cancer cohorts derived from the Cancer Genome Atlas (TCGA). For
each cohort, our multi-purpose prediction tasks range from cancer
diagnosis, survival prediction and estimating the transcriptional
profile of gene TP53. The experimental results demonstrated that
HistMIMT can yield better outcome on all clinical prediction tasks than
its competitors.
RI Zhang, Daoqiang/D-3754-2011; Zhu, Qi/KYR-6238-2024; Zhu, Qi/; Wan, Peng/
OI Zhu, Qi/0000-0001-5497-9478; Wan, Peng/0000-0002-6094-7250
ZS 0
ZA 0
TC 1
ZB 0
ZR 0
Z8 0
Z9 1
C1 Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Key Lab
Brain Machine Intelligence Technol, Nanjing 211106, Peoples R China
C1 Nanjing Univ Aeronaut & Astronaut, MIIT Key Lab Pattern Anal & Machine
Intelligence, Nanjing 211106, Peoples R China
C1 Nanjing Med Univ, Affiliated Hosp 1, Nanjing 210008, Peoples R China
C1 Zhejiang Prov Peoples Hosp, Affiliated Peoples Hosp, Hangzhou Med Coll,
Dept Pathol, Hangzhou 310004, Zhejiang, Peoples R China
C1 Zhejiang Univ, Affiliated Hosp 1, Sch Med, Hangzhou 310058, Peoples R
China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100005
PM 38319755
ER
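A toy rendering of the soft-mask fusion with channel attention (SFCA) described in the HistMIMT abstract above: related-task features are pooled, gated per channel, and added to the target-task features. This generic channel-attention form is an assumption for illustration, not the paper's exact module.

import torch

def sfca_fusion(target_feat, related_feats):
    # target_feat: (N, C, H, W); related_feats: list of same-shape tensors
    # from the other tasks. Gate the pooled related information per channel.
    fused = torch.stack(related_feats, dim=0).mean(dim=0)
    weights = torch.sigmoid(fused.mean(dim=(2, 3), keepdim=True))  # (N, C, 1, 1)
    return target_feat + weights * fused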
PT J
AU Ding, Saisai
Li, Juncheng
Wang, Jun
Ying, Shihui
Shi, Jun
TI Multimodal Co-attention Fusion Network with Online Data Augmentation for
Cancer Subtype Classification.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3405535
DT Journal Article
PD 2024-May-27
PY 2024
AB It is an essential task to accurately diagnose cancer subtypes in
computational pathology for personalized cancer treatment. Recent
studies have indicated that the combination of multimodal data, such as
whole slide images (WSIs) and multi-omics data, could achieve more
accurate diagnosis. However, robust cancer diagnosis remains challenging
due to the heterogeneity among multimodal data, as well as the
performance degradation caused by insufficient multimodal patient data.
In this work, we propose a novel multimodal co-attention fusion network
(MCFN) with online data augmentation (ODA) for cancer subtype
classification. Specifically, a multimodal mutual-guided co-attention
(MMC) module is proposed to effectively perform dense multimodal
interactions. It enables multimodal data to mutually guide and calibrate
each other during the integration process to alleviate inter- and
intra-modal heterogeneities. Subsequently, a self-normalizing network
(SNN)-Mixer is developed to allow information communication among
different omics data and alleviate the high-dimensional small-sample
size problem in multi-omics data. Most importantly, to compensate for
insufficient multimodal samples for model training, we propose an ODA
module in MCFN. The ODA module leverages the multimodal knowledge to
guide the data augmentations of WSIs and maximize the data diversity
during model training. Extensive experiments are conducted on the public
TCGA dataset. The experimental results demonstrate that the proposed
MCFN outperforms all the compared algorithms, suggesting its
effectiveness.
RI Wang, Jun/D-6393-2017; Ding, Saisai/; Li, Juncheng/
OI Wang, Jun/0000-0001-9548-0411; Ding, Saisai/0000-0002-4324-7316; Li,
Juncheng/0000-0001-7314-6754
ZS 0
ZA 0
ZB 0
ZR 0
TC 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-05-31
UT MEDLINE:38801690
PM 38801690
ER
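A generic sketch of the dense mutual-guided co-attention described in the MCFN abstract above: each modality attends to the other and is refined by what it retrieves. Token shapes and the temperature are illustrative; the paper's MMC module is more elaborate.

import torch

def co_attention(x, y, tau=1.0):
    # x: (N, Tx, D) tokens of one modality (e.g. WSI patch embeddings);
    # y: (N, Ty, D) tokens of the other (e.g. omics embeddings).
    a_xy = torch.softmax(x @ y.transpose(1, 2) / tau, dim=-1)  # x attends to y
    a_yx = torch.softmax(y @ x.transpose(1, 2) / tau, dim=-1)  # y attends to x
    return x + a_xy @ y, y + a_yx @ x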
PT J
AU Li, Yicong
Li, Wanhua
Chen, Qi
Huang, Wei
Zou, Yuda
Xiao, Xin
Shinomiya, Kazunori
Gunn, Pat
Gupta, Nishika
Polilov, Alexey
Xu, Yongchao
Zhang, Yueyi
Xiong, Zhiwei
Pfister, Hanspeter
Wei, Donglai
Wu, Jingpeng
TI WASPSYN: A Challenge for Domain Adaptive Synapse Detection in Microwasp
Brain Connectomes.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3400276
DT Journal Article
PD 2024-May-13
PY 2024
AB The size of image volumes in connectomics studies now reaches terabyte
and often petabyte scales with a great diversity of appearance due to
different sample preparation procedures. However, manual annotation of
neuronal structures (e.g., synapses) in these huge image volumes is
time-consuming, leading to limited labeled training data often smaller
than 0.001% of the large-scale image volumes in application. Methods
that can utilize in-domain labeled data and generalize to out-of-domain
unlabeled data are in urgent need. Although many domain adaptation
approaches are proposed to address such issues in the natural image
domain, few of them have been evaluated on connectomics data due to a
lack of domain adaptation benchmarks. Therefore, to enable developments
of domain adaptive synapse detection methods for large-scale
connectomics applications, we annotated 14 image volumes from a
biologically diverse set of Megaphragma viggianii brain regions
originating from three different whole-brain datasets and organized the
WASPSYN challenge at ISBI 2023. The annotations include coordinates of
pre-synapses and post-synapses in the 3D space, together with their
one-to-many connectivity information. This paper describes the dataset,
the tasks, the proposed baseline, the evaluation method, and the results
of the challenge. Limitations of the challenge and the impact on
neuroscience research are also discussed. The challenge is and will
continue to be available at
https://codalab.lisn.upsaclay.fr/competitions/9169. Successful
algorithms that emerge from our challenge may potentially revolutionize
real-world connectomics research and further the cause that aims to
unravel the complexity of brain structure and function.
RI Huang, Wei/; Polilov, Alexey/K-4599-2013; Li, Yicong/
OI Huang, Wei/0000-0001-7513-3105; Polilov, Alexey/0000-0002-6214-3627; Li,
Yicong/0000-0001-5108-4038
Z8 0
ZA 0
ZB 0
ZR 0
TC 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-05-16
UT MEDLINE:38739506
PM 38739506
ER
PT J
AU Naughton, Noel
Cahoon, Stacey
Sutton, Brad
Georgiadis, John G
TI Accelerated, physics-inspired inference of skeletal muscle
microstructure from diffusion-weighted MRI.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3397790
DT Journal Article
PD 2024-May-06
PY 2024
AB Muscle health is a critical component of overall health and quality of
life. However, current measures of skeletal muscle health take limited
account of microstructural variations within muscle, which play a
crucial role in mediating muscle function. To address this, we present a
physics-inspired, machine learning-based framework for the non-invasive
estimation of microstructural organization in skeletal muscle from
diffusion-weighted MRI (dMRI) in an uncertainty-aware manner. To reduce
the computational expense associated with direct numerical simulations
of dMRI physics, a polynomial meta-model is developed that accurately
represents the input/output relationships of a high-fidelity numerical
model. This meta-model is used to develop a Gaussian process (GP) model
that provides voxel-wise estimates and confidence intervals of
microstructure organization in skeletal muscle. Given noise-free data,
the GP model accurately estimates microstructural parameters. In the
presence of noise, the diameter, intracellular diffusion coefficient,
and membrane permeability are accurately estimated with narrow
confidence intervals, while volume fraction and extracellular diffusion
coefficient are poorly estimated and exhibit wide confidence intervals.
A reduced-acquisition GP model, consisting of one-third the
diffusion-encoding measurements, is shown to predict parameters with
similar accuracy to the original model. The fiber diameter and volume
fraction estimated by the reduced GP model is validated via histology,
with both parameters accurately estimated, demonstrating the capability
of the proposed framework as a promising non-invasive tool for assessing
skeletal muscle health and function.
OI Georgiadis, John/0000-0002-8217-2003; Naughton, Noel/0000-0002-5553-4718
ZS 0
ZR 0
ZA 0
TC 0
ZB 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-05-09
UT MEDLINE:38709599
PM 38709599
ER
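The meta-model-to-GP pipeline above reduces, at its core, to Gaussian process regression from dMRI signals to microstructural parameters with voxel-wise uncertainty. A toy, self-contained sketch with simulated data (all quantities here are synthetic stand-ins):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
signals = rng.uniform(size=(200, 8))       # simulated dMRI measurements
diameter = signals @ rng.uniform(size=8)   # hypothetical target parameter

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(signals, diameter)
mean, std = gp.predict(signals[:5], return_std=True)  # estimate and confidence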
PT J
AU van Herten, Rudolf L. M.
Hampe, Nils
Takx, Richard A. P.
Franssen, Klaas Jan
Wang, Yining
Sucha, Dominika
Henriques, Jose P.
Leiner, Tim
Planken, R. Nils
Isgum, Ivana
TI Automatic Coronary Artery Plaque Quantification and CAD-RADS Prediction
Using Mesh Priors
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1272
EP 1283
DI 10.1109/TMI.2023.3326243
DT Article
PD APR 2024
PY 2024
AB Coronary artery disease (CAD) remains the leading cause of death
worldwide. Patients with suspected CAD undergo coronary CT angiography
(CCTA) to evaluate the risk of cardiovascular events and determine the
treatment. Clinical analysis of coronary arteries in CCTA comprises the
identification of atherosclerotic plaque, as well as the grading of any
coronary artery stenosis typically obtained through the CAD-Reporting
and Data System (CAD-RADS). This requires analysis of the coronary lumen
and plaque. While voxel-wise segmentation is a commonly used approach in
various segmentation tasks, it does not guarantee topologically
plausible shapes. To address this, in this work, we propose to directly
infer surface meshes for coronary artery lumen and plaque based on a
centerline prior and use it in the downstream task of CAD-RADS scoring.
The method is developed and evaluated using a total of 2407 CCTA scans.
Our method achieved lesion-wise volume intraclass correlation
coefficients of 0.98, 0.79, and 0.85 for calcified, non-calcified, and
total plaque volume respectively. Patient-level CAD-RADS categorization
was evaluated on a representative hold-out test set of 300 scans, for
which the achieved linearly weighted kappa (kappa) was 0.75. CAD-RADS
categorization on the set of 658 scans from another hospital and scanner
led to a kappa of 0.71. The results demonstrate that direct inference of
coronary artery meshes for lumen and plaque is feasible, and allows for
the automated prediction of routinely performed CAD-RADS categorization.
RI Isgum, Ivana/H-8659-2014; Planken, R. Nils/; Leiner, Tim/; Hampe, Nils/; van Herten, Roel/
OI Planken, R. Nils/0000-0003-3940-4670; Leiner, Tim/0000-0003-1885-5499;
Hampe, Nils/0000-0002-2554-5176; van Herten, Roel/0000-0002-4777-7955
ZS 0
Z8 0
ZR 0
ZA 0
ZB 0
TC 0
Z9 0
C1 Amsterdam UMC, Dept Biomed Engn & Phys, Amsterdam Cardiovasc Sci,
NL-1105AZ Amsterdam, Netherlands
C1 Univ Amsterdam, Informat Inst, NL-1090GH Amsterdam, Netherlands
C1 Amsterdam UMC, Dept Radiol & Nucl Med, NL-1105AZ Amsterdam, Netherlands
C1 Chinese Acad Med Sci & Peking Union Med Coll, Peking Union Med Coll
Hosp, Dept Radiol, Beijing 100730, Peoples R China
C1 Univ Utrecht, Dept Radiol, UMC Utrecht, NL-3508GA Utrecht, Netherlands
C1 Amsterdam Cardiovasc Sci, Heart Ctr, NL-1105AZ Amsterdam, Netherlands
C1 Amsterdam Cardiovasc Sci, Heart Failure & Arrhythmias, Acad Med Ctr,
NL-1105AZ Amsterdam, Netherlands
C1 Mayo Clin, Dept Radiol, Rochester, MN 55905 USA
C1 Amsterdam UMC, Dept Biomed Engn & Phys, NL-1105AZ Amsterdam, Netherlands
C1 Amsterdam UMC, Dept Radiol & Nucl Med, Amsterdam Cardiovasc Sci,
NL-1105AZ Amsterdam, Netherlands
C1 Univ Amsterdam, Informat Inst, NL-1090GH Amsterdam, Netherlands
C3 Amsterdam UMC
C3 Amsterdam UMC
C3 Amsterdam Cardiovasc Sci
C3 Amsterdam Cardiovasc Sci
C3 Amsterdam UMC
C3 Amsterdam UMC
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400029
PM 37862273
ER
PT J
AU Guo, Jia
Lu, Shuai
Jia, Lize
Zhang, Weihang
Li, Huiqi
TI Encoder-Decoder Contrast for Unsupervised Anomaly Detection in Medical
Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1102
EP 1112
DI 10.1109/TMI.2023.3327720
DT Article
PD MAR 2024
PY 2024
AB Unsupervised anomaly detection (UAD) aims to recognize anomalous images
based on the training set that contains only normal images. In medical
image analysis, UAD benefits from leveraging the easily obtained normal
(healthy) images, avoiding the costly collecting and labeling of
anomalous (unhealthy) images. Most advanced UAD methods rely on frozen
encoder networks pre-trained using ImageNet for extracting feature
representations. However, the features extracted from the frozen
encoders that are borrowed from natural image domains coincide little
with the features required in the target medical image domain. Moreover,
optimizing encoders usually causes pattern collapse in UAD. In this
paper, we propose a novel UAD method, namely Encoder-Decoder Contrast
(EDC), which optimizes the entire network to reduce biases towards
pre-trained image domain and orient the network in the target medical
domain. We start from feature reconstruction approach that detects
anomalies from reconstruction errors. Essentially, a contrastive
learning paradigm is introduced to tackle the problem of pattern
collapsing while optimizing the encoder and the reconstruction decoder
simultaneously. In addition, to prevent instability and further improve
performances, we propose to bring globality into the contrastive
objective function. Extensive experiments are conducted across four
medical image modalities including optical coherence tomography, color
fundus image, brain MRI, and skin lesion image, where our method
outperforms all current state-of-the-art UAD methods.
OI Guo, Jia/0000-0002-4449-6867; Lu, Shuai/0000-0002-3532-7498
TC 6
ZA 0
ZB 1
Z8 0
ZR 0
ZS 0
Z9 6
C1 Beijing Inst Technol, Beijing 100081, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600026
PM 37883280
ER
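EDC starts from feature reconstruction, so its anomaly score can be sketched as a multi-scale cosine-distance map between encoder features and their reconstructions. The contrastive training itself is not shown, and 2-D feature maps are assumed; this is intuition, not the paper's implementation.

import torch
import torch.nn.functional as F

def anomaly_map(enc_feats, rec_feats, out_size):
    # enc_feats/rec_feats: lists of (N, C, H, W) tensors, one per scale.
    maps = []
    for e, r in zip(enc_feats, rec_feats):
        m = 1 - F.cosine_similarity(e, r, dim=1)  # (N, H, W) distance
        m = F.interpolate(m.unsqueeze(1), size=out_size,
                          mode="bilinear", align_corners=False)
        maps.append(m)
    return torch.stack(maps, dim=0).mean(dim=0)   # (N, 1, *out_size)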
PT J
AU Zhu, Jiening
Veeraraghavan, Harini
Jiang, Jue
Oh, Jung Hun
Norton, Larry
Deasy, Joseph O.
Tannenbaum, Allen
TI Wasserstein HOG: Local Directionality Extraction via Optimal Transport
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 916
EP 927
DI 10.1109/TMI.2023.3325295
DT Article
PD MAR 2024
PY 2024
AB Directionally sensitive radiomic features including the histogram of
oriented gradient (HOG) have been shown to provide objective and
quantitative measures for predicting disease outcomes in multiple
cancers. However, radiomic features are sensitive to imaging
variabilities including acquisition differences, imaging artifacts and
noise, making them impractical for using in the clinic to inform patient
care. We treat the problem of extracting robust local directionality
features by mapping, via optimal transport, a given local image patch to
an iso-intense patch of its mean. We decompose the transport map into
sub-work costs, each transporting in a different direction. To test our
approach, we evaluated the ability of the proposed approach to quantify
tumor heterogeneity from magnetic resonance imaging (MRI) scans of brain
glioblastoma multiforme, computed tomography (CT) scans of head and neck
squamous cell carcinoma as well as longitudinal CT scans in lung cancer
patients treated with immunotherapy. By considering the entropy
difference of the extracted local directionality within tumor regions,
we found that patients with higher entropy in their images had
significantly worse overall survival for all three datasets, which
indicates that tumors whose images exhibit flows in many
directions may be more malignant. This may reflect high tumor
histologic grade or disorganization. Furthermore, by comparing the
changes in entropy longitudinally using two imaging time points, we
found that patients with a reduction in entropy from baseline CT are
associated with longer overall survival (hazard ratio = 1.95, 95%
confidence interval 1.4-2.8, p = 1.65e-5). The proposed method provides a
robust, training free approach to quantify the local directionality
contained in images.
RI Zhu, Jiening/HTM-2498-2023; Jiang, Jue/AFN-5623-2022; Veeraraghavan, Harini/M-1986-2018; Oh, Jung Hun/; Jiang, Jue/; Zhu, Jiening/; Deasy, Joseph/
OI Veeraraghavan, Harini/0000-0002-4054-5529; Oh, Jung
Hun/0000-0001-8791-2755; Jiang, Jue/0000-0001-9642-3515; Zhu,
Jiening/0000-0002-9506-073X; Deasy, Joseph/0000-0002-9437-266X
ZR 0
ZA 0
TC 0
ZS 0
Z8 0
ZB 0
Z9 0
C1 SUNY Stony Brook, Dept Appl Math & Stat, Stony Brook, NY 11794 USA
C1 Mem Sloan Kettering Canc Ctr, Dept Med Phys, New York, NY 10065 USA
C1 Mem Sloan Kettering Canc Ctr, Dept Med, New York, NY 10065 USA
C1 SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600022
PM 37874704
ER
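The survival analysis above hinges on the entropy of the extracted local directionality. Assuming the per-direction sub-work costs are available as a patches-by-directions array (an illustrative layout), the entropy is a one-liner:

import numpy as np

def directionality_entropy(direction_work, eps=1e-12):
    # direction_work: (patches, directions) sub-work costs from the
    # decomposed transport map. Mass spread over many directions gives
    # high entropy, which the study links to worse overall survival.
    p = direction_work / (direction_work.sum(axis=1, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=1)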
PT J
AU Fu, Suzhong
Xu, Jing
Chang, Shilong
Yang, Luyao
Ling, Shuting
Cai, Jinghan
Chen, Jiayin
Yuan, Jiacheng
Cai, Ying
Zhang, Bei
Huang, Zicheng
Yang, Kun
Sui, Wenhai
Xue, Linyan
Zhao, Qingliang
TI Robust Vascular Segmentation for Raw Complex Images of Laser Speckle
Contrast Based on Weakly Supervised Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 39
EP 50
DI 10.1109/TMI.2023.3287200
DT Article
PD JAN 2024
PY 2024
AB Laser speckle contrast imaging (LSCI) is widely used for in vivo
real-time detection and analysis of local blood flow microcirculation
due to its non-invasive nature and excellent spatial and temporal
resolution. However, vascular segmentation of LSCI images still faces a
lot of difficulties due to numerous specific noises caused by the
complexity of blood microcirculation's structure and irregular vascular
aberrations in diseased regions. In addition, the difficulties of LSCI
image data annotation have hindered the application of deep learning
methods based on supervised learning in the field of LSCI image vascular
segmentation. To tackle these difficulties, we propose a robust weakly
supervised learning method, which selects the threshold combinations and
processing flows instead of labor-intensive annotation work to construct
the ground truth of the dataset, and design a deep neural network,
FURNet, based on UNet++ and ResNeXt. The model obtained from training
achieves high-quality vascular segmentation and captures multi-scene
vascular features on both constructed and unknown datasets with good
generalization. Furthermore, we verified the applicability of this method
intravitally on a tumor before and after embolization treatment. This
work provides a new approach for realizing LSCI vascular segmentation
and also makes a new application-level advance in the field of
artificial intelligence-assisted disease diagnosis.
RI Zhao, Qingliang/AAH-5050-2020; Sui, Wenhai/HMO-8110-2023; HUANG, ZICHENG/HNI-3358-2023; Fu, Suzhong/KPY-8242-2024; Fu, Suzhong/
OI Fu, Suzhong/0009-0009-7549-5849
Z8 0
ZR 0
ZB 0
ZA 0
ZS 0
TC 3
Z9 3
C1 Xiamen Univ, State Key Lab Vaccines Infect Dis, Ctr Mol Imaging &
Translat Med, Sch Publ Hlth, Xiamen 361102, Peoples R China
C1 Xiamen Univ, Ctr Mol Imaging & Translat Med, Sch Publ Hlth, Xiang An
Biomed Lab, Xiamen 361102, Peoples R China
C1 Xiamen Univ, Inst Artificial Intelligence, Xiamen 361102, Peoples R
China
C1 Hebei Univ, Coll Qual & Tech Supervis, Baoding 071002, Peoples R China
C1 Hebei Technol Innovat Ctr Lightweight New Energy V, Baoding 071002,
Peoples R China
C1 Shandong Univ, Key Lab Cardiovasc Remodeling & Funct Res, Chinese Minist
Educ, Jinan 250100, Peoples R China
C1 Shandong Univ, State & Shandong Prov Joint Key Lab Translat Cardi,
Chinese Natl Hlth Commiss, Jinan 250100, Peoples R China
C1 Shandong Univ, Qilu Hosp, Chinese Acad Med Sci, Cheeloo Coll Med,Dept
Cardiol, Jinan 250100, Peoples R China
C1 Innovat Lab Sci & Technol Energy Mat Fujian Prov I, Xiamen 361104,
Peoples R China
C1 Xiamen Univ, Shenzhen Res Inst, Shenzhen 518057, Peoples R China
C3 Hebei Technol Innovat Ctr Lightweight New Energy V
C3 Innovat Lab Sci & Technol Energy Mat Fujian Prov I
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600005
PM 37335795
ER
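The weak ground truth built from threshold combinations, as described in the FURNet abstract above, can be illustrated with a majority vote over candidate thresholds. The thresholds and voting rule here are assumptions standing in for the paper's selected threshold combinations and processing flows.

import numpy as np

def weak_vessel_mask(img, thresholds=(0.4, 0.5, 0.6)):
    # Normalize, apply each candidate threshold, and keep pixels that a
    # majority of thresholds mark as vessel.
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    votes = sum((img > t).astype(np.uint8) for t in thresholds)
    return votes >= (len(thresholds) // 2 + 1)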
PT J
AU Lobos, Rodrigo A.
Chan, Chin-Cheng
Haldar, Justin P.
TI New Theory and Faster Computations for Subspace-Based Sensitivity Map
Estimation in Multichannel MRI
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 286
EP 296
DI 10.1109/TMI.2023.3297851
DT Article
PD JAN 2024
PY 2024
AB Sensitivity map estimation is important in many multichannel MRI
applications. Subspace-based sensitivity map estimation methods like
ESPIRiT are popular and perform well, though they can be computationally
expensive and their theoretical principles can be nontrivial to
understand. In the first part of this work, we present a novel
theoretical derivation of subspace-based sensitivity map estimation
based on a linear-predictability/structured low-rank modeling
perspective. This results in an estimation approach that is equivalent
to ESPIRiT, but with distinct theory that may be more intuitive for some
readers. In the second part of this work, we propose and evaluate a set
of computational acceleration approaches (collectively known as PISCO)
that can enable substantial improvements in computation time (up to
~100x in the examples we show) and memory for subspace-based
sensitivity map estimation.
RI Haldar, Justin/B-4983-2008; Lobos, Rodrigo/
OI Haldar, Justin/0000-0002-1838-0211; Lobos, Rodrigo/0000-0003-0425-8976
ZB 3
Z8 0
ZR 0
ZA 0
TC 3
ZS 0
Z9 3
C1 Univ Southern Calif, Signal & Image Proc Inst, Ming Hsieh Dept Elect &
Comp Engn, Los Angeles, CA 90089 USA
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600021
PM 37478037
ER
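The voxelwise subspace idea underlying ESPIRiT-style estimation can be compressed into a few lines for intuition: build a locally averaged coil covariance per voxel and take its leading eigenvector as the sensitivity vector. This is a heavy simplification under assumed conventions, not the ESPIRiT or PISCO algorithm itself.

import numpy as np
from scipy.signal import convolve2d

def sensitivity_maps(coil_imgs, win=5):
    # coil_imgs: (C, H, W) complex coil images.
    C, H, W = coil_imgs.shape
    cov = np.einsum('chw,dhw->hwcd', coil_imgs, coil_imgs.conj())
    kernel = np.ones((win, win)) / win**2
    for i in range(C):            # smooth each covariance entry locally
        for j in range(C):
            cov[:, :, i, j] = convolve2d(cov[:, :, i, j], kernel, mode='same')
    _, vecs = np.linalg.eigh(cov)  # batched Hermitian eigendecomposition
    return vecs[..., -1]           # (H, W, C): leading eigenvector per voxel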
PT J
AU Tang, Xinlu
Zhang, Chencheng
Guo, Rui
Yang, Xinling
Qian, Xiaohua
TI A Causality-Aware Graph Convolutional Network Framework for Rigidity
Assessment in Parkinsonians
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 229
EP 240
DI 10.1109/TMI.2023.3294182
DT Article
PD JAN 2024
PY 2024
AB Rigidity is one of the common motor disorders in Parkinson's disease
(PD), which leads to deterioration in quality of life. The widely-used
rating-scale-based approach for rigidity assessment still depends on the
availability of experienced neurologists and is limited by rating
subjectivity. Given the recent successful applications of quantitative
susceptibility mapping (QSM) in auxiliary PD diagnosis, automated
assessment of PD rigidity can be essentially achieved through QSM
analysis. However, a major challenge is the performance instability due
to the confounding factors (e.g., noise and distribution shift) which
conceal the truly-causal features. Therefore, we propose a
causality-aware graph convolutional network (GCN) framework, where
causal feature selection is combined with causal invariance to ensure
that causality-informed model decisions are reached. Firstly, a GCN
model that integrates causal feature selection is systematically
constructed at three graph levels: node, structure, and representation.
In this model, a causal diagram is learned to extract a subgraph with
truly-causal information. Secondly, a non-causal perturbation strategy
is developed along with an invariance constraint to ensure the stability
of the assessment results under different distributions, and thus avoid
spurious correlations caused by distribution shifts. The superiority of
the proposed method is shown by extensive experiments and the clinical
value is revealed by the direct relevance of selected brain regions to
rigidity in PD. Besides, its extensibility is verified on two other
tasks: PD bradykinesia and mental state assessment for Alzheimer's disease.
Overall, we provide a clinically-potential tool for automated and stable
assessment of PD rigidity. Our source code will be available at
https://github.com/SJTUBME-QianLab/Causality-Aware-Rigidity.
RI Yang, Xinling/C-8289-2015; Zhang, ChenCheng/AAK-6096-2020; Zhang, ChenCheng/
OI Zhang, ChenCheng/0000-0003-4472-4134
TC 1
Z8 0
ZS 0
ZA 0
ZR 0
ZB 0
Z9 1
C1 Shanghai Jiao Tong Univ, Sch Biomed Engn, Med Image & Hlth Informat Lab,
Shanghai 200030, Peoples R China
C1 Shanghai Jiao Tong Univ, Ruijin Hosp, Sch Med, Dept Funct Neurosurg,
Shanghai 200025, Peoples R China
C1 Xinjiang Med Univ, Affiliated Hosp 2, Urumqi 830054, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600042
PM 37432810
ER
PT J
AU Wu, Yongjian
Zhou, Yang
Saiyin, Jiya
Wei, Bingzheng
Lai, Maode
Shou, Jianzhong
Xu, Yan
TI AttriPrompter: Auto-Prompting with Attribute Semantics for Zero-shot
Nuclei Detection via Visual-Language Pre-trained Models.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3473745
DT Journal Article
PD 2024-Oct-03
PY 2024
AB Large-scale visual-language pre-trained models (VLPMs) have demonstrated
exceptional performance in downstream object detection through text
prompts for natural scenes. However, their application to zero-shot
nuclei detection on histopathology images remains relatively unexplored,
mainly due to the significant gap between the characteristics of medical
images and the web-originated text-image pairs used for pre-training.
This paper aims to investigate the potential of the object-level VLPM,
Grounded Language-Image Pre-training (GLIP), for zero-shot nuclei
detection. Specifically, we propose an innovative auto-prompting
pipeline, named AttriPrompter, comprising attribute generation,
attribute augmentation, and relevance sorting, to avoid subjective
manual prompt design. AttriPrompter utilizes VLPMs' text-to-image
alignment to create semantically rich text prompts, which are then fed
into GLIP for initial zero-shot nuclei detection. Additionally, we
propose a self-trained knowledge distillation framework, where GLIP
serves as the teacher with its initial predictions used as pseudo
labels, to address the challenges posed by high nuclei density,
including missed detections, false positives, and overlapping instances.
Our method exhibits remarkable performance in label-free nuclei
detection, out-performing all existing unsupervised methods and
demonstrating excellent generality. Notably, this work highlights the
astonishing potential of VLPMs pre-trained on natural image-text pairs
for downstream tasks in the medical field as well. Code will be released
at github.com/AttriPrompter.
Z8 0
TC 0
ZR 0
ZS 0
ZB 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-10-05
UT MEDLINE:39361456
PM 39361456
ER
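The self-trained distillation loop described in the AttriPrompter abstract above reduces to a familiar pseudo-labeling round. teacher_detect, fit_student, and the confidence threshold are illustrative stand-ins, not the paper's API.

def self_train_round(teacher_detect, fit_student, images, conf_thresh=0.5):
    # Keep the teacher's (here, GLIP's) confident detections as pseudo
    # labels, then fit the student detector on them.
    pseudo = [(img, [box for box, score in teacher_detect(img)
                     if score >= conf_thresh])
              for img in images]
    fit_student(pseudo)
    return pseudo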
PT J
AU You, Xin
He, Junjun
Yang, Jie
Gu, Yun
TI Learning with Explicit Shape Priors for Medical Image Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3469214
DT Journal Article
PD 2024-Sep-27
PY 2024
AB Medical image segmentation is a fundamental task for medical image
analysis and surgical planning. In recent years, UNet-based networks
have prevailed in the field of medical image segmentation. However,
convolutional neural networks (CNNs) suffer from limited receptive
fields, which fail to model the long-range dependency of organs or
tumors. Besides, these models are heavily dependent on the training of
the final segmentation head. Moreover, existing methods cannot address
the aforementioned limitations simultaneously. Hence, in our work, we
proposed a novel shape prior module (SPM), which can explicitly
introduce shape priors to promote the segmentation performance of
UNet-based models. The explicit shape priors consist of global and local
shape priors. The former with coarse shape representations provides
networks with capabilities to model global contexts. The latter with
finer shape information serves as additional guidance to relieve the
heavy dependence on the learnable prototype in the segmentation head. To
evaluate the effectiveness of SPM, we conduct experiments on three
challenging public datasets, and our proposed model achieves
state-of-the-art performance. Furthermore, SPM can serve as a
plug-and-play structure into classic CNNs and Transformer-based
backbones, facilitating the segmentation task on different datasets.
Source codes are available at
https://github.com/AlexYouXin/Explicit-Shape-Priors.
OI Gu, Yun/0000-0002-4199-0675
ZB 0
ZS 0
ZR 0
TC 0
ZA 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-09-29
UT MEDLINE:39331543
PM 39331543
ER
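One way to picture a plug-and-play shape prior module of the kind described above: learnable prior tokens are attended to by decoder features and the retrieved guidance is added back. This sketch is inspired by, not identical to, the paper's SPM; channels must be divisible by the head count.

import torch
import torch.nn as nn

class ShapePriorModule(nn.Module):
    def __init__(self, channels, n_priors=8, heads=4):
        super().__init__()
        self.priors = nn.Parameter(torch.randn(n_priors, channels))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feats):                       # feats: (N, C, H, W)
        n, c, h, w = feats.shape
        queries = feats.flatten(2).transpose(1, 2)  # (N, H*W, C) tokens
        priors = self.priors.unsqueeze(0).expand(n, -1, -1)
        refined, _ = self.attn(queries, priors, priors)
        return feats + refined.transpose(1, 2).reshape(n, c, h, w)

out = ShapePriorModule(channels=64)(torch.randn(2, 64, 16, 16))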
PT J
AU Liu, Che
Cheng, Sibo
Shi, Miaojing
Shah, Anand
Bai, Wenjia
Arcucci, Rossella
TI IMITATE: Clinical Prior Guided Hierarchical Vision-Language
Pre-training.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3449690
DT Journal Article
PD 2024-Aug-26
PY 2024
AB In the field of medical Vision-Language Pretraining (VLP), significant
efforts have been devoted to deriving text and image features from both
clinical reports and associated medical images. However, most existing
methods may have overlooked the opportunity in leveraging the inherent
hierarchical structure of clinical reports, which are generally split
into 'findings' for descriptive content and 'impressions' for conclusive
observation. Instead of utilizing this rich, structured format, current
medical VLP approaches often simplify the report into either a unified
entity or fragmented tokens. In this work, we propose a novel clinical
prior guided VLP framework named IMITATE to learn the structure
information from medical reports with hierarchical vision-language
alignment. The framework derives multi-level visual features from the
chest X-ray (CXR) images and separately aligns these features with the
descriptive and the conclusive text encoded in the hierarchical medical
report. Furthermore, a new clinical-informed contrastive loss is
introduced for cross-modal learning, which accounts for clinical prior
knowledge in formulating sample correlations in contrastive learning.
The proposed model, IMITATE, outperforms baseline VLP methods across six
different datasets, spanning five medical imaging downstream tasks.
Comprehensive experimental results highlight the advantages of
integrating the hierarchical structure of medical reports for
vision-language alignment.
RI Arcucci, Rossella/; Shah, Anand/; Sibo, Cheng/; Bai, Wenjia/B-3377-2017
OI Arcucci, Rossella/0000-0002-9471-0585; Shah, Anand/0000-0001-5257-520X;
Sibo, Cheng/0000-0002-8707-2589; Bai, Wenjia/0000-0003-2943-7698
ZB 0
Z8 0
ZS 0
TC 0
ZA 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-08-29
UT MEDLINE:39186435
PM 39186435
ER
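The hierarchical alignment described above can be pictured as two
contrastive terms, one per report section. The snippet below is a
speculative sketch using a plain symmetric InfoNCE loss; the feature
dimensions and the pairing of shallower visual features with 'findings'
and deeper features with 'impressions' are assumptions, and the paper's
clinical-informed weighting of sample correlations is not reproduced.

import torch
import torch.nn.functional as F

def info_nce(img_emb, txt_emb, tau=0.07):
    # symmetric InfoNCE over a batch of paired embeddings
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# assumed: pooled multi-level CXR features and encoded report sections
low_feats  = torch.randn(8, 256, requires_grad=True)  # shallow visual level
high_feats = torch.randn(8, 256, requires_grad=True)  # deep visual level
findings   = torch.randn(8, 256)                      # descriptive section
impression = torch.randn(8, 256)                      # conclusive section

loss = info_nce(low_feats, findings) + info_nce(high_feats, impression)
loss.backward()

In the actual framework these embeddings would come from the image and
text encoders, with clinical prior knowledge shaping which pairs count
as correlated.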
PT J
AU Bian, Wanyu
Jang, Albert
Zhang, Liping
Yang, Xiaonan
Stewart, Zachary
Liu, Fang
TI Diffusion Modeling with Domain-conditioned Prior Guidance for
Accelerated MRI and qMRI Reconstruction.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3440227
DT Journal Article
PD 2024-Aug-08
PY 2024
AB This study introduces a novel image reconstruction technique based on a
diffusion model that is conditioned on the native data domain. Our
method is applied to multi-coil MRI and quantitative MRI (qMRI)
reconstruction, leveraging the domain-conditioned diffusion model within
the frequency and parameter domains. The prior MRI physics are used as
embeddings in the diffusion model, enforcing data consistency to guide
the training and sampling process, characterizing MRI k-space encoding
in MRI reconstruction, and leveraging MR signal modeling for qMRI
reconstruction. Furthermore, a gradient descent optimization is
incorporated into the diffusion steps, enhancing feature learning and
improving denoising. The proposed method demonstrates significant
promise, particularly for reconstructing images at high acceleration
factors. Notably, it maintains high reconstruction accuracy for static
and quantitative MRI reconstruction across diverse anatomical
structures. Beyond its immediate applications, this method provides
potential generalization capability, making it adaptable to inverse
problems across various domains.
RI Liu, Fang/Q-5681-2018; Jang, Albert/
OI Liu, Fang/0000-0001-8032-6681; Jang, Albert/0000-0002-6375-4550
ZA 0
ZS 0
ZR 0
ZB 0
TC 1
Z8 0
Z9 1
EI 1558-254X
DA 2024-08-10
UT MEDLINE:39115985
PM 39115985
ER
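The "gradient descent optimization incorporated into the diffusion
steps" can be read as a measurement-consistency update interleaved with
denoising. The toy sketch below uses an assumed masked-FFT forward
operator standing in for k-space encoding; the operator, step size, and
shapes are illustrative only and do not reproduce the paper's MR signal
models.

import torch

def data_consistency_step(x, y, forward_op, step_size=0.1):
    # one gradient-descent step on ||A(x) - y||^2, enforcing agreement
    # with the acquired measurements inside a diffusion sampling step
    x = x.detach().requires_grad_(True)
    residual = forward_op(x) - y
    loss = residual.abs().pow(2).sum()
    grad, = torch.autograd.grad(loss, x)
    return (x - step_size * grad).detach()

# toy measurement operator: undersampled 2D FFT (an assumption)
mask = torch.rand(1, 1, 64, 64) < 0.3
def A(x):
    return torch.fft.fft2(x) * mask

y = A(torch.randn(1, 1, 64, 64))        # simulated k-space measurements
x_est = torch.randn(1, 1, 64, 64)       # current diffusion sample
x_est = data_consistency_step(x_est, y, A)

In a full sampler this update would alternate with calls to the trained
score network at every reverse-diffusion step.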
PT J
AU Liu, Min
Wu, Shuhan
Chen, Runze
Lin, Zhuangdian
Wang, Yaonan
Meijering, Erik
TI Brain Image Segmentation for Ultrascale Neuron Reconstruction via an
Adaptive Dual-Task Learning Network
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2574
EP 2586
DI 10.1109/TMI.2024.3367384
DT Article
PD JUL 2024
PY 2024
AB Accurate morphological reconstruction of neurons in whole brain images
is critical for brain science research. However, due to the wide range
of whole brain imaging, uneven staining, and optical system
fluctuations, there are significant differences in image properties
between different regions of the ultrascale brain image, such as
dramatically varying voxel intensities and inhomogeneous distribution of
background noise, posing an enormous challenge to neuron reconstruction
from whole brain images. In this paper, we propose an adaptive dual-task
learning network (ADTL-Net) to quickly and accurately extract neuronal
structures from ultrascale brain images. Specifically, this framework
includes an External Features Classifier (EFC) and a Parameter Adaptive
Segmentation Decoder (PASD), which share the same Multi-Scale Feature
Encoder (MSFE). MSFE introduces an attention module named Channel Space
Fusion Module (CSFM) to extract structure and intensity distribution
features of neurons at different scales for addressing the problem of
anisotropy in 3D space. Then, EFC is designed to classify these feature
maps based on external features, such as foreground intensity
distributions and image smoothness, and to select class-specific PASD
parameters to decode them, obtaining accurate segmentation
results. PASD contains multiple sets of parameters trained by different
representative complex signal-to-noise distribution image blocks to
handle various images more robustly. Experimental results show that,
compared with other advanced segmentation methods, the proposed method
achieves state-of-the-art results in the task of neuron reconstruction
from ultrascale brain images, with an
improvement of about 49% in speed and 12% in F1 score.
RI Chen, Runze/JVZ-2947-2024
ZS 0
ZB 0
TC 1
Z8 0
ZA 0
ZR 0
Z9 1
C1 Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
C1 Hunan Univ, Natl Engn Lab Robot Visual Percept & Control Techn, Changsha
410082, Peoples R China
C1 Int Sci & Technol Innovat Cooperat Base Biomed Ima, Changsha 410082,
Peoples R China
C1 Hunan Univ, Res Inst, Chongqing 401120, Peoples R China
C1 Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
C3 Int Sci & Technol Innovat Cooperat Base Biomed Ima
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100011
PM 38373129
ER
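The classify-then-select mechanism of EFC and PASD can be pictured as
routing each image block to one of several decoder parameter sets. The
sketch below is a loose illustration under assumed shapes; the
classifier input, the number of parameter sets, and the 2D (rather than
3D) convolutions are all simplifications, not the published
architecture.

import torch
import torch.nn as nn

n_classes = 3   # assumed number of signal-to-noise regimes
classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8, n_classes))
decoders = nn.ModuleList(
    [nn.Conv2d(16, 1, 3, padding=1) for _ in range(n_classes)])

feats = torch.randn(4, 16, 8, 8)           # shared encoder features per block
route = classifier(feats).argmax(dim=1)    # external-feature class per block
seg = torch.stack([decoders[int(c)](f.unsqueeze(0)).squeeze(0)
                   for f, c in zip(feats, route)])
print(seg.shape)                           # (4, 1, 8, 8)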
PT J
AU Thandiackal, Kevin
Piccinelli, Luigi
Gupta, Rajarsi
Pati, Pushpak
Goksel, Orcun
TI Multi-Scale Feature Alignment for Continual Learning of Unlabeled
Domains
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2599
EP 2609
DI 10.1109/TMI.2024.3368365
DT Article
PD JUL 2024
PY 2024
AB Methods for unsupervised domain adaptation (UDA) help to improve the
performance of deep neural networks on unseen domains without any
labeled data. Especially in medical disciplines such as histopathology,
this is crucial since large datasets with detailed annotations are
scarce. While the majority of existing UDA methods focus on the
adaptation from a labeled source to a single unlabeled target domain,
many real-world applications with a long life cycle involve more than
one target domain. Thus, the ability to sequentially adapt to multiple
target domains becomes essential. In settings where the data from
previously seen domains cannot be stored, e.g., due to data protection
regulations, the above becomes a challenging continual learning problem.
To this end, we propose to use generative feature-driven image replay in
conjunction with a dual-purpose discriminator that not only enables the
generation of images with realistic features for replay, but also
promotes feature alignment during domain adaptation. We evaluate our
approach extensively on a sequence of three histopathological datasets
for tissue-type classification, achieving state-of-the-art results. We
present detailed ablation experiments studying our proposed method
components and demonstrate a possible use-case of our continual UDA
method for an unsupervised patch-based segmentation task given
high-resolution tissue images. Our code is available at:
https://github.com/histocartography/multi-scale-feature-alignment.
RI Goksel, Orcun/GYR-1638-2022; Thandiackal, Kevin/; Goksel, Orcun/D-1884-2018
OI Thandiackal, Kevin/0000-0001-6527-6823; Goksel,
Orcun/0000-0002-8639-7373
TC 0
ZB 0
ZR 0
ZS 0
ZA 0
Z8 0
Z9 0
C1 Swiss Fed Inst Technol, Comp Assisted Applicat Med Grp, CH-8092 Zurich,
Switzerland
C1 IBM Res Europe, CH-8803 Zurich, Switzerland
C1 SUNY Stony Brook, Dept Biomed Informat, Stony Brook, NY 11794 USA
C1 Uppsala Univ, Dept Informat Technol, S-75237 Uppsala, Sweden
C3 IBM Res Europe
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100020
PM 38381642
ER
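The dual-purpose discriminator described above can be pictured as one
shared backbone with two heads: a realism head used when training the
feature-driven replay generator, and a domain head used for feature
alignment. The following sketch is purely illustrative; the layer sizes
and the single-convolution backbone are assumptions.

import torch
import torch.nn as nn

class DualDiscriminator(nn.Module):
    # shared backbone with a realism head (drives feature-driven image
    # replay) and a domain head (drives feature alignment)
    def __init__(self, in_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.realism_head = nn.Linear(64, 1)   # real vs. generated features
        self.domain_head = nn.Linear(64, 1)    # source vs. target domain

    def forward(self, feats):
        h = self.backbone(feats)
        return self.realism_head(h), self.domain_head(h)

real_score, domain_score = DualDiscriminator()(torch.randn(8, 64, 32, 32))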
PT J
AU Zhu, Qi
Li, Shengrong
Meng, Xiangshui
Xu, Qiang
Zhang, Zhiqiang
Shao, Wei
Zhang, Daoqiang
TI Spatio-Temporal Graph Hubness Propagation Model for Dynamic Brain
Network Classification
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2381
EP 2394
DI 10.1109/TMI.2024.3363014
DT Article
PD JUN 2024
PY 2024
AB Dynamic brain networks have an advantage over static brain networks in
characterizing the variation pattern of functional brain connectivity,
and they have attracted increasing attention in brain disease diagnosis.
However, most existing dynamic brain network analysis methods rely on
extracting features from independent brain networks divided by sliding
windows, making it hard for them to reveal the high-order dynamic
evolution laws of functional brain networks. Additionally, they cannot
effectively extract the spatio-temporal topology features in dynamic
brain networks. In this paper, we propose to use optimal transport (OT)
theory to capture the topology evolution of the dynamic brain networks,
and develop a multi-channel spatio-temporal graph convolutional network
that collaboratively extracts the temporal and spatial features from the
evolution networks. Specifically, we first adaptively evaluate the graph
hubness of brain regions in the brain network of each time window, which
comprehensively models information transmission among multiple brain
regions. Second, the hubness propagation information across adjacent
time windows is captured by optimal transport, describing high-order
topology evolution of dynamic brain networks. Moreover, we develop a
spatio-temporal graph convolutional network with attention mechanism to
collaboratively extract the intrinsic temporal and spatial topology
information from the above networks. Finally, the multi-layer perceptron
is adopted for classifying the dynamic brain network. Extensive
experiments on the collected epilepsy dataset and the public ADNI dataset
show that our proposed method not only outperforms several
state-of-the-art methods in brain disease diagnosis, but also reveals
the key dynamic alterations of brain connectivities between patients and
healthy controls.
RI Zhang, Daoqiang/D-3754-2011; Zhang, Zhiqiang/; Zhu, Qi/
OI Zhang, Zhiqiang/0000-0002-3993-7330; Zhu, Qi/0000-0001-7740-292X
ZB 0
ZS 0
ZR 0
Z8 0
TC 1
ZA 0
Z9 1
C1 Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Minist Educ,
Nanjing 211106, Peoples R China
C1 Nanjing Univ Aeronaut & Astronaut, Key Lab Brain Machine Intelligence
Technol, Minist Educ, Nanjing 211106, Peoples R China
C1 Shandong Univ, Qilu Hosp Qingdao, Cheeloo Coll Med, Dept Radiol, Qingdao
266035, Peoples R China
C1 Nanjing Univ, Sch Med, Nanjing Jinling Hosp, Dept Med Imaging, Nanjing
210002, Peoples R China
C1 Nanjing Univ Aeronaut & Astronaut, Shenzhen Res Inst, Shenzhen 518038,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100014
PM 38319754
ER
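To make the optimal-transport step concrete, the toy NumPy sketch below
computes a degree-based hubness distribution per sliding window and
couples adjacent windows with entropic-regularized OT (Sinkhorn). The
hubness definition, cost matrix, and regularization strength are
illustrative assumptions; the paper learns hubness adaptively rather
than from raw degree.

import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=200):
    # entropic-regularized OT between distributions a and b with cost C;
    # returns the transport plan
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# toy dynamic network: hubness = normalized degree per region per window
rng = np.random.default_rng(0)
conn_t, conn_t1 = rng.random((90, 90)), rng.random((90, 90))
hub_t = conn_t.sum(1) / conn_t.sum()
hub_t1 = conn_t1.sum(1) / conn_t1.sum()
C = np.abs(hub_t[:, None] - hub_t1[None, :])   # simple region-pair cost
plan = sinkhorn(hub_t, hub_t1, C)              # hubness propagation plan
print(plan.shape, plan.sum())                  # (90, 90), ~1.0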
PT J
AU Li, Pengcheng
Gao, Chenqiang
Lian, Chunfeng
Meng, Deyu
TI Spatial Prior-Guided Bi-Directional Cross-Attention Transformers for
Tooth Instance Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3406015
DT Journal Article
PD 2024-May-27
PY 2024
AB Tooth instance segmentation of dental panoramic X-ray images represents
a task of significant clinical importance. Teeth demonstrate symmetry
within the upper and lower jawbones and are arranged in a specific
order. However, previous studies frequently overlook this crucial
spatial prior information, resulting in misidentifications of tooth
categories for adjacent or similarly shaped teeth. In this paper, we
propose SPGTNet, a spatial prior-guided transformer method designed to
exploit both the tooth positional features extracted by CNNs and the
long-range contextual information from vision transformers for dental
panoramic X-ray image segmentation. Initially, a center-based spatial
prior perception module is employed to identify each tooth's centroid,
thereby enhancing the spatial prior information for the CNN sequence
features. Subsequently, a bi-directional cross-attention module is
designed to facilitate the interaction between the spatial prior
information of the CNN sequence features and the long-distance
contextual features of the vision transformer sequence features.
Finally, an instance identification head is employed to derive the tooth
segmentation results. Extensive experiments on three public benchmark
datasets have demonstrated the effectiveness and superiority of our
proposed method in comparison with other state-of-the-art approaches.
The proposed method demonstrates the capability to accurately identify
and analyze tooth structures, thereby providing crucial information for
dental diagnosis, treatment planning, and research.
OI Gao, Chenqiang/0000-0003-4174-4148; Li, Pengcheng/0000-0002-0691-7966;
Lian, Chunfeng/0000-0002-9319-6633
TC 0
Z8 0
ZB 0
ZA 0
ZR 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-05-31
UT MEDLINE:38801691
PM 38801691
ER
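A bi-directional cross-attention module of the kind described above lets
two token streams attend to each other. The minimal PyTorch sketch below
uses the standard nn.MultiheadAttention in both directions; the token
counts, dimensions, and residual wiring are assumptions for
illustration.

import torch
import torch.nn as nn

class BiDirectionalCrossAttention(nn.Module):
    # CNN tokens attend to transformer tokens and vice versa, letting
    # spatial-prior features and long-range context interact
    def __init__(self, dim, heads=4):
        super().__init__()
        self.cnn_to_vit = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vit_to_cnn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cnn_tokens, vit_tokens):
        cnn_out, _ = self.cnn_to_vit(cnn_tokens, vit_tokens, vit_tokens)
        vit_out, _ = self.vit_to_cnn(vit_tokens, cnn_tokens, cnn_tokens)
        return cnn_tokens + cnn_out, vit_tokens + vit_out

cnn_tokens = torch.randn(2, 196, 128)   # flattened CNN feature map (assumed)
vit_tokens = torch.randn(2, 196, 128)   # vision-transformer tokens (assumed)
c, v = BiDirectionalCrossAttention(128)(cnn_tokens, vit_tokens)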
PT J
AU Chai, Zhizhong
Luo, Luyang
Lin, Huangjing
Heng, Pheng-Ann
Chen, Hao
TI Deep Omni-Supervised Learning for Rib Fracture Detection From Chest
Radiology Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1972
EP 1982
DI 10.1109/TMI.2024.3353248
DT Article
PD MAY 2024
PY 2024
AB Deep learning (DL)-based rib fracture detection has shown promise in
playing an important role in preventing mortality and improving patient
outcomes. Normally, developing DL-based object detection models requires
a huge amount of bounding box annotation. However, annotating medical
data is time-consuming and expertise-demanding, making obtaining a large
amount of fine-grained annotations extremely infeasible. This poses a
pressing need for developing label-efficient detection models to
alleviate radiologists' labeling burden. To tackle this challenge, the
literature on object detection has witnessed an increase of
weakly-supervised and semi-supervised approaches, yet still lacks a
unified framework that leverages various forms of fully-labeled,
weakly-labeled, and unlabeled data. In this paper, we present a novel
omni-supervised object detection network, ORF-Netv2, to leverage as much
available supervision as possible. Specifically, a multi-branch
omni-supervised detection head is introduced with each branch trained
with a specific type of supervision. A co-training-based dynamic label
assignment strategy is then proposed to enable flexible and robust
learning from the weakly-labeled and unlabeled data. Extensive
evaluation was conducted for the proposed framework with three rib
fracture datasets on both chest CT and X-ray. By leveraging all forms of
supervision, ORF-Netv2 achieves mAPs of 34.7, 44.7, and 19.4 on the
three datasets, respectively, surpassing the baseline detector which
uses only box annotations by mAP gains of 3.8, 4.8, and 5.0,
respectively. Furthermore, ORF-Netv2 consistently outperforms other
competitive label-efficient methods over various scenarios, showing a
promising framework for label-efficient fracture detection. The code is
available at: https://github.com/zhizhongchai/ORF-Net.
RI Luo, Luyang/; Chen, Hao/V-4299-2019; Heng, Pheng Ann/; chai, zhizhong/
OI Luo, Luyang/0000-0002-7485-4151; Chen, Hao/0000-0002-8400-3780; Heng,
Pheng Ann/0000-0003-3055-5034; chai, zhizhong/0000-0002-7189-5708
ZS 0
ZR 0
Z8 0
ZA 0
TC 1
ZB 0
Z9 1
C1 Imsight AI Res Lab, Shenzhen 518000, Peoples R China
C1 Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R
China
C1 Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
C1 Hong Kong Univ Sci & Technol, Dept Chem & Biol Engn, Hong Kong, Peoples
R China
C1 HKUST, Shenzhen Hong Kong Collaborat Innovat Res Inst, Shenzhen 518000,
Peoples R China
C3 Imsight AI Res Lab
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800001
PM 38215335
ER
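The multi-branch omni-supervised head can be pictured as one branch per
supervision type, each trained only on its own kind of label. The sketch
below is a heavily simplified stand-in (1x1 convolution branches instead
of detection heads, and no dynamic label assignment); all names and
shapes are assumptions.

import torch
import torch.nn as nn

class OmniSupervisedHead(nn.Module):
    # one branch per supervision type; each sample trains only the branch
    # matching its label type, so all data contribute some gradient
    def __init__(self, in_ch, n_outputs):
        super().__init__()
        self.branches = nn.ModuleDict(
            {k: nn.Conv2d(in_ch, n_outputs, 1) for k in ("full", "weak", "unlab")})

    def forward(self, feats, supervision):
        return self.branches[supervision](feats)

head = OmniSupervisedHead(256, 1)
feats = torch.randn(4, 256, 32, 32)
full_pred = head(feats, "full")                   # box-annotated data
weak_pred = head(feats, "weak")                   # image-level labels
pseudo = head(feats, "unlab").sigmoid().detach()  # co-training targets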
PT J
AU Lin, Weiyuan
Gao, Zhifan
Liu, Hui
Zhang, Heye
TI A Deformable Constraint Transport Network for Optimal Aortic
Segmentation From CT Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1462
EP 1475
DI 10.1109/TMI.2023.3339142
DT Article
PD APR 2024
PY 2024
AB Aortic segmentation from computed tomography (CT) is crucial for
facilitating aortic intervention, as it enables clinicians to visualize
aortic anatomy for diagnosis and measurement. However, aortic
segmentation faces the challenge of variable geometry in space, arising
from the geometric diversity of different diseases and the geometric
transformations that occur between raw and measured images. Existing
constraint-based methods can potentially solve the challenge, but they
are hindered by two key issues: inaccurate definition of properties and
inappropriate topology of transformation in space. In this paper, we
propose a deformable constraint transport network (DCTN). The DCTN
adaptively extracts aortic features to define intra-image constrained
properties and guides topological implementation in space to constrain
inter-image geometric transformation between raw and curved planar
reformation (CPR) images. The DCTN contains a deformable attention
extractor, a geometry-aware decoder and an optimal transport guider. The
extractor generates variable patches that preserve semantic integrity
and long-range dependency in long-sequence images. The decoder enhances
the perception of geometric texture and semantic features, particularly
for low-intensity aortic coarctation and false lumen, which removes
background interference. The guider explores the geometric discrepancies
between raw and CPR images, constructs probability distributions of
discrepancies, and matches them with inter-image transformation to guide
geometric topology in space. Experimental studies on 267 aortic subjects
and four public datasets show the superiority of our DCTN over 23
methods. The results demonstrate DCTN's advantages in aortic
segmentation for different types of aortic disease, for different aortic
segments, and in the measurement of clinical indexes.
RI lin, weiyuan/AAK-3447-2021; Gao, Zhifan/O-9082-2019; Gao, Zhifan/; Lin, Weiyuan/
OI Gao, Zhifan/0000-0002-1576-4439; Lin, Weiyuan/0000-0001-5679-141X
Z8 0
ZB 0
ZR 0
ZS 0
TC 1
ZA 0
Z9 1
C1 Sun Yat Sen Univ, Sch Biomed Engn, Shenzhen Campus, Shenzhen 518107,
Peoples R China
C1 Southern Med Univ, Guangdong Prov Peoples Hosp, Guangdong Acad Med Sci,
Dept Radiol, Guangzhou 510080, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-10
UT WOS:001196733400002
PM 38048241
ER
PT J
AU Liu, Jinhua
Desrosiers, Christian
Yu, Dexin
Zhou, Yuanfeng
TI Semi-Supervised Medical Image Segmentation Using Cross-Style Consistency
With Shape-Aware and Local Context Constraints
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1449
EP 1461
DI 10.1109/TMI.2023.3338269
DT Article
PD APR 2024
PY 2024
AB Despite the remarkable progress in semi-supervised medical image
segmentation methods based on deep learning, their application to
real-life clinical scenarios still faces considerable challenges. For
example, insufficient labeled data often makes it difficult for networks
to capture the complexity and variability of the anatomical regions to
be segmented. To address these problems, we design a new semi-supervised
segmentation framework that aspires to produce anatomically plausible
predictions. Our framework comprises two parallel networks:
shape-agnostic and shape-aware networks. These networks learn from each
other, enabling effective utilization of unlabeled data. Our shape-aware
network implicitly introduces shape guidance to capture shape
fine-grained information. Meanwhile, shape-agnostic networks employ
uncertainty estimation to further obtain reliable pseudo-labels for the
counterpart. We also employ a cross-style consistency strategy to
enhance the network's utilization of unlabeled data. It enriches the
dataset to prevent overfitting and further eases the coupling of the two
networks that learn from each other. Our proposed architecture also
incorporates a novel loss term that facilitates the learning of the
local context of segmentation by the network, thereby enhancing the
overall accuracy of prediction. Experiments on three different datasets
of medical images show that our method outperforms many excellent
semi-supervised segmentation methods, particularly in perceiving shape.
The code is available at https://github.com/igip-liu/SLC-Net.
RI LIU, Jinhua/ITU-1107-2023; Zhou, Yuanfeng/AAT-4670-2020; Yu, De-Xin/FDC-3283-2022
ZB 0
Z8 0
ZR 0
TC 1
ZS 0
ZA 0
Z9 1
C1 Shandong Univ, Sch Software, Jinan 250101, Peoples R China
C1 Ecole Technol Super ETS, Software & IT Dept, Montreal, PQ H3C 1K3,
Canada
C1 Shandong Univ, Qilu Hosp, Jinan 250012, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400012
PM 38032771
ER
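The mutual learning between the shape-agnostic and shape-aware networks
can be illustrated with a confidence-filtered cross-pseudo-labeling loss
on unlabeled data, as sketched below. The threshold, the plain
max-probability confidence, and the symmetric form are assumptions; the
paper's uncertainty estimation, cross-style consistency, and
local-context loss are not reproduced here.

import torch
import torch.nn.functional as F

def mutual_pseudo_loss(logits_a, logits_b, conf_thresh=0.8):
    # each network supervises the other with its confident pseudo-labels
    prob_a, prob_b = logits_a.softmax(1), logits_b.softmax(1)
    conf_a, pseudo_a = prob_a.max(1)
    conf_b, pseudo_b = prob_b.max(1)
    loss_b = (F.cross_entropy(logits_b, pseudo_a, reduction="none")
              * (conf_a > conf_thresh)).mean()
    loss_a = (F.cross_entropy(logits_a, pseudo_b, reduction="none")
              * (conf_b > conf_thresh)).mean()
    return loss_a + loss_b

la = torch.randn(2, 4, 64, 64, requires_grad=True)  # shape-agnostic logits
lb = torch.randn(2, 4, 64, 64, requires_grad=True)  # shape-aware logits
mutual_pseudo_loss(la, lb).backward()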
PT J
AU Stevens, Tristan S W
Meral, Faik C
Yu, Jason
Apostolakis, Iason Z
Robert, Jean-Luc
Van Sloun, Ruud J G
TI Dehazing Ultrasound using Diffusion Models.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3363460
DT Journal Article
PD 2024-Feb-07
PY 2024
AB Echocardiography has been a prominent tool for the diagnosis of cardiac
disease. However, these diagnoses can be heavily impeded by poor image
quality. Acoustic clutter emerges due to multipath reflections imposed
by layers of skin, subcutaneous fat, and intercostal muscle between the
transducer and heart. As a result, haze and other noise artifacts pose a
real challenge to cardiac ultrasound imaging. In many cases, especially
with difficult-to-image patients such as patients with obesity, a
diagnosis from B-Mode ultrasound imaging is effectively rendered
unusable, forcing sonographers to resort to contrast-enhanced ultrasound
examinations or refer patients to other imaging modalities. Tissue
harmonic imaging has been a popular approach to combat haze, but in
severe cases is still heavily impacted by haze. Alternatively, denoising
algorithms are typically unable to remove highly structured and
correlated noise, such as haze. It remains a challenge to accurately
describe the statistical properties of structured haze, and develop an
inference method to subsequently remove it. Diffusion models have
emerged as powerful generative models and have shown their effectiveness
in a variety of inverse problems. In this work, we present a joint
posterior sampling framework that combines two separate diffusion models
to model the distribution of both clean ultrasound and haze in an
unsupervised manner. Furthermore, we demonstrate techniques for
effectively training diffusion models on radio-frequency ultrasound data
and highlight the advantages over image data. Experiments on both
in-vitro and in-vivo cardiac datasets show that the proposed dehazing
method effectively removes haze while preserving signals from weakly
reflected tissue.
OI Stevens, Tristan/0000-0002-8563-5931
ZS 0
ZA 0
ZR 0
TC 1
ZB 0
Z8 0
Z9 1
EI 1558-254X
DA 2024-02-09
UT MEDLINE:38324427
PM 38324427
ER
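The joint posterior sampling idea, two diffusion priors tied together by
an additive measurement model, can be caricatured as follows. The score
functions below are analytic placeholders for the two trained networks,
and the update rule is a generic annealed-Langevin step followed by a
residual redistribution, so this is a schematic of the structure only,
not the paper's sampler.

import torch

def joint_dehaze_step(s, h, y, score_signal, score_haze, sigma, step=0.5):
    # prior (score) updates from two independent diffusion models for the
    # clean signal s and the haze h, tied by the model y = s + h
    noise_s, noise_h = torch.randn_like(s), torch.randn_like(h)
    s = s + step * sigma**2 * score_signal(s, sigma) \
          + sigma * (2 * step) ** 0.5 * noise_s
    h = h + step * sigma**2 * score_haze(h, sigma) \
          + sigma * (2 * step) ** 0.5 * noise_h
    resid = y - (s + h)          # redistribute the measurement residual
    return s + 0.5 * resid, h + 0.5 * resid

# placeholder scores standing in for trained networks (pure assumption)
score_signal = lambda x, sigma: -x / (1 + sigma**2)
score_haze = lambda x, sigma: -x / (1 + sigma**2)
y = torch.randn(1, 1, 128, 128)             # hazy RF frame (toy stand-in)
s, h = torch.zeros_like(y), torch.zeros_like(y)
for sigma in torch.linspace(1.0, 0.01, 50):
    s, h = joint_dehaze_step(s, h, y, score_signal, score_haze, float(sigma))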
PT J
AU Hu, Wentao
Cheng, Lianglun
Huang, Guoheng
Yuan, Xiaochen
Zhong, Guo
Pun, Chi-Man
Zhou, Jian
Cai, Muyan
TI Learning From Incorrectness: Active Learning With Negative Pre-Training
and Curriculum Querying for Histological Tissue Classification
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 625
EP 637
DI 10.1109/TMI.2023.3313509
DT Article
PD FEB 2024
PY 2024
AB Patch-level histological tissue classification is an effective
pre-processing method for histological slide analysis. However, the
classification of tissue with deep learning requires expensive
annotation costs. To alleviate the limitations of annotation budgets,
the application of active learning (AL) to histological tissue
classification is a promising solution. Nevertheless, there is a large
imbalance in performance between categories during application, and the
tissues corresponding to the categories with relatively insufficient
performance are equally important for cancer diagnosis. In this paper,
we propose an active learning framework called ICAL, which contains
Incorrectness Negative Pre-training (INP) and Category-wise Curriculum
Querying (CCQ) to address the above problem from the inter-category
perspective and from the perspective of the categories themselves,
respectively. In particular, INP incorporates the unique mechanism of
active learning to treat the incorrect prediction results that obtained
from CCQ as complementary labels for negative pre-training, in order to
better distinguish similar categories during the training process. CCQ
adjusts the query weights based on the learning status of the INP-trained
model on each category, and utilizes uncertainty to evaluate and
compensate for query bias caused by inadequate category performance.
Experimental results on two histological tissue classification datasets
demonstrate that ICAL achieves performance approaching that of fully
supervised learning with less than 16% of the labeled data. In
comparison to the state-of-the-art active learning algorithms, ICAL
achieved better and more balanced performance in all categories and
maintained robustness with extremely low annotation budgets. The source
code will be released at https://github.com/LactorHwt/ICAL.
RI Pun, Chi Man/GRJ-3703-2022; Cai, mu-yan/K-3230-2015; Huang, Guoheng/; Zhou, Jian/; Pun, Chi-Man/; Hu, Wentao/
OI Huang, Guoheng/0000-0002-3640-3229; Zhou, Jian/0000-0002-6868-9866; Pun,
Chi-Man/0000-0003-1788-3746; Hu, Wentao/0000-0001-7743-0541
TC 0
ZA 0
ZR 0
Z8 0
ZB 0
ZS 0
Z9 0
C1 Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou 510006,
Peoples R China
C1 Macao Polytech Univ, Fac Appl Sci, Macau, Peoples R China
C1 Guangdong Univ Foreign Studies, Sch Informat Sci & Technol, Guangzhou
510420, Peoples R China
C1 Univ Macau, Dept Comp & Informat Sci, Macau, Peoples R China
C1 Sun Yat Sen Univ, Collaborat Innovat Ctr Canc Med, State Key Lab Oncol
South China,Canc Ctr, Guangdong Key Lab Nasopharyngeal Carcinoma Diag &,
Guangzhou 510060, Peoples R China
C1 Sun Yat sen Univ, Dept Radiol, Guangdong Key Lab Nasopharyngeal
Carcinoma Diag &, Canc Ctr, Guangzhou 510060, Peoples R China
C1 Shenzhen Univ, Med Sch, South China Hosp, Shenzhen 518116, Peoples R
China
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400016
PM 37682642
ER
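Negative pre-training with complementary labels, the core of INP, can be
written as a negative-learning loss that pushes down the probability of
the class a sample is known not to belong to. The sketch below assumes 9
tissue categories and plain logits; the coupling with CCQ's querying is
omitted.

import torch
import torch.nn.functional as F

def negative_learning_loss(logits, complementary_labels, eps=1e-7):
    # the complementary label says which class the sample is NOT,
    # so minimize that class's predicted probability
    probs = logits.softmax(dim=1)
    p_wrong = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_wrong).clamp(min=eps)).mean()

logits = torch.randn(16, 9, requires_grad=True)  # 9 tissue classes (assumed)
wrong_preds = torch.randint(0, 9, (16,))         # incorrect predictions
loss = negative_learning_loss(logits, wrong_preds)
loss.backward()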
PT J
AU van Harten, Louis D.
Stoker, Jaap
Isgum, Ivana
TI Robust Deformable Image Registration Using Cycle-Consistent Implicit
Representations
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 784
EP 793
DI 10.1109/TMI.2023.3321425
DT Article
PD FEB 2024
PY 2024
AB Recent works in medical image registration have proposed the use of
Implicit Neural Representations, demonstrating performance that rivals
state-of-the-art learning-based methods. However, these implicit
representations need to be optimized for each new image pair, which is a
stochastic process that may fail to converge to a global minimum. To
improve robustness, we propose a deformable registration method using
pairs of cycle-consistent Implicit Neural Representations: each implicit
representation is linked to a second implicit representation that
estimates the opposite transformation, causing each network to act as a
regularizer for its paired opposite. During inference, we generate
multiple deformation estimates by numerically inverting the paired
backward transformation and evaluating the consensus of the optimized
pair. This consensus improves registration accuracy over using a single
representation and results in a robust uncertainty metric that can be
used for automatic quality control. We evaluate our method with a 4D
lung CT dataset. The proposed cycle-consistent optimization method
reduces the optimization failure rate from 2.4% to 0.0% compared to the
current state-of-the-art. The proposed inference method improves
landmark accuracy by 4.5% and the proposed uncertainty metric detects
all instances where the registration method fails to converge to a
correct solution. We verify the generalizability of these results to
other data using a centerline propagation task in abdominal 4D MRI,
where our method achieves a 46% improvement in propagation consistency
compared with single-INR registration and demonstrates a strong
correlation between the proposed uncertainty metric and registration
accuracy.
RI Stoker, Jaap/AAH-7597-2019; Isgum, Ivana/H-8659-2014; van Harten, Louis/; Stoker, Jaap/
OI van Harten, Louis/0000-0002-0943-2825; Stoker, Jaap/0000-0002-9822-3784
TC 1
Z8 0
ZS 0
ZB 0
ZR 0
ZA 0
Z9 1
C1 Amsterdam UMC Locat Univ Amsterdam, Dept Biomed Engn & Phys, NL-1105 AZ
Amsterdam, Netherlands
C1 Univ Amsterdam, Informat Inst, NL-1098 XH Amsterdam, Netherlands
C1 Amsterdam UMC Locat Univ Amsterdam, Dept Radiol & Nucl Med, NL-1105 AZ
Amsterdam, Netherlands
C1 Amsterdam Gastroenterol Endocrinol Metab, NL-1105 AZ Amsterdam,
Netherlands
C1 Canc Ctr Amsterdam, NL-1080 HV Amsterdam, Netherlands
C1 Amsterdam UMC Locat Univ Amsterdam, Dept Biomed Engn, NL-1105 AZ
Amsterdam, Netherlands
C1 Amsterdam UMC Locat Univ Amsterdam, Dept Radiol & Nucl Med, NL-1105 AZ
Amsterdam, Netherlands
C3 Amsterdam UMC Locat Univ Amsterdam
C3 Amsterdam UMC Locat Univ Amsterdam
C3 Amsterdam Gastroenterol Endocrinol Metab
C3 Amsterdam UMC Locat Univ Amsterdam
C3 Amsterdam UMC Locat Univ Amsterdam
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400011
PM 37782589
ER
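The pairing of two implicit representations with a cycle-consistency
constraint can be sketched with tiny coordinate MLPs: the backward
network evaluated at the forward-warped point should undo the forward
displacement. The MLP sizes and sampling are assumptions, and the
image-similarity terms that drive the actual registration are omitted.

import torch
import torch.nn as nn

def make_inr(dim=3, width=128):
    # tiny coordinate MLP standing in for an Implicit Neural Representation
    return nn.Sequential(nn.Linear(dim, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, dim))

fwd, bwd = make_inr(), make_inr()      # paired INRs, opposite deformations
coords = torch.rand(1024, 3) * 2 - 1   # sampled normalized coordinates

u = fwd(coords)                              # forward displacement at x
v = bwd(coords + u)                          # backward displacement at x + u
cycle_loss = (u + v).pow(2).sum(-1).mean()   # v should undo u
cycle_loss.backward()

During inference, the consensus between the forward estimate and the
numerically inverted backward estimate yields the uncertainty metric
described in the abstract.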
PT J
AU Li, Wei
Liu, Guang-Hai
Fan, Haoyi
Li, Zuoyong
Zhang, David
TI Self-Supervised Multi-Scale Cropping and Simple Masked Attentive
Predicting for Lung CT-Scan Anomaly Detection
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 594
EP 607
DI 10.1109/TMI.2023.3313778
DT Article
PD JAN 2024
PY 2024
AB Anomaly detection for medical images has been widely explored by
training an out-of-distribution detector with only normal data.
However, detecting local and subtle irregularities without prior
knowledge of anomaly types brings challenges for lung CT-scan image
anomaly detection. In this paper, we propose a self-supervised framework
for learning representations of lung CT-scan images via both multi-scale
cropping and simple masked attentive predicting, which is capable of
constructing a powerful out-of-distribution detector. Firstly, we
propose CropMixPaste, a self-supervised augmentation task for generating
density shadow-like anomalies that encourage the model to detect local
irregularities of lung CT-scan images. Then, we propose a
self-supervised reconstruction block, named simple masked attentive
predicting block (SMAPB), to better refine local features by predicting
masked context information. Finally, the learned representations by
self-supervised tasks are used to build an out-of-distribution detector.
The results on real lung CT-scan datasets demonstrate the effectiveness
and superiority of our proposed method compared with state-of-the-art
methods.
RI Zhang, David/O-9396-2016; Li, Zuoyong/; Li, Wei/; Fan, Haoyi/; Liu, Guang-Hai/
OI Li, Zuoyong/0000-0003-0952-9915; Li, Wei/0000-0001-5271-8469; Fan,
Haoyi/0000-0001-9428-7812; Liu, Guang-Hai/0000-0002-1558-2694
ZR 0
ZA 0
ZS 0
ZB 0
Z8 0
TC 0
Z9 0
C1 Fujian Univ Technol, Coll Comp Sci & Math, Fuzhou 350118, Peoples R
China
C1 Guangxi Normal Univ, Coll Comp Sci & Engn, Guilin 541004, Peoples R
China
C1 Zhengzhou Univ, Sch Comp & Artificial Intelligence, Zhengzhou 450001,
Peoples R China
C1 Minjiang Univ, Coll Comp & Control Engn, Fujian Prov Key Lab Med Big
Data Engn, Fujian Prov Key Lab Informat Proc & Intelligent C, Fuzhou,
Peoples R China
C1 Chinese Univ Hong Kong Shenzhen, Sch Data Sci, Shenzhen 518172, Peoples
R China
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600023
PM 37695968
ER
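A CropMixPaste-style augmentation can be approximated by blending a
cropped patch back into the slice at another location, producing a
density-shadow-like pseudo anomaly together with its mask. The patch
scale, position, and blending weight below are illustrative choices, not
the published recipe.

import numpy as np

def crop_mix_paste(img, rng, alpha=0.5):
    # blend a cropped patch back into the slice at another location to
    # mimic a density-shadow-like local anomaly; returns image and mask
    h, w = img.shape
    ph, pw = rng.integers(h // 8, h // 3), rng.integers(w // 8, w // 3)
    sy, sx = rng.integers(0, h - ph), rng.integers(0, w - pw)
    ty, tx = rng.integers(0, h - ph), rng.integers(0, w - pw)
    patch = img[sy:sy + ph, sx:sx + pw]
    out = img.copy()
    region = out[ty:ty + ph, tx:tx + pw]
    out[ty:ty + ph, tx:tx + pw] = (1 - alpha) * region + alpha * patch
    mask = np.zeros_like(img)
    mask[ty:ty + ph, tx:tx + pw] = 1.0
    return out, mask

rng = np.random.default_rng(0)
ct_slice = rng.random((256, 256)).astype(np.float32)
aug, mask = crop_mix_paste(ct_slice, rng)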
PT J
AU Caudoux, Manon
Demeulenaere, Oscar
Poree, Jonathan
Sauvage, Jack
Mateo, Philippe
Ghaleh, Bijan
Flesch, Martin
Ferin, Guillaume
Tanter, Mickael
Deffieux, Thomas
Papadacci, Clement
Pernot, Mathieu
TI Curved Toroidal Row Column Addressed Transducer for 3D Ultrafast
Ultrasound Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3279
EP 3291
DI 10.1109/TMI.2024.3391689
DT Article
PD SEP 2024
PY 2024
AB 3D imaging of the human heart at high frame rates is of major interest
for various clinical applications. Electronic complexity and cost have
prevented the dissemination of 3D ultrafast imaging into the clinic. Row
column addressed (RCA) transducers provide volumetric imaging at
ultrafast frame rate by using a low electronic channel count, but
current models are ill-suited for transthoracic cardiac imaging due to
field-of-view limitations. In this study, we proposed a mechanically
curved RCA with an aperture adapted for transthoracic cardiac imaging
($24 \times 16$ mm$^2$). The RCA has a toroidal curved surface of 96 elements
along columns (curvature radius rC = 4.47 cm) and 64 elements along rows
(curvature radius rR = 3 cm). We implemented delay and sum beamforming
with an analytical calculation of the propagation of a toroidal wave
which was validated using simulations (Field II). The imaging
performance was evaluated on a calibrated phantom. Experimental 3D
imaging was achieved up to 12 cm deep with a total angular aperture of
30 degrees for both lateral dimensions. The Contrast-to-Noise ratio
increased by 12 dB when going from 2 to 128 virtual sources. Then, 3D
Ultrasound Localization Microscopy (ULM) was characterized on a tube of
sub-wavelength diameter. Finally, 3D ULM was demonstrated on a perfused ex-vivo swine
heart to image the coronary microcirculation.
RI Demeulenaere, Oscar/IXW-9022-2023; Papadacci, Clément/AAA-4938-2022; Caudoux, Manon/; SAUVAGE, Jack/; Tanter, Mickael/H-4657-2012; Demeulenaere, Oscar/
OI Caudoux, Manon/0009-0000-5405-0554; SAUVAGE, Jack/0000-0001-9751-7388;
Tanter, Mickael/0000-0001-7739-8051; Demeulenaere,
Oscar/0000-0003-4492-5024
ZB 0
ZR 0
ZS 0
Z8 0
TC 0
ZA 0
Z9 0
C1 PSL Univ, INSERM, CNRS, Phys Med,ESPCI, F-75015 Paris, France
C1 Ecole Natl Veterinaire Alfort, Inserm, UPEC, U955,IMRB, F-94700 Maisons
Alfort, France
C1 Vermon, F-37000 Tours, France
C3 Vermon
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600004
PM 38640053
ER
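Delay-and-sum with an analytically computed transmit delay can be
sketched for a single image point as below. For simplicity the sketch
uses a straight (planar) element layout and a point-like virtual source;
the paper instead derives the propagation delay of a toroidal wave for
the curved rows and columns, so treat this purely as the generic DAS
skeleton with assumed geometry and sampling values.

import numpy as np

# sound speed (m/s) and RF sampling rate (Hz); typical assumed values
c, fs = 1540.0, 20e6
elems = np.stack([np.linspace(-0.012, 0.012, 64), np.zeros(64)], axis=1)
src = np.array([0.0, -0.03])    # virtual source behind the aperture
pt = np.array([0.002, 0.04])    # image point to reconstruct

t_tx = np.linalg.norm(pt - src) / c              # transmit delay via source
t_rx = np.linalg.norm(elems - pt, axis=1) / c    # receive delay per element
rf = np.random.randn(64, 4096)                   # stand-in RF channel data
idx = np.round((t_tx + t_rx) * fs).astype(int)
pixel = rf[np.arange(64), np.clip(idx, 0, rf.shape[1] - 1)].sum()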
PT J
AU Shaker, Abdelrahman
Maaz, Muhammad
Rasheed, Hanoona
Khan, Salman
Yang, Ming-Hsuan
Khan, Fahad Shahbaz
TI UNETR plus plus : Delving Into Efficient and Accurate 3D Medical Image
Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3377
EP 3390
DI 10.1109/TMI.2024.3398728
DT Article
PD SEP 2024
PY 2024
AB Owing to the success of transformer models, recent works study their
applicability in 3D medical segmentation tasks. Within the transformer
models, the self-attention mechanism is one of the main building blocks
that strives to capture long-range dependencies, compared to the local
convolutional-based design. However, the self-attention operation has
quadratic complexity which proves to be a computational bottleneck,
especially in volumetric medical imaging, where the inputs are 3D with
numerous slices. In this paper, we propose a 3D medical image
segmentation approach, named UNETR++, that offers both high-quality
segmentation masks as well as efficiency in terms of parameters, compute
cost, and inference speed. The core of our design is the introduction of
a novel efficient paired attention (EPA) block that efficiently learns
spatial and channel-wise discriminative features using a pair of
inter-dependent branches based on spatial and channel attention. Our
spatial attention formulation is efficient and has linear complexity
with respect to the input. To enable communication between spatial and
channel-focused branches, we share the weights of query and key mapping
functions that provide a complementary benefit (paired attention), while
also reducing the complexity. Our extensive evaluations on five
benchmarks, Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung, reveal the
effectiveness of our contributions in terms of both efficiency and
accuracy. On Synapse, our UNETR++ sets a new state-of-the-art with a
Dice Score of 87.2%, while significantly reducing parameters and FLOPs
by over 71%, compared to the best method in the literature. Our code and
models are available at: https://tinyurl.com/2p87x5xn.
RI Yang, Ming-Hsuan/AAE-7350-2019; Maaz, Muhammad/GOK-1100-2022; Yang, Ming-Hsuan/T-9533-2019; Shaker, Abdelrahman/; Khan, Salman Hameed/M-4834-2016
OI Yang, Ming-Hsuan/0000-0003-4848-2304; Shaker,
Abdelrahman/0000-0001-7651-4057; Khan, Salman Hameed/0000-0002-9502-1749
ZR 0
ZB 0
Z8 0
ZA 0
TC 4
ZS 0
Z9 4
C1 Mohamed Bin Zayed Univ Artificial Intelligence, Comp Vis Dept, Abu
Dhabi, U Arab Emirates
C1 Univ Calif Merced, Elect Engn & Comp Sci Dept, Merced, CA 95343 USA
C1 Yonsei Univ, Coll Comp, Seoul 03722, South Korea
C1 Google, Mountain View, CA 95344 USA
C1 Mohamed Bin Zayed Univ, Abu Dhabi, U Arab Emirates
C1 Linkoping Univ, Elect Engn Dept, S-58183 Linkoping, Sweden
C3 Mohamed Bin Zayed Univ
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600002
PM 38722726
ER
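The shared query/key idea behind the efficient paired attention (EPA)
block can be sketched as follows: both branches reuse the same Q/K
projections, the channel branch attends over the channel axis (cost
independent of token count), and each branch keeps its own value
projection. The spatial branch is written as ordinary softmax attention
here for brevity, whereas the paper's spatial attention is linear in the
input; everything else below is an assumption as well.

import torch
import torch.nn as nn

class PairedAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)          # shared between both branches
        self.k = nn.Linear(dim, dim)          # shared between both branches
        self.v_spatial = nn.Linear(dim, dim)
        self.v_channel = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        scale = x.size(-1) ** 0.5
        q, k = self.q(x), self.k(x)
        # channel branch: dim-by-dim attention, independent of token count
        attn_c = ((q.transpose(1, 2) @ k) / scale).softmax(-1)
        out_c = (attn_c @ self.v_channel(x).transpose(1, 2)).transpose(1, 2)
        # spatial branch: same q/k reused (standard form for brevity)
        attn_s = ((q @ k.transpose(1, 2)) / scale).softmax(-1)
        out_s = attn_s @ self.v_spatial(x)
        return x + out_s + out_c

x = torch.randn(2, 512, 96)    # assumed: 512 voxel tokens, 96 channels
y = PairedAttention(96)(x)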
PT J
AU Gao, Jun
Lao, Qicheng
Kang, Qingbo
Liu, Paul
Du, Chenlin
Li, Kang
Zhang, Le
TI Boosting Your Context by Dual Similarity Checkup for In-Context Learning
Medical Image Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3440311
DT Journal Article
PD 2024-Aug-08
PY 2024
AB The recent advent of in-context learning (ICL) capabilities in large
pre-trained models has yielded significant advancements in the
generalization of segmentation models. By supplying domain-specific
image-mask pairs, the ICL model can be effectively guided to produce
optimal segmentation outcomes, eliminating the necessity for model
fine-tuning or interactive prompting. However, existing ICL-based
segmentation models exhibit significant limitations when
applied to medical segmentation datasets with substantial diversity. To
address this issue, we propose a dual similarity checkup approach to
guarantee the effectiveness of selected in-context samples so that their
guidance can be maximally leveraged during inference. We first employ
large pre-trained vision models for extracting strong semantic
representations from input images and constructing a feature embedding
memory bank for semantic similarity checkup during inference. Having
assured similarity in the input semantic space, we then minimize the
discrepancy in the mask appearance distribution between the support set
and the estimated mask appearance prior through similarity-weighted
sampling and augmentation. We validate our proposed dual similarity
checkup approach on eight publicly available medical segmentation
datasets, and extensive experimental results demonstrate that our
proposed method significantly improves the performance metrics of
existing ICL-based segmentation models, particularly when applied to
medical image datasets characterized by substantial diversity.
RI Liu, Paul/; Li, Kang/; Du, Chenlin/; Jun, Gao/; Zhang, Le/AAD-9104-2019
OI Liu, Paul/0000-0001-8092-6353; Li, Kang/0000-0002-8136-9816; Du,
Chenlin/0000-0001-7039-8542; Jun, Gao/0009-0006-6830-5849; Zhang,
Le/0000-0002-3708-1727
ZS 0
ZR 0
ZB 0
TC 0
Z8 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-08-10
UT MEDLINE:39115986
PM 39115986
ER
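The semantic similarity checkup can be pictured as a feature memory bank
over candidate image-mask pairs, queried by cosine similarity at
inference time to pick the in-context support set. The class below is an
illustrative stand-in; the embedding source, bank size, and k are
assumptions, and the second (mask-appearance) checkup is omitted.

import torch
import torch.nn.functional as F

class ContextMemoryBank:
    # store embeddings of candidate image-mask pairs, then pick the
    # closest ones as in-context examples for the ICL model
    def __init__(self):
        self.keys, self.items = [], []

    def add(self, embedding, item):
        self.keys.append(F.normalize(embedding, dim=-1))
        self.items.append(item)

    def topk(self, query, k=4):
        sims = torch.stack(self.keys) @ F.normalize(query, dim=-1)
        idx = sims.topk(min(k, len(self.items))).indices
        return [self.items[i] for i in idx.tolist()], sims[idx]

bank = ContextMemoryBank()
for i in range(100):               # embeddings from a frozen vision model
    bank.add(torch.randn(256), item=f"pair_{i}")
support, scores = bank.topk(torch.randn(256), k=4)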
PT J
AU Zhang, Yikun
Hu, Dianlin
Li, Wangyao
Zhang, Weijie
Chen, Gaoyu
Chen, Ronald C
Chen, Yang
Gao, Hao
TI 2V-CBCT: Two-Orthogonal-Projection based CBCT Reconstruction and Dose
Calculation for Radiation Therapy using Real Projection Data.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3439573
DT Journal Article
PD 2024-Aug-06
PY 2024
AB This work demonstrates the feasibility of
two-orthogonal-projection-based CBCT (2V-CBCT) reconstruction and dose
calculation for radiation therapy (RT) using real projection data; to
the best of our knowledge, this is the first 2V-CBCT feasibility study
with real projection data. RT treatments are often delivered in multiple
fractions, for which on-board CBCT is desirable to calculate the
delivered dose per fraction for the purpose of RT delivery quality
assurance and adaptive RT. However, not all RT treatments/fractions have
CBCT acquired, whereas two orthogonal projections are always available. The
question to be addressed in this work is the feasibility of 2V-CBCT for
the purpose of RT dose calculation. 2V-CBCT is a severely ill-posed
inverse problem for which we propose a coarse-to-fine learning strategy.
First, a 3D deep neural network that can extract and exploit the
inter-slice and intra-slice information is adopted to predict the
initial 3D volumes. Then, a 2D deep neural network is utilized to
fine-tune the initial 3D volumes slice-by-slice. During the fine-tuning
stage, a perceptual loss based on multi-frequency features is employed
to enhance the image reconstruction. Dose calculation results from both
photon and proton RT demonstrate that 2V-CBCT provides comparable
accuracy with full-view CBCT based on real projection data.
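The coarse-to-fine strategy can be pictured with the hedged PyTorch sketch
below; the module interfaces and tensor shapes are assumptions, and the
multi-frequency perceptual loss used during fine-tuning is not reproduced.

import torch
import torch.nn as nn

class CoarseToFine2VCBCT(nn.Module):
    # net3d: maps the two stacked projections to an initial 3D volume.
    # net2d: refines that coarse volume slice-by-slice.
    def __init__(self, net3d: nn.Module, net2d: nn.Module):
        super().__init__()
        self.net3d, self.net2d = net3d, net2d

    def forward(self, proj_ap, proj_lat):
        # proj_ap, proj_lat: (B, 1, H, W) orthogonal views.
        views = torch.cat([proj_ap, proj_lat], dim=1)    # (B, 2, H, W)
        coarse = self.net3d(views)                       # (B, 1, D, H, W), assumed
        refined = [self.net2d(coarse[:, :, z]) for z in range(coarse.shape[2])]
        return torch.stack(refined, dim=2)               # (B, 1, D, H, W)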
OI Hu, Dianlin/0000-0003-4857-9878; Zhang, Yikun/0000-0002-4048-4869
ZA 0
TC 0
ZS 0
ZB 0
Z8 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-08-09
UT MEDLINE:39106129
PM 39106129
ER
PT J
AU Hou, Qingshan
Wang, Yaqi
Cao, Peng
Cheng, Shuai
Lan, Linqi
Yang, Jinzhu
Liu, Xiaoli
Zaiane, Osmar R.
TI A Collaborative Self-Supervised Domain Adaptation for Low-Quality
Medical Image Enhancement
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2479
EP 2494
DI 10.1109/TMI.2024.3367367
DT Article
PD JUL 2024
PY 2024
AB Medical image analysis techniques have been employed in diagnosing and
screening clinical diseases. However, both poor medical image quality
and illumination style inconsistency increase uncertainty in clinical
decision-making, potentially resulting in clinician misdiagnosis. The
majority of current image enhancement methods primarily concentrate on
enhancing medical image quality by leveraging high-quality reference
images, which are challenging to collect in clinical applications. In
this study, we address image quality enhancement within a fully
self-supervised learning setting, wherein neither high-quality images
nor paired images are required. To achieve this goal, we investigate the
potential of self-supervised learning combined with domain adaptation to
enhance the quality of medical images without the guidance of
high-quality medical images. We design a Domain Adaptation
Self-supervised Quality Enhancement framework, called DASQE. More
specifically, we establish multiple domains at the patch level through a
designed rule-based quality assessment scheme and style clustering. To
achieve image quality enhancement and maintain style consistency, we
formulate the image quality enhancement as a collaborative
self-supervised domain adaptation task for disentangling the low-quality
factors, medical image content, and illumination style characteristics
by exploring intrinsic supervision in the low-quality medical images.
Finally, we perform extensive experiments on six benchmark datasets of
medical images, and the experimental results demonstrate that DASQE
attains state-of-the-art performance. Furthermore, we explore the impact
of the proposed method on various clinical tasks, such as retinal fundus
vessel/lesion segmentation, nerve fiber segmentation, polyp
segmentation, skin lesion segmentation, and disease classification. The
results demonstrate that DASQE is advantageous for diverse downstream
image analysis tasks.
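One plausible reading of the patch-level domain construction is sketched
below under stated assumptions: per-channel mean and standard deviation
stand in for the illumination style descriptor, and K-means stands in for
the style clustering; the paper's rule-based quality assessment scheme is
not reproduced.

import numpy as np
from sklearn.cluster import KMeans

def style_domains(patches, n_styles=4):
    # patches: (N, C, H, W) array of image patches. Illumination style is
    # summarized by per-channel mean and std; clustering these statistics
    # yields pseudo style-domain labels at the patch level.
    mu = patches.mean(axis=(2, 3))                    # (N, C)
    sigma = patches.std(axis=(2, 3))                  # (N, C)
    style_feats = np.concatenate([mu, sigma], axis=1)
    return KMeans(n_clusters=n_styles, n_init=10).fit_predict(style_feats)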
RI Zaiane, Osmar/KIA-3435-2024; Hou, Qingshan/
OI Hou, Qingshan/0000-0002-3839-1399
ZB 0
Z8 0
TC 0
ZR 0
ZS 0
ZA 0
Z9 0
C1 Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110000, Peoples R China
C1 Northeastern Univ, Minist Educ, Key Lab Intelligent Comp Med Image,
Shenyang 110000, Peoples R China
C1 Alibaba Grp, DAMO Acad, Hangzhou 311121, Peoples R China
C1 Univ Alberta, Amii, Edmonton, AB T5J 3B1, Canada
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100007
PM 38373127
ER
PT J
AU Li, Lei
Camps, Julia
Wang, Zhinuo (Jenny)
Beetz, Marcel
Banerjee, Abhirup
Rodriguez, Blanca
Grau, Vicente
TI Toward Enabling Cardiac Digital Twins of Myocardial Infarction Using
Deep Computational Models for Inverse Inference
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2466
EP 2478
DI 10.1109/TMI.2024.3367409
DT Article
PD JUL 2024
PY 2024
AB Cardiac digital twins (CDTs) have the potential to offer individualized
evaluation of cardiac function in a non-invasive manner, making them a
promising approach for personalized diagnosis and treatment planning of
myocardial infarction (MI). The inference of accurate myocardial tissue
properties is crucial in creating a reliable CDT of MI. In this work, we
investigate the feasibility of inferring myocardial tissue properties
from the electrocardiogram (ECG) within a CDT platform. The platform
integrates multi-modal data, such as cardiac MRI and ECG, to enhance the
accuracy and reliability of the inferred tissue properties. We perform a
sensitivity analysis based on computer simulations, systematically
exploring the effects of infarct location, size, degree of
transmurality, and electrical activity alteration on the simulated QRS
complex of ECG, to establish the limits of the approach. We subsequently
present a novel deep computational model, comprising a dual-branch
variational autoencoder and an inference model, to infer infarct
location and distribution from the simulated QRS. The proposed model
achieves mean Dice scores of 0.457 ± 0.317 and 0.302 ± 0.273 for the
inference of left ventricle scars and border zone, respectively. The
sensitivity analysis enhances our understanding
of the complex relationship between infarct characteristics and
electrophysiological features. The in silico experimental results show
that the model can effectively capture the relationship for the inverse
inference, with promising potential for clinical application in the
future. The code is available at
https://github.com/lileitech/MI_inverse_inference.
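For reference, the Dice overlap reported above can be computed with the
small, standard sketch below (ours, not taken from the linked repository).

import numpy as np

def dice(pred, gt, eps=1e-8):
    # Dice overlap between two binary masks (e.g., predicted vs. reference
    # left-ventricle scar), the metric quoted above.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)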
RI Banerjee, Abhirup/HDM-3126-2022; Grau, Vicente/; Wang, Zhinuo Jenny/; Beetz, Marcel/; Li, Lei/; Banerjee, Abhirup/
OI Grau, Vicente/0000-0001-8139-3480; Wang, Zhinuo
Jenny/0000-0001-5325-909X; Beetz, Marcel/0009-0004-5239-9313; Li,
Lei/0000-0003-1281-6472; Banerjee, Abhirup/0000-0001-8198-5128
ZS 0
TC 0
Z8 0
ZB 0
ZA 0
ZR 0
Z9 0
C1 Univ Oxford, Inst Biomed Engn, Dept Engn Sci, Oxford OX3 7DQ, England
C1 Univ Oxford, Dept Comp Sci, Oxford OX1 3QD, England
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100019
PM 38373128
ER
PT J
AU Schmidt, Adam
Mohareri, Omid
DiMaio, Simon P.
Salcudean, Septimiu E.
TI Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking
and Mapping
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2634
EP 2645
DI 10.1109/TMI.2024.3372828
DT Article
PD JUL 2024
PY 2024
AB Quantifying performance of methods for tracking and mapping tissue in
endoscopic environments is essential for enabling image guidance and
automation of medical interventions and surgery. Datasets developed so
far either use rigid environments, visible markers, or require
annotators to label salient points in videos after collection. These are
respectively: not general, visible to algorithms, or costly and
error-prone. We introduce a novel labeling methodology along with a
dataset that uses said methodology, Surgical Tattoos in Infrared (STIR).
STIR has labels that are persistent but invisible to visible spectrum
algorithms. This is done by labelling tissue points with IR-fluorescent
dye, indocyanine green (ICG), and then collecting visible light video
clips. STIR comprises hundreds of stereo video clips in both in vivo and
ex vivo scenes with start and end points labelled in the IR spectrum.
With over 3,000 labelled points, STIR will help to quantify and enable
better analysis of tracking and mapping methods. After introducing STIR,
we analyze multiple different frame-based tracking methods on STIR using
both 3D and 2D endpoint error and accuracy metrics. STIR is available at
https://dx.doi.org/10.21227/w8g4-g548
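The endpoint-error and accuracy metrics mentioned above can be illustrated
with this short sketch (our simplification; the threshold tau is a free
parameter, not a value from the paper).

import numpy as np

def endpoint_error(pred_pts, gt_pts):
    # pred_pts, gt_pts: (N, 2) or (N, 3) tracked vs. labelled endpoints.
    return np.linalg.norm(pred_pts - gt_pts, axis=1)

def accuracy_within(pred_pts, gt_pts, tau):
    # Fraction of points tracked to within a radius tau (pixels or mm).
    return float((endpoint_error(pred_pts, gt_pts) < tau).mean())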
OI Salcudean, Septimiu/0000-0001-8826-8025; Schmidt,
Adam/0000-0003-4769-4313
ZR 0
TC 0
ZA 0
Z8 0
ZB 0
ZS 0
Z9 0
C1 Univ British Columbia, Dept Elect Engn, Vancouver, BC V5T 2M9, Canada
C1 Intuit Surg, Adv Res, Sunnyvale, CA 94086 USA
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100012
PM 38437151
ER
PT J
AU Shen, Chengkang
Zhu, Hao
Zhou, You
Liu, Yu
Yi, Si
Dong, Lili
Zhao, Weipeng
Brady, David J
Cao, Xun
Ma, Zhan
Lin, Yi
TI Continuous 3D Myocardial Motion Tracking via Echocardiography.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3419780
DT Journal Article
PD 2024-Jun-27
PY 2024
AB Myocardial motion tracking stands as an essential clinical tool in the
prevention and detection of cardiovascular diseases (CVDs), the foremost
cause of death globally. However, current techniques suffer from
incomplete and inaccurate motion estimation of the myocardium in both
spatial and temporal dimensions, hindering the early identification of
myocardial dysfunction. To address these challenges, this paper
introduces the Neural Cardiac Motion Field (NeuralCMF). NeuralCMF
leverages implicit neural representation (INR) to model the 3D structure
and the comprehensive 6D forward/backward motion of the heart. This
method surpasses pixel-wise limitations by offering the capability to
continuously query the precise shape and motion of the myocardium at any
specific point throughout the cardiac cycle, enhancing the detailed
analysis of cardiac dynamics beyond traditional speckle tracking.
Notably, NeuralCMF operates without the need for paired datasets, and
its optimization is self-supervised through the physics knowledge priors
in both space and time dimensions, ensuring compatibility with both 2D
and 3D echocardiogram video inputs. Experimental validations across
three representative datasets support the robustness and innovative
nature of the NeuralCMF, marking significant advantages over existing
state-of-the-art methods in cardiac imaging and motion tracking. Code is
available at: https://njuvision.github.io/NeuralCMF.
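The defining INR property, querying motion at any continuous space-time
point, can be sketched as a toy coordinate MLP (layer sizes are
assumptions; the actual NeuralCMF network and its physics-based
self-supervision are not reproduced).

import torch
import torch.nn as nn

class MotionINR(nn.Module):
    # Toy coordinate network: a continuous (x, y, z, t) query returns a
    # 3D displacement, so shape/motion can be read out at any point of
    # the cardiac cycle without a fixed pixel grid.
    def __init__(self, hidden=256, depth=5):
        super().__init__()
        dims = [4] + [hidden] * depth + [3]
        layers = []
        for i in range(len(dims) - 1):
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        self.mlp = nn.Sequential(*layers[:-1])  # no activation on the output

    def forward(self, xyzt):                    # xyzt: (N, 4), normalized
        return self.mlp(xyzt)                   # (N, 3) displacement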
OI Zhu, Hao/0000-0002-6756-9571; Zhou, You/0000-0001-5319-0934
ZB 0
TC 0
ZS 0
ZA 0
ZR 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-06-29
UT MEDLINE:38935475
PM 38935475
ER
PT J
AU Yang, Wenhui
Gao, Shuo
Zhang, Hao
Yu, Hong
Xu, Menglei
Chong, Puimun
Zhang, Weijie
Wang, Hong
Zhang, Wenjuan
Qian, Airong
TI PtbNet: Based on Local Few-Shot Classes and Small Objects to accurately
detect PTB.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3419134
DT Journal Article
PD 2024-Jun-26
PY 2024
AB Pulmonary Tuberculosis (PTB) is one of the world's most infectious
illnesses, and its early detection is critical for preventing PTB.
Digital Radiography (DR) has been the most common and effective
technique to examine PTB. However, due to the variety and weak
specificity of phenotypes on DR chest X-ray (DCR), it is difficult to
make reliable diagnoses for radiologists. Although artificial
intelligence technology has made considerable gains in assisting the
diagnosis of PTB, it lacks methods to identify the lesions of PTB with
few-shot classes and small objects. To solve these problems, geometric
data augmentation was used to enlarge the DCR dataset. For this
purpose, a diffusion probability model was implemented for six few-shot
classes. Importantly, we propose a new multi-lesion detector PtbNet
based on RetinaNet, which was constructed to detect small objects of PTB
lesions. The results showed that by two data augmentations, the number
of DCRs increased by 80% from 570 to 2,859. In the pre-evaluation
experiments with the baseline, RetinaNet, the AP improved by 9.9 for six
few-shot classes. Our extensive empirical evaluation showed that the AP
of PtbNet achieved 28.2, outperforming the other 9 state-of-the-art
methods. In the ablation study, with BiFPN+ and PSPD-Conv combined, the
AP increased by 2.1, APs increased by 5.0, and APm and APl grew by an
average of 9.8. In summary, PtbNet not only improves the detection of
small-object lesions but also enhances the ability to detect different
types of PTB uniformly, which helps physicians diagnose PTB lesions
accurately. The code is available at
https://github.com/Wenhui-person/PtbNet/tree/master.
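As a hedged illustration of the geometric augmentation step only (the
transform types and magnitudes are assumptions; in real detection training
the lesion boxes must be transformed consistently with the images, which
this image-only pipeline does not do):

import torchvision.transforms as T

# Toy geometric augmentation pipeline for DR chest images.
geom_aug = T.Compose([
    T.RandomRotation(degrees=10),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
])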
ZA 0
ZB 0
TC 0
ZS 0
Z8 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-06-29
UT MEDLINE:38923480
PM 38923480
ER
PT J
AU Ching-Roa, Vincent D.
Huang, Chi Z.
Giacomelli, Michael G.
TI Suppression of Subpixel Jitter in Resonant Scanning Systems With
Phase-locked Sampling
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2159
EP 2168
DI 10.1109/TMI.2024.3358191
DT Article
PD JUN 2024
PY 2024
AB Resonant scanning is critical to high speed and in vivo imaging in many
applications of laser scanning microscopy. However, resonant scanning
suffers from well-known image artifacts due to scanner jitter, limiting
adoption of high-speed imaging technologies. Here, we introduce a
real-time, inexpensive and all electrical method to suppress jitter more
than an order of magnitude below the diffraction limit that can be
applied to most existing microscope systems with no software changes. By
phase-locking imaging to the resonant scanner period, we demonstrate an
86% reduction in pixel jitter, a 15% improvement in point spread
function with resonant scanning and show that this approach enables two
widely used models of resonant scanners to achieve comparable accuracy
to galvanometer scanners running two orders of magnitude slower.
Finally, we demonstrate the versatility of this method by retrofitting a
commercial two photon microscope and show that this approach enables
significant quantitative and qualitative improvements in biological
imaging.
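The core idea of phase-locking, sampling each line at fixed phases of the
resonant period rather than at fixed clock times, can be emulated
numerically as below; the paper's implementation is electrical and
real-time, so this is purely an offline illustration.

import numpy as np

def phase_locked_resample(signal, trigger_idx, samples_per_line):
    # signal: 1-D stream of detector samples; trigger_idx: sample indices
    # at which each resonant period starts. Sampling at fixed *phases* of
    # each period, instead of fixed clock times, cancels per-line jitter.
    lines = []
    t = np.arange(len(signal))
    for start, stop in zip(trigger_idx[:-1], trigger_idx[1:]):
        phases = np.linspace(start, stop, samples_per_line, endpoint=False)
        lines.append(np.interp(phases, t, signal))
    return np.stack(lines)                      # (n_lines, samples_per_line)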
OI Giacomelli, Michael/0000-0002-2570-0770
ZR 0
TC 0
ZS 0
ZA 0
Z8 0
ZB 0
Z9 0
C1 Univ Rochester, Dept Biomed Engn, Rochester, NY 14627 USA
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100017
PM 38265914
ER
PT J
AU Wang, Pengyu
Zhang, Huaqi
Zhu, Meilu
Jiang, Xi
Qin, Jing
Yuan, Yixuan
TI MGIML: Cancer Grading With Incomplete Radiology-Pathology Data via
Memory Learning and Gradient Homogenization
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2113
EP 2124
DI 10.1109/TMI.2024.3355142
DT Article
PD JUN 2024
PY 2024
AB Taking advantage of multi-modal radiology-pathology data with
complementary clinical information for cancer grading is helpful for
doctors to improve diagnosis efficiency and accuracy. However, radiology
and pathology data have distinct acquisition difficulties and costs,
which leads to incomplete-modality data being common in applications. In
this work, we propose a Memory- and Gradient-guided Incomplete
Multi-modal Learning (MGIML) framework for cancer grading with
incomplete radiology-pathology data. Firstly, to remedy missing-modality
information, we propose a Memory-driven Hetero-modality Complement
(MH-Complete) scheme, which constructs modal-specific memory banks
constrained by a coarse-grained memory boosting (CMB) loss to record
generic radiology and pathology feature patterns, and develops a
cross-modal memory reading strategy enhanced by a fine-grained memory
consistency (FMC) loss to take missing-modality information from
well-stored memories. Secondly, as gradient conflicts exist between
missing-modality situations, we propose a Rotation-driven Gradient
Homogenization (RG-Homogenize) scheme, which estimates instance-specific
rotation matrices to smoothly change the feature-level gradient
directions, and computes confidence-guided homogenization weights to
dynamically balance gradient magnitudes. By simultaneously mitigating
gradient direction and magnitude conflicts, this scheme well avoids the
negative transfer and optimization imbalance problems. Extensive
experiments on CPTAC-UCEC and CPTAC-PDA datasets show that the proposed
MGIML framework performs favorably against state-of-the-art multi-modal
methods on missing-modality situations.
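A simplified software stand-in for the gradient homogenization idea (this
sketch uses a PCGrad-style projection plus mean-norm rescaling, not the
paper's instance-specific rotation matrices or confidence-guided weights):

import torch

def harmonize(grad_a, grad_b, eps=1e-12):
    # grad_a, grad_b: flattened gradients from two missing-modality cases.
    # Direction: if they conflict, project each onto the other's normal
    # plane. Magnitude: rescale both to their mean norm.
    if torch.dot(grad_a, grad_b) < 0:
        a0, b0 = grad_a.clone(), grad_b.clone()
        grad_a = a0 - torch.dot(a0, b0) / (b0.norm() ** 2 + eps) * b0
        grad_b = b0 - torch.dot(b0, a0) / (a0.norm() ** 2 + eps) * a0
    target = 0.5 * (grad_a.norm() + grad_b.norm())
    grad_a = grad_a * target / (grad_a.norm() + eps)
    grad_b = grad_b * target / (grad_b.norm() + eps)
    return grad_a, grad_b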
RI Jiang, Xi/ADT-2238-2022; Zhu, Meilu/CAI-0127-2022; Wang, Pengyu/ISA-9451-2023; WANG, Pengyu/; Yuan, Yixuan/KSL-8440-2024; Zhu, Meilu/; Jiang, Xi/
OI WANG, Pengyu/0000-0003-0997-9887; Yuan, Yixuan/0000-0002-0853-6948; Zhu,
Meilu/0000-0002-5563-7282; Jiang, Xi/0000-0003-3711-0847
ZS 0
TC 0
ZB 0
ZA 0
ZR 0
Z8 0
Z9 0
C1 Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
C1 Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing 100091, Peoples R
China
C1 Univ Elect Sci & Technol China, Sch Life Sci & Technol, MOE Key Lab
Neuroinformat, Chengdu 611731, Peoples R China
C1 Hong Kong Polytech Univ, Sch Nursing, Hong Kong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100002
PM 38231819
ER
PT J
AU Huang, Wei
Zhang, Lei
Wang, Zizhou
Wang, Lituan
TI Exploring Inherent Consistency for Semi-supervised Anatomical Structure
Segmentation in Medical Imaging.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3400840
DT Journal Article
PD 2024-May-14
PY 2024
AB Due to the exorbitant expense of obtaining labeled data in the field of
medical image analysis, semi-supervised learning has emerged as a
favorable method for the segmentation of anatomical structures. Although
semi-supervised learning techniques have shown great potential in this
field, existing methods only utilize image-level spatial consistency to
impose unsupervised regularization on data in label space. Considering
that anatomical structures often possess inherent anatomical properties
that have not been focused on in previous works, this study introduces
the inherent consistency into semi-supervised anatomical structure
segmentation. First, the prediction and the ground-truth are projected
into an embedding space to obtain latent representations that
encapsulate the inherent anatomical properties of the structures. Then,
two inherent consistency constraints are designed to leverage these
inherent properties by aligning these latent representations. The
proposed method is plug-and-play and can be seamlessly integrated with
existing methods, thereby collaborating to improve segmentation
performance and enhance the anatomical plausibility of the results. To
evaluate the effectiveness of the proposed method, experiments are
conducted on three public datasets (ACDC, LA, and Pancreas). Extensive
experimental results demonstrate that the proposed method exhibits good
generalizability and outperforms several state-of-the-art methods.
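A minimal sketch of an inherent consistency constraint, assuming an
embedding network `embed` that projects soft segmentation maps into the
latent space; the paper's two specific constraints are not reproduced.

import torch.nn.functional as F

def inherent_consistency_loss(embed, pred_mask, gt_mask):
    # embed: an assumed network projecting soft segmentation maps into a
    # latent space that encodes anatomical properties. Aligning the two
    # latent codes regularizes beyond pixel-wise agreement.
    z_pred = F.normalize(embed(pred_mask), dim=1)
    z_gt = F.normalize(embed(gt_mask), dim=1)
    return (1.0 - (z_pred * z_gt).sum(dim=1)).mean()   # cosine distance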
OI Huang, Wei/0000-0003-2700-4928
ZB 0
TC 0
ZR 0
Z8 0
ZS 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-05-16
UT MEDLINE:38743533
PM 38743533
ER
PT J
AU Fu, Wenli
Hu, Huijun
Li, Xinyue
Guo, Rui
Chen, Tao
Qian, Xiaohua
TI A Generalizable Causal-Invariance-Driven Segmentation Model for
Peripancreatic Vessels.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3400528
DT Journal Article
PD 2024-May-13
PY 2024
AB Segmenting peripancreatic vessels in CT, including the superior
mesenteric artery (SMA), the coeliac artery (CA), and the partial portal
venous system (PPVS), is crucial for preoperative resectability analysis
in pancreatic cancer. However, the clinical applicability of vessel
segmentation methods is impeded by the low generalizability on
multi-center data, mainly attributed to the wide variations in image
appearance, namely the spurious correlation factor. Therefore, we
propose a causal-invariance-driven generalizable segmentation model for
peripancreatic vessels. It incorporates interventions at both image and
feature levels to guide the model to capture causal information by
enforcing consistency across datasets, thus enhancing the generalization
performance. Specifically, firstly, a contrast-driven image intervention
strategy is proposed to construct image-level interventions by
generating images with various contrast-related appearances and seeking
invariant causal features. Secondly, the feature intervention strategy
is designed, where various patterns of feature bias across different
centers are simulated to pursue invariant prediction. The proposed model
achieved high DSC scores (79.69%, 82.62%, and 83.10%) for the three
vessels on a cross-validation set containing 134 cases. Its
generalizability was further confirmed on three independent test sets of
233 cases. Overall, the proposed method provides an accurate and
generalizable segmentation model for peripancreatic vessels and offers a
promising paradigm for increasing the generalizability of segmentation
models from a causality perspective. Our source codes will be released
at https://github.com/SJTUBME-QianLab/PC_VesselSeg.
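A hedged sketch of the contrast-driven image intervention (the gamma and
gain ranges are arbitrary illustration values): predictions should be
invariant under appearance perturbations that leave the causal anatomy
unchanged.

import torch

def contrast_intervention(img, gamma_range=(0.7, 1.4), gain_range=(0.9, 1.1)):
    # Perturb contrast-related appearance (gamma and gain) while leaving
    # the anatomy untouched; img is assumed scaled to [0, 1].
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    gain = torch.empty(1).uniform_(*gain_range).item()
    return (img.clamp(0, 1) ** gamma) * gain

def invariance_loss(model, img):
    # Predictions should agree across two random appearance interventions,
    # since the causal (anatomical) content is identical.
    return torch.mean((model(contrast_intervention(img)) -
                       model(contrast_intervention(img))) ** 2)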
OI Li, Xinyue/0000-0002-9562-3440; CHEN, TAO/0000-0002-1797-0359
ZR 0
ZA 0
ZB 0
ZS 0
TC 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-05-16
UT MEDLINE:38739508
PM 38739508
ER
PT J
AU Wang, Jinhong
Xu, Zhe
Zheng, Wenhao
Ying, Haochao
Chen, Tingting
Liu, Zuozhu
Chen, Danny Z.
Yao, Ke
Wu, Jian
TI A Transformer-Based Knowledge Distillation Network for Cortical Cataract
Grading
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1089
EP 1101
DI 10.1109/TMI.2023.3327274
DT Article
PD MAR 2024
PY 2024
AB Cortical cataract, a common type of cataract, is particularly difficult
to be diagnosed automatically due to the complex features of the
lesions. Recently, many methods based on edge detection or deep learning
were proposed for automatic cataract grading. However, these methods
suffer a large performance drop in cortical cataract grading due to the
more complex cortical opacities and uncertain data. In this paper, we
propose a novel Transformer-based Knowledge Distillation Network, called
TKD-Net, for cortical cataract grading. To tackle the complex opacity
problem, we first devise a zone decomposition strategy to extract more
refined features and introduce special sub-scores to consider critical
factors of clinical cortical opacity assessment (location, area,
density) for comprehensive quantification. Next, we develop a
multi-modal mix-attention Transformer to efficiently fuse sub-scores and
image modality for complex feature learning. However, obtaining the
sub-score modality is a challenge in the clinic, which could cause the
modality missing problem instead. To simultaneously alleviate the issues
of modality missing and uncertain data, we further design a
Transformer-based knowledge distillation method, which uses a teacher
model with perfect data to guide a student model with modality-missing
and uncertain data. We conduct extensive experiments on a dataset of
commonly-used slit-lamp images annotated by the LOCS III grading system
to demonstrate that our TKD-Net outperforms state-of-the-art methods, as
well as the effectiveness of its key components.
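The distillation component can be grounded with the standard
temperature-softened KL objective below (a generic sketch; the
Transformer-specific details of TKD-Net are not reproduced).

import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # The teacher sees complete image + sub-score data; the student sees
    # modality-missing, uncertain data. Temperature-softened teacher
    # posteriors carry the cross-modal knowledge.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T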
RI Liu, Zuozhu/IUN-8338-2023; Zheng, Wenhao/KBR-3871-2024; Ying, Haochao/; Zheng, Wenhao/; Liu, Zuozhu/; Jinhong, Wang/; Wu, Jian/
OI Ying, Haochao/0000-0001-7832-2518; Zheng, Wenhao/0000-0002-7108-370X;
Liu, Zuozhu/0000-0002-7816-502X; Jinhong, Wang/0000-0002-3684-2002; Wu,
Jian/0000-0002-3230-6392
ZB 0
Z8 0
ZA 0
ZR 0
TC 0
ZS 0
Z9 0
C1 Zhejiang Univ, Affiliated Hosp 2, Coll Comp Sci & Technol, Hangzhou
310027, Peoples R China
C1 Zhejiang Univ, Affiliated Hosp 2, Eye Ctr, Hangzhou 310027, Peoples R
China
C1 Zhejiang Univ, Affiliated Hosp 2, Eye Ctr, Sch Med, Hangzhou 310009,
Zhejiang, Peoples R China
C1 Zhejiang Univ, Sch Publ Hlth, Hangzhou 310058, Peoples R China
C1 Zhejiang Univ, ZJU UIUC Inst, Res & Dev Ctr Intelligent Healthcare, ZJU
Angelalign Inc, Haining 310058, Peoples R China
C1 Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
C1 Zhejiang Univ, Affiliated Hosp 2, Sch Med, Sch Publ Hlth, Hangzhou
310058, Peoples R China
C1 Zhejiang Univ, Inst Wenzhou, Hangzhou 310058, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600023
PM 37874703
ER
PT J
AU Vu, Tri
Klippel, Paul
Canning, Aidan J.
Ma, Chenshuo
Zhang, Huijuan
Kasatkina, Ludmila A.
Tang, Yuqi
Xia, Jun
Verkhusha, Vladislav V.
Tuan Vo-Dinh
Jing, Yun
Yao, Junjie
TI On the Importance of Low-Frequency Signals in Functional and Molecular
Photoacoustic Computed Tomography
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 771
EP 783
DI 10.1109/TMI.2023.3320668
DT Article
PD FEB 2024
PY 2024
AB In photoacoustic computed tomography (PACT) with short-pulsed laser
excitation, wideband acoustic signals are generated in biological
tissues with frequencies related to the effective shapes and sizes of
the optically absorbing targets. Low-frequency photoacoustic signal
components correspond to slowly varying spatial features and are often
omitted during imaging due to the limited detection bandwidth of the
ultrasound transducer, or during image reconstruction as undesired
background that degrades image contrast. Here we demonstrate that
low-frequency photoacoustic signals, in fact, contain functional and
molecular information, and can be used to enhance structural visibility,
improve quantitative accuracy, and reduce sparse-sampling artifacts. We
provide an in-depth theoretical analysis of low-frequency signals in
PACT, and experimentally evaluate their impact on several representative
PACT applications, such as mapping temperature in photothermal
treatment, measuring blood oxygenation in a hypoxia challenge, and
detecting photoswitchable molecular probes in deep organs. Our results
strongly suggest that low-frequency signals are important for functional
and molecular PACT.
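A simple way to examine the low- and high-frequency contributions
separately is to band-split the RF data, as sketched below; the 1 MHz
crossover is an arbitrary illustration value, not a figure from the paper.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(rf, fs, f_cut=1e6):
    # rf: (n_channels, n_samples) photoacoustic RF data; fs: sampling
    # rate in Hz. Returns (low-frequency, high-frequency) components so
    # their separate contributions to the reconstruction can be compared.
    sos_lo = butter(4, f_cut, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, f_cut, btype="high", fs=fs, output="sos")
    return sosfiltfilt(sos_lo, rf, axis=1), sosfiltfilt(sos_hi, rf, axis=1)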
RI yao, junjie/G-9226-2014; Canning, Aidan/KGK-6921-2024; Kasatkina, Ludmila/O-3953-2017; Canning, Aidan/; Yao, Junjie/; Vu, Tri/; Zhang, Huijuan/
OI Kasatkina, Ludmila/0000-0003-3935-5389; Canning,
Aidan/0000-0002-8001-6496; Yao, Junjie/0000-0002-2381-706X; Vu,
Tri/0000-0002-1384-8647; Zhang, Huijuan/0000-0002-5139-1430
ZB 0
ZS 0
Z8 0
ZR 0
ZA 0
TC 2
Z9 2
C1 Duke Univ, Photoacoust Imaging Lab, Durham, NC 27708 USA
C1 Penn State Univ, Grad Program Acoust, University Pk, PA 16802 USA
C1 Duke Univ, Dept Biomed Engn, Dept Chem, Durham, NC 27708 USA
C1 Duke Univ, Fitzpatrick Inst Photon, Durham, NC 27708 USA
C1 SUNY Buffalo, Dept Biomed Engn, Buffalo, NY 14260 USA
C1 Albert Einstein Coll Med, Dept Genet, Bronx, NY 10461 USA
C1 Albert Einstein Coll Med, Gruss Lipper Biophoton Ctr, Bronx, NY 10461
USA
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400002
PM 37773898
ER
PT J
AU Xie, Jiaming
Zhang, Qing
Cui, Zhiming
Ma, Chong
Zhou, Yan
Wang, Wenping
Shen, Dinggang
TI Integrating Eye Tracking with Grouped Fusion Networks for Semantic
Segmentation on Mammogram Images.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3468404
DT Journal Article
PD 2024-Sep-27
PY 2024
AB Medical image segmentation has seen great progress in recent years,
largely due to the development of deep neural networks. However, unlike
in computer vision, high-quality clinical data is relatively scarce, and
the annotation process is often a burden for clinicians. As a result,
the scarcity of medical data limits the performance of existing medical
image segmentation models. In this paper, we propose a novel framework
that integrates eye tracking information from experienced radiologists
during the screening process to improve the performance of deep neural
networks with limited data. Our approach, a grouped hierarchical
network, guides the network to learn from its faults by using gaze
information as weak supervision. We demonstrate the effectiveness of our
framework on mammogram images, particularly for handling segmentation
classes with large scale differences. We evaluate the impact of gaze
information on medical image segmentation tasks and show that our method
achieves better segmentation performance compared to state-of-the-art
models. A robustness study is conducted to investigate the influence of
distraction or inaccuracies in gaze collection. We also develop a
convenient system for collecting gaze data without interrupting the
normal clinical workflow. Our work offers novel insights into the
potential benefits of integrating gaze information into medical image
segmentation tasks.
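One simple way to use gaze as weak supervision is to re-weight the
pixel-wise loss by a gaze heatmap, sketched below; the grouped
hierarchical network and its fault-driven guidance are not reproduced
here.

import torch.nn.functional as F

def gaze_weighted_ce(logits, target, gaze_heatmap, alpha=1.0):
    # logits: (B, K, H, W); target: (B, H, W); gaze_heatmap: (B, H, W)
    # in [0, 1]. Pixels the radiologist fixated on are up-weighted.
    ce = F.cross_entropy(logits, target, reduction="none")   # (B, H, W)
    return ((1.0 + alpha * gaze_heatmap) * ce).mean()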
OI Ma, Chong/0000-0002-5068-8814
ZS 0
ZA 0
TC 0
Z8 0
ZR 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-29
UT MEDLINE:39331544
PM 39331544
ER
PT J
AU Gao, Bin
Yu, Aiju
Qiao, Chen
Calhoun, Vince D
Stephen, Julia M
Wilson, Tony W
Wang, Yu-Ping
TI An Explainable Unified Framework of Spatio-Temporal Coupling Learning
with Application to Dynamic Brain Functional Connectivity Analysis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3467384
DT Journal Article
PD 2024-Sep-25
PY 2024
AB Time-series data such as fMRI and MEG carry a wealth of inherent
spatio-temporal coupling relationship, and their modeling via deep
learning is essential for uncovering biological mechanisms. However,
current machine learning models for mining spatio-temporal information
usually overlook this intrinsic coupling association, in addition to
poor explainability. In this paper, we present an explainable learning
framework for spatio-temporal coupling. Specifically, this framework
constructs a deep learning network based on spatio-temporal correlation,
which can well integrate the time-varying coupled relationships between
node representation and inter-node connectivity. Furthermore, it
explores spatio-temporal evolution at each time step, providing a better
explainability of the analysis results. Finally, we apply the proposed
framework to brain dynamic functional connectivity (dFC) analysis.
Experimental results demonstrate that it can effectively capture the
variations in dFC during brain development and the evolution of
spatio-temporal information at the resting state. Two distinct
developmental functional connectivity (FC) patterns are identified.
Specifically, the connectivity among regions related to emotional
regulation decreases, while the connectivity associated with cognitive
activities increases. In addition, children and young adults display
notable cyclic fluctuations in resting-state brain dFC.
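The dynamic functional connectivity being modeled can be grounded with
the standard sliding-window estimate (a baseline sketch, not the paper's
coupled deep network):

import numpy as np

def dynamic_fc(ts, win=30, step=5):
    # ts: (T, R) time series for R brain regions. Returns one (R, R)
    # correlation matrix per window: the sliding-window dFC estimate.
    mats = [np.corrcoef(ts[s:s + win].T)
            for s in range(0, ts.shape[0] - win + 1, step)]
    return np.stack(mats)                       # (n_windows, R, R)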
RI Wilson, Tony W/; Yu, Aiju/; Stephen, Julia/; Calhoun, Vince/H-7146-2013; Qiao, Chen/; Wang, Yu-Ping/
OI Wilson, Tony W/0000-0002-5053-8306; Yu, Aiju/0009-0008-9348-2454;
Stephen, Julia/0000-0003-2486-747X; Calhoun, Vince/0000-0001-9058-0747;
Qiao, Chen/0000-0002-7103-1942; Wang, Yu-Ping/0000-0001-9340-5864
ZR 0
ZS 0
TC 0
ZA 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-28
UT MEDLINE:39320999
PM 39320999
ER
PT J
AU Huang, Wenhao
Gong, Haifan
Zhang, Huan
Wang, Yu
Wan, Xiang
Li, Guanbin
Li, Haofeng
Shen, Hong
TI BCNet: Bronchus Classification via Structure Guided Representation
Learning.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3448468
DT Journal Article
PD 2024-Aug-23
PY 2024
AB CT-based bronchial tree analysis is a key step for the diagnosis of lung
and airway diseases. However, the topology of bronchial trees varies
across individuals, which presents a challenge to the automatic bronchus
classification. To solve this issue, we propose the Bronchus
Classification Network (BCNet), a structure-guided framework that
exploits the segment-level topological information using point clouds to
learn the voxel-level features. BCNet has two branches, a Point-Voxel
Graph Neural Network (PV-GNN) for segment classification, and a
Convolutional Neural Network (CNN) for voxel labeling. The two branches
are simultaneously trained to learn topology-aware features for their
shared backbone while it is feasible to run only the CNN branch for the
inference. Therefore, BCNet maintains the same inference efficiency as
its CNN baseline. Experimental results show that BCNet significantly
exceeds the state-of-the-art methods by over 8.0% both on F1-score for
classifying bronchus. Furthermore, we contribute BronAtlas: an
open-access benchmark of bronchus imaging analysis with high-quality
voxel-wise annotations of both anatomical and abnormal bronchial
segments. The benchmark is available at link1.
OI Gong, Haifan/0000-0002-2749-6830; Li, Haofeng/0000-0001-9120-9843
ZB 0
Z8 0
TC 0
ZR 0
ZA 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-08-25
UT MEDLINE:39178085
PM 39178085
ER
PT J
AU Caravaca, Javier
Bobba, Kondapa Naidu
Du, Shixian
Peter, Robin
Gullberg, Grant T.
Bidkar, Anil P.
Flavell, Robert R.
Seo, Youngho
TI A Technique to Quantify Very Low Activities in Regions of Interest With
a Collimatorless Detector
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2745
EP 2757
DI 10.1109/TMI.2024.3377142
DT Article
PD AUG 2024
PY 2024
AB We present a new method to measure sub-microcurie activities of
photon-emitting radionuclides in organs and lesions of small animals in
vivo. Our technique, named the collimator-less likelihood fit, combines
a very high sensitivity collimatorless detector with a Monte Carlo-based
likelihood fit in order to estimate the activities in previously
segmented regions of interest along with their uncertainties. This is
done directly from the photon projections in our collimatorless detector
and from the region of interest segmentation provided by an x-ray
computed tomography scan. We have extensively validated our approach
with Ac-225 experimentally in spherical phantoms and mouse phantoms, and
also numerically with simulations of a realistic mouse anatomy. Our
method yields statistically unbiased results with uncertainties smaller
than 20% for activities as low as approximately 111 Bq (3 nCi) and for
exposures under 30 minutes. We demonstrate that our method yields more
robust recovery coefficients when compared to SPECT imaging with a
commercial pre-clinical scanner, especially at very low activities. Thus,
our technique is complementary to traditional SPECT/CT imaging since it
provides a more accurate and precise organ and tumor dosimetry, with a
more limited spatial information. Finally, our technique is especially
significant in extremely low-activity scenarios where SPECT/CT imaging is
simply not viable.
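A toy version of the collimator-less likelihood fit, assuming a
Monte-Carlo-derived system matrix that maps ROI activities to expected
detector counts; uncertainty estimation is omitted.

import numpy as np
from scipy.optimize import minimize

def fit_roi_activities(counts, system_matrix, a0):
    # counts: (n_bins,) detected photons; system_matrix: (n_bins, n_rois)
    # expected counts per unit activity in each segmented ROI (from Monte
    # Carlo); a0: initial activity guess. Maximizes the Poisson likelihood.
    def neg_log_like(a):
        lam = system_matrix @ a + 1e-12
        return float(np.sum(lam - counts * np.log(lam)))
    res = minimize(neg_log_like, a0, bounds=[(0.0, None)] * len(a0))
    return res.x                                # estimated ROI activities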
RI Caravaca, Javier/JTQ-7709-2023; Bidkar, Anil/GLS-1952-2022; Caravaca Rodriguez, Javier/; Bidkar, Anil Parsram/; Flavell, Robert/; Gullberg, Grant/; Du, Shixian/
OI Caravaca Rodriguez, Javier/0000-0002-7971-852X; Bidkar, Anil
Parsram/0000-0003-1249-1081; Flavell, Robert/0000-0002-8694-1199;
Gullberg, Grant/0000-0002-9493-1118; Du, Shixian/0009-0006-5269-2303
ZR 0
ZS 0
TC 0
ZA 0
Z8 0
ZB 0
Z9 0
C1 Univ Calif San Francisco, San Francisco, CA 94107 USA
C1 Univ Calif Berkeley, Berkeley, CA 94720 USA
C1 Lawrence Berkeley Natl Lab, Berkeley, CA 94720 USA
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200003
PM 38478457
ER
PT J
AU Cui, Hengfei
Li, Yan
Wang, Yifan
Xu, Di
Wu, Lian-Ming
Xia, Yong
TI Toward Accurate Cardiac MRI Segmentation With Variational
Autoencoder-Based Unsupervised Domain Adaptation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2924
EP 2936
DI 10.1109/TMI.2024.3382624
DT Article
PD AUG 2024
PY 2024
AB Accurate myocardial segmentation is crucial in the diagnosis and
treatment of myocardial infarction (MI), especially in Late Gadolinium
Enhancement (LGE) cardiac magnetic resonance (CMR) images, where the
infarcted myocardium exhibits a greater brightness. However,
segmentation annotations for LGE images are usually not available.
Although knowledge gained from CMR images of other modalities with ample
annotations, such as balanced-Steady State Free Precession (bSSFP), can
be transferred to the LGE images, the difference in image distribution
between the two modalities (i.e., domain shift) usually results in a
significant degradation in model performance. To alleviate this, an
end-to-end Variational autoencoder based feature Alignment Module
Combining Explicit and Implicit features (VAMCEI) is proposed. We first
re-derive the Kullback-Leibler (KL) divergence between the posterior
distributions of the two domains as a measure of the global distribution
distance. Second, we calculate the prototype contrastive loss between
the two domains, bringing closer the prototypes of the same category
across domains and pushing away the prototypes of different categories
within or across domains. Finally, a domain discriminator is added to
the output space, which indirectly aligns the feature distribution and
forces the extracted features to be more favorable for segmentation. In
addition, by combining CycleGAN and VAMCEI, we propose a more refined
multi-stage unsupervised domain adaptation (UDA) framework for
myocardial structure segmentation. We conduct extensive experiments on
the MSCMRSeg 2019, MyoPS 2020 and MM-WHS 2017 datasets. The experimental
results demonstrate that our framework achieves superior performances
than state-of-the-art methods.
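The global distribution distance used in the first step has a closed form
for diagonal-Gaussian posteriors; the sketch below shows how such a term
is typically implemented (a standard identity, not code from the paper).

import torch

def kl_diag_gaussians(mu_s, logvar_s, mu_t, logvar_t):
    # Closed-form KL( N(mu_s, var_s) || N(mu_t, var_t) ) for diagonal
    # covariances, summed over latent dims and averaged over the batch.
    var_s, var_t = logvar_s.exp(), logvar_t.exp()
    kl = 0.5 * (logvar_t - logvar_s + (var_s + (mu_s - mu_t) ** 2) / var_t - 1.0)
    return kl.sum(dim=1).mean()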
RI Wang, Yifan/JDM-1982-2023; Wang, Yifan/; Wu, Lian-Ming/; Li, Yan/; Cui, Hengfei/
OI Wang, Yifan/0000-0002-4519-7535; Wu, Lian-Ming/0000-0001-7381-5436; Li,
Yan/0009-0000-1331-9182; Cui, Hengfei/0000-0001-8625-2521
Z8 0
ZA 0
ZB 0
ZR 0
TC 1
ZS 0
Z9 1
C1 Northwestern Polytech Univ, Sch Comp Sci, Natl Engn Lab Integrated
Aerosp Ground Ocean Big D, Xian 710072, Peoples R China
C1 Northwestern Polytech Univ, Res & Dev Inst, Shenzhen 518057, Peoples R
China
C1 Northwestern Polytech Univ, Chongqing Innovat Ctr, Chongqing 401135,
Peoples R China
C1 Northwestern Polytech Univ, Sch Comp Sci, Natl Engn Lab Integrated
Aerosp Ground Ocean Big D, Xian 710072, Peoples R China
C1 Media Innovat Lab, Huawei Cloud, Xian 710075, Peoples R China
C1 Shanghai Jiao Tong Univ, Renji Hosp, Sch Med, Dept Radiol, Shanghai
200127, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200007
PM 38546999
ER
PT J
AU Wang, Zihao
Yang, Yingyu
Chen, Yuzhou
Yuan, Tingting
Sermesant, Maxime
Delingette, Herve
Wu, Ona
TI Mutual Information Guided Diffusion for Zero-Shot Cross-Modality Medical
Image Translation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2825
EP 2838
DI 10.1109/TMI.2024.3382043
DT Article
PD AUG 2024
PY 2024
AB Cross-modality data translation has attracted great interest in medical
image computing. Deep generative models show performance improvement in
addressing related challenges. Nevertheless, as a fundamental challenge
in image translation, the problem of zero-shot learning cross-modality
image translation with fidelity remains unanswered. To bridge this gap,
we propose a novel unsupervised zero-shot learning method called Mutual
Information guided Diffusion Model, which learns to translate an unseen
source image to the target modality by leveraging the inherent
statistical consistency of Mutual Information between different
modalities. To overcome the prohibitive high dimensional Mutual
Information calculation, we propose a differentiable local-wise mutual
information layer for conditioning the iterative denoising process. The
Local-wise-Mutual-Information-Layer captures identical cross-modality
features in the statistical domain, offering diffusion guidance without
relying on direct mappings between the source and target domains. This
advantage allows our method to adapt to changing source domains without
the need for retraining, making it highly practical when sufficient
labeled source domain data is not available. We demonstrate the superior
performance of MIDiffusion in zero-shot cross-modality translation tasks
through empirical comparisons with other generative models, including
adversarial-based and diffusion-based models. Finally, we showcase the
real-world application of MIDiffusion in 3D zero-shot learning-based
cross-modality image segmentation tasks.
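A histogram-based estimate of local mutual information between co-located
patches, as a rough illustration; the paper's layer is differentiable,
which this version is not.

import numpy as np

def local_mi(patch_a, patch_b, bins=32):
    # Histogram estimate of mutual information between two co-located
    # patches from different modalities.
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())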
RI Yuan, Tingting/HPC-1925-2023; YANG, Yingyu/; Sermesant, Maxime/AAC-4870-2019; Wu, Ona/
OI YANG, Yingyu/0009-0009-5700-054X; Sermesant, Maxime/0000-0002-6256-8350;
Wu, Ona/0000-0002-5509-9461
ZB 0
ZA 0
ZS 0
Z8 0
ZR 0
TC 0
Z9 0
C1 Massachusetts Gen Hosp, Athinoula A Martinos Ctr Biomed Imaging, Boston,
MA 02129 USA
C1 Harvard Univ, Boston, MA 02129 USA
C1 Univ Cote Azur, Inria Ctr, F-06902 Valbonne, France
C1 Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
C1 Georg August Univ Gottingen, Inst Comp Sci, D-37073 Gottingen, Germany
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200023
PM 38551825
ER
PT J
AU Yang, Kun
Li, Qiang
Xu, Jiahong
Tang, Meng-Xing
Wang, Zhibiao
Tsui, Po-Hsiang
Zhou, Xiaowei
TI Frequency-Domain Robust PCA for Real-Time Monitoring of HIFU Treatment
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 3001
EP 3012
DI 10.1109/TMI.2024.3385408
DT Article
PD AUG 2024
PY 2024
AB High intensity focused ultrasound (HIFU) is a thriving non-invasive
technique for thermal ablation of tumors, but significant challenges
remain in its real-time monitoring with medical imaging. Ultrasound
imaging is one of the main imaging modalities for monitoring HIFU
surgery in organs other than the brain, mainly due to its good temporal
resolution. However, strong acoustic interference from HIFU irradiation
severely obscures the B-mode images and compromises the monitoring. To
address this problem, we proposed a frequency-domain robust principal
component analysis (FRPCA) method to separate the HIFU interference from
the contaminated B-mode images. Ex-vivo and in-vivo experiments were
conducted to validate the proposed method based on a clinical HIFU
therapy system combined with an ultrasound imaging platform. The
performance of the FRPCA method was compared with the conventional notch
filtering method. Results demonstrated that the FRPCA method can
effectively remove HIFU interference from the B-mode images, which
allowed HIFU-induced grayscale changes at the focal region to be
recovered. Compared to notch-filtered images, the FRPCA-processed images
showed an 8.9% improvement in terms of the structural similarity (SSIM)
index to the uncontaminated B-mode images. These findings demonstrate
that the FRPCA method presents an effective signal processing framework
to remove the strong HIFU acoustic interference, obtains better dynamic
visualization in monitoring the HIFU irradiation process, and offers
great potential to improve the efficacy and safety of HIFU treatment and
other focused ultrasound related applications.
RI Tang, Mengxing/N-3417-2014; Zhou, Xiaowei/AAP-6620-2020; Tsui, Po-Hsiang/AHE-7426-2022; Yang, Kun/; Zhou, Xiaowei/; Tang, Mengxing/; Tsui, Po-Hsiang/
OI Yang, Kun/0000-0002-1429-042X; Zhou, Xiaowei/0000-0003-3768-9692; Tang,
Mengxing/0000-0001-7686-425X; Tsui, Po-Hsiang/0000-0002-5604-1800
Z8 0
ZS 0
ZR 0
ZA 0
ZB 0
TC 0
Z9 0
C1 Tianjin Univ, Sch Microelect, Tianjin 300072, Peoples R China
C1 Chongqing Med Univ, Coll Biomed Engn, State Key Lab Ultrasound Engn Med,
Chongqing 400016, Peoples R China
C1 Imperial Coll London, Dept Bioengn, London SW7 2AZ, England
C1 Chang Gung Univ, Coll Med, Dept Med Imaging & Radiol Sci, Taoyuan
333323, Taiwan
C1 Chang Gung Mem Hosp Linkou, Dept Pediat, Div Pediat Gastroenterol,
Taoyuan 333423, Taiwan
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200005
PM 38578852
ER
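The decomposition behind the FRPCA record above can be sketched as principal component pursuit applied to the temporal-frequency representation of a B-mode frame stack, splitting it into a low-rank (tissue) part and a sparse (interference) part. A minimal NumPy sketch under those assumptions follows; the thresholding rule and hyperparameters are illustrative, not the paper's.

import numpy as np

def rpca(M, lam=None, mu=None, iters=100):
    # Principal component pursuit via inexact ALM; M may be complex.
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: singular-value thresholding.
        U, sig, Vh = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vh
        # Sparse update: complex soft-thresholding.
        R = M - L + Y / mu
        S = R / np.maximum(np.abs(R), 1e-12) * np.maximum(np.abs(R) - lam / mu, 0)
        Y = Y + mu * (M - L - S)
    return L, S

frames = np.random.rand(32, 64 * 64)          # 32 frames, flattened pixels
F = np.fft.fft(frames, axis=0)                # temporal frequency domain
L, S = rpca(F)                                # tissue vs. interference
clean = np.real(np.fft.ifft(L, axis=0))       # interference-suppressed frames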
PT J
AU Mahapatra, Dwarikanath
Yepes, Antonio Jimeno
Bozorgtabar, Behzad
Roy, Sudipta
Ge, Zongyuan
Reyes, Mauricio
TI Multi-Label Generalized Zero Shot Chest Xray Classification By Combining
Image-Text Information With Feature Disentanglement.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3429471
DT Journal Article
PD 2024-Jul-17
PY 2024
AB In fully supervised learning-based medical image classification, the
robustness of a trained model is influenced by its exposure to the range
of candidate disease classes. Generalized Zero Shot Learning (GZSL) aims
to correctly predict seen and novel unseen classes. Current GZSL
approaches have focused mostly on the single-label case. However, it is
common for chest X-rays to be labelled with multiple disease classes. We
propose a novel multi-modal multi-label GZSL approach that leverages
feature disentanglement and multi-modal information to synthesize
features of unseen classes. Disease labels are processed through a
pre-trained BioBert model to obtain text embeddings that are used to
create a dictionary encoding similarity among different labels. We then
use disentangled features and graph aggregation to learn a second
dictionary of inter-label similarities. A subsequent clustering step
helps to identify representative vectors for each class. The multi-modal
multi-label dictionaries and the class representative vectors are used
to guide the feature synthesis step, which is the most important
component of our pipeline, for generating realistic multi-label disease
samples of seen and unseen classes. Our method is benchmarked against
multiple competing methods and we outperform all of them based on
experiments conducted on the publicly available NIH and CheXpert chest
X-ray datasets.
OI Ge, Zongyuan/0000-0002-5880-8673; Reyes, Mauricio/0000-0002-2434-9990;
Bozorgtabar, Behzad/0000-0002-5759-4896
Z8 0
ZS 0
ZA 0
ZB 0
TC 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-07-20
UT MEDLINE:39018216
PM 39018216
ER
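The label-similarity dictionary step in the record above can be sketched as a row-normalized cosine-similarity matrix over label text embeddings. The snippet below assumes precomputed embeddings (the paper obtains them from a pre-trained BioBert model); the temperature and label names are illustrative.

import numpy as np

def label_similarity_dictionary(embeddings, temperature=0.1):
    # embeddings: (num_labels, dim) array of label text embeddings.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T                                  # cosine similarities
    logits = sim / temperature
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=1, keepdims=True)

labels = ["atelectasis", "cardiomegaly", "effusion", "pneumonia"]
emb = np.random.randn(len(labels), 768)            # stand-in for BioBert vectors
D = label_similarity_dictionary(emb)
print(D.shape)  # (4, 4): each row is a soft neighborhood over labels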
PT J
AU Yin, Yi
Clark, Alys R.
Collins, Sally L.
TI 3D Single Vessel Fractional Moving Blood Volume (3D-svFMBV): Fully
Automated Tissue Perfusion Estimation Using Ultrasound
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2707
EP 2717
DI 10.1109/TMI.2024.3376668
DT Article
PD JUL 2024
PY 2024
AB Power Doppler ultrasound (PD-US) is the ideal modality to assess tissue
perfusion as it is cheap, patient-friendly and does not require ionizing
radiation. However, meaningful inter-patient comparison only occurs if
differences in tissue-attenuation are corrected for. This can be done by
standardizing the PD-US signal to a blood vessel assumed to have 100%
vascularity. The original method to do this is called fractional moving
blood volume (FMBV). We describe a novel, fully-automated method
combining image processing, numerical modelling, and deep learning to
estimate three-dimensional single vessel fractional moving blood volume
(3D-svFMBV). We map the PD signals to a characteristic intensity profile
within a single large vessel to define the standardization value at the
high shear vessel margins. This removes the need for mathematical
correction for background signal which can introduce error. The
3D-svFMBV was first tested on synthetic images generated using the
characteristics of uterine artery and physiological ultrasound noise
levels, demonstrating prediction of standardization value close to the
theoretical ideal. Clinical utility was explored using 143
first-trimester placental ultrasound volumes. More biologically
plausible perfusion estimates were obtained, showing improved prediction
of pre-eclampsia compared with those generated with the semi-automated
original 3D-FMBV technique. The proposed 3D-svFMBV method overcomes the
limitations of the original technique to provide accurate and robust
placental perfusion estimation. This not only has the potential to
provide an early pregnancy screening tool but may also be used to assess
perfusion of different organs and tumors.
RI Collins, Sally/H-7964-2019; Collins, Sally/; Clark, Alys/O-5885-2018
OI Collins, Sally/0000-0002-0648-7433; Clark, Alys/0000-0001-5908-2862
ZS 0
Z8 0
ZB 0
ZA 0
ZR 0
TC 0
Z9 0
C1 Univ Oxford, Nuffield Dept Womens & Reprod Hlth, Oxford OX1 2JD, England
C1 Univ Oxford, Big Data Inst, Oxford OX1 2JD, England
C1 Univ Auckland, Auckland Bioengn Inst, Auckland 1010, New Zealand
C1 John Radcliffe Hosp, Womens Ctr, Fetal Med Unit, Oxford OX3 9DU, England
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100008
PM 38478454
ER
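The standardization idea in the 3D-svFMBV record above reduces to normalizing power-Doppler (PD) intensities by a reference value taken from a vessel assumed to be 100% vascularized. A toy sketch follows; the percentile rule is a stand-in for the paper's intensity-profile fit at the high-shear vessel margins.

import numpy as np

def fmbv(pd_tissue, pd_vessel, margin_percentile=95):
    # Fractional moving blood volume: PD normalized by a vessel reference.
    ref = np.percentile(pd_vessel, margin_percentile)  # standardization value
    return np.clip(pd_tissue / ref, 0.0, 1.0)

tissue = np.random.rand(128, 128) * 40.0     # PD signal in a tissue ROI
vessel = 60.0 + 5.0 * np.random.rand(500)    # PD samples inside a large vessel
perfusion = fmbv(tissue, vessel)
print(float(perfusion.mean()))               # mean fractional perfusion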
PT J
AU Wang, Pengyu
Zhang, Huaqi
Yuan, Yixuan
TI MCPL: Multi-modal Collaborative Prompt Learning for Medical
Vision-Language Model.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3418408
DT Journal Article
PD 2024-Jun-24
PY 2024
AB Multi-modal prompt learning is a high-performance and cost-effective
learning paradigm, which learns text as well as image prompts to tune
pre-trained vision-language (V-L) models like CLIP for adaptation to multiple
downstream tasks. However, recent methods typically treat text and image
prompts as independent components without considering the dependency
between prompts. Moreover, extending multi-modal prompt learning into
the medical field poses challenges due to a significant gap between
general- and medical-domain data. To this end, we propose a Multi-modal
Collaborative Prompt Learning (MCPL) pipeline to tune a frozen V-L model
for aligning medical text-image representations, thereby achieving
medical downstream tasks. We first construct the anatomy-pathology (AP)
prompt for multi-modal prompting jointly with text and image prompts.
The AP prompt introduces instance-level anatomy and pathology
information, thereby making a V-L model better comprehend medical
reports and images. Next, we propose graph-guided prompt collaboration
module (GPCM), which explicitly establishes multi-way couplings between
the AP, text, and image prompts, enabling collaborative multi-modal
prompt producing and updating for more effective prompting. Finally, we
develop a novel prompt configuration scheme, which attaches the AP
prompt to the query and key, and the text/image prompt to the value in
self-attention layers for improving the interpretability of multi-modal
prompts. Extensive experiments on numerous medical classification and
object detection datasets show that the proposed pipeline achieves
excellent effectiveness and generalization. Compared with
state-of-the-art prompt learning methods, MCPL provides a more reliable
multi-modal prompt paradigm for reducing tuning costs of V-L models on
medical downstream tasks. Our code:
https://github.com/CUHK-AIM-Group/MCPL.
RI Yuan, Yixuan/KSL-8440-2024
OI Yuan, Yixuan/0000-0002-0853-6948
ZB 0
TC 0
ZA 0
ZS 0
Z8 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-06-27
UT MEDLINE:38913527
PM 38913527
ER
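The prompt configuration in the MCPL record above (AP prompt attached to query and key, text/image prompt to value) can be approximated by extending the key sequence with AP tokens and the value sequence with text/image tokens. The sketch below simplifies the scheme to a single unbatched attention layer and omits the query-side attachment; all names are illustrative.

import torch
import torch.nn.functional as F

def prompted_attention(x, ap_prompt, tv_prompt, wq, wk, wv):
    # x: (n, d) tokens; ap_prompt / tv_prompt: (p, d) learnable prompts.
    q = x @ wq                                       # (n, d)
    k = torch.cat([x @ wk, ap_prompt @ wk], dim=0)   # keys get the AP prompt
    v = torch.cat([x @ wv, tv_prompt @ wv], dim=0)   # values get text/image prompt
    attn = F.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                                  # (n, d)

d, n, p = 64, 16, 4
x = torch.randn(n, d)
out = prompted_attention(x, torch.randn(p, d), torch.randn(p, d),
                         torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
print(out.shape)  # torch.Size([16, 64])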
PT J
AU Mei, Xin
Yang, Libin
Gao, Denghong
Cai, Xiaoyan
Han, Junwei
Liu, Tianming
TI PhraseAug: An Augmented Medical Report Generation Model with Phrasebook.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3416190
DT Journal Article
PD 2024-Jun-18
PY 2024
AB Medical report generation is a valuable and challenging task, which
automatically generates accurate and fluent diagnostic reports for
medical images, reducing workload of radiologists and improving
efficiency of disease diagnosis. Fine-grained alignment of medical
images and reports facilitates the exploration of close correlations
between images and texts, which is crucial for cross-modal generation.
However, visual and linguistic biases caused by radiologists' writing
styles make cross-modal image-text alignment difficult. To alleviate
visual-linguistic bias, this paper discretizes medical reports and
introduces an intermediate modality, i.e. phrasebook, consisting of key
noun phrases. As discretized representation of medical reports,
phrasebook contains both disease-related medical terms, and synonymous
phrases representing different writing styles which can identify
synonymous sentences, thereby promoting fine-grained alignment between
images and reports. In this paper, an augmented two-stage medical report
generation model with phrasebook (PhraseAug) is developed, which
combines medical images, clinical histories and writing styles to
generate diagnostic reports. In the first stage, phrasebook is used to
extract semantically relevant important features and predict key phrases
contained in the report. In the second stage, medical reports are
generated according to the predicted key phrases which contain
synonymous phrases, encouraging the model to adapt to different writing
styles and to generate diverse medical reports. Experimental results on
two public datasets, IU-Xray and MIMIC-CXR, demonstrate that our
proposed PhraseAug outperforms state-of-the-art baselines.
OI Mei, Xin/0000-0002-2768-5252
ZS 0
ZB 0
TC 0
Z8 0
ZA 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-06-21
UT MEDLINE:38889023
PM 38889023
ER
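The phrasebook in the record above acts as a discretized intermediate modality that maps canonical disease phrases to synonymous variants, so sentences in different writing styles resolve to the same key phrase. A toy lookup is sketched below; the entries are invented for illustration and are not from the paper.

PHRASEBOOK = {
    "cardiomegaly": {"enlarged cardiac silhouette", "heart size is enlarged"},
    "pleural effusion": {"blunting of the costophrenic angle",
                         "fluid in the pleural space"},
}

def key_phrases(report_sentence: str) -> list:
    # Map free-text findings to canonical phrasebook entries.
    s = report_sentence.lower()
    return [term for term, variants in PHRASEBOOK.items()
            if term in s or any(v in s for v in variants)]

print(key_phrases("The heart size is enlarged without focal consolidation."))
# ['cardiomegaly']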
PT J
AU Wang, Enpeng
Liu, Yueang
Tu, Puxun
Taylor, Zeike A
Chen, Xiaojun
TI Video-based Soft Tissue Deformation Tracking for Laparoscopic Augmented
Reality-based Navigation in Kidney Surgery.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3413537
DT Journal Article
PD 2024-Jun-12
PY 2024
AB Minimally invasive surgery (MIS) remains technically demanding due to
the difficulty of tracking hidden critical structures within the moving
anatomy of the patient. In this study, we propose a soft tissue
deformation tracking augmented reality (AR) navigation pipeline for
laparoscopic surgery of the kidneys. The proposed navigation pipeline
addresses two main sub-problems: the initial registration and
deformation tracking. Our method utilizes preoperative MR or CT data and
binocular laparoscopes without any additional interventional hardware.
The initial registration is resolved through a probabilistic rigid
registration algorithm and elastic compensation based on dense point
cloud reconstruction. For deformation tracking, the sparse feature point
displacement vector field continuously provides temporal boundary
conditions for the biomechanical model. To enhance the accuracy of the
displacement vector field, a novel feature points selection strategy
based on deep learning is proposed. Moreover, an ex-vivo experimental
method for internal structures error assessment is presented. The
ex-vivo experiments indicate an external surface reprojection error of
4.07 ± 2.17mm and a maximum mean absolute error for internal
structures of 2.98mm. In-vivo experiments indicate mean absolute errors
of 3.28 ± 0.40mm and 1.90 ± 0.24mm, respectively. The combined qualitative
and quantitative findings indicated the potential of our AR-assisted
navigation system in improving the clinical application of laparoscopic
kidney surgery.
OI Tu, Puxun/0000-0003-4809-9081
TC 0
ZS 0
ZR 0
ZA 0
ZB 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-06-18
UT MEDLINE:38865220
PM 38865220
ER
PT J
AU Bi, Xia-An
Yang, Zicheng
Huang, Yangjun
Chen, Ke
Xing, Zhaoxu
Xu, Luyun
Wu, Zihao
Liu, Zhengliang
Li, Xiang
Liu, Tianming
TI CE-GAN: Community Evolutionary Generative Adversarial Network for
Alzheimer's Disease Risk Prediction.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3385756
DT Journal Article
PD 2024-Apr-08
PY 2024
AB In the studies of neurodegenerative diseases such as Alzheimer's Disease
(AD), researchers often focus on the associations among multi-omics
pathogeny based on imaging genetics data. However, current studies
overlook the communities in brain networks, leading to inaccurate models
of disease development. This paper explores the developmental patterns
of AD from the perspective of community evolution. We first establish a
mathematical model to describe functional degeneration in the brain as
community evolution driven by entropy information propagation. Next,
we propose an interpretable Community Evolutionary Generative
Adversarial Network (CE-GAN) to predict disease risk. In the generator
of CE-GAN, community evolutionary convolutions are designed to capture
the evolutionary patterns of AD. The experiments are conducted using
functional magnetic resonance imaging (fMRI) data and single nucleotide
polymorphism (SNP) data. CE-GAN achieves 91.67% accuracy and 91.83% area
under curve (AUC) in AD risk prediction tasks, surpassing advanced
methods on the same dataset. In addition, we validated the effectiveness
of CE-GAN for pathogeny extraction. The source code of this work is
available at https://github.com/fmri123456/CE-GAN.
RI Wu, Zihao/; Li, Xiang/J-6924-2019; Liu, Zhengliang/
OI Wu, Zihao/0000-0001-7483-6570; Li, Xiang/0000-0002-9851-6376; Liu,
Zhengliang/0000-0001-7061-6714
TC 1
Z8 0
ZA 0
ZB 0
ZS 0
ZR 0
Z9 1
EI 1558-254X
DA 2024-04-10
UT MEDLINE:38587958
PM 38587958
ER
PT J
AU Kang, Eunsong
Heo, Da-Woon
Lee, Jiwon
Suk, Heung-Il
TI A Learnable Counter-Condition Analysis Framework for Functional
Connectivity-Based Neurological Disorder Diagnosis
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1377
EP 1387
DI 10.1109/TMI.2023.3337074
DT Article
PD APR 2024
PY 2024
AB To understand the biological characteristics of neurological disorders
with functional connectivity (FC), recent studies have widely utilized
deep learning-based models to identify the disease and conducted
post-hoc analyses via explainable models to discover disease-related
biomarkers. Most existing frameworks consist of three stages, namely,
feature selection, feature extraction for classification, and analysis,
where each stage is implemented separately. However, if the results at
each stage lack reliability, it can cause misdiagnosis and incorrect
analysis in subsequent stages. In this study, we propose a novel unified
framework that systemically integrates diagnoses (i.e., feature
selection and feature extraction) and explanations. Notably, we devised
an adaptive attention network as a feature selection approach to
identify individual-specific disease-related connections. We also
propose a functional network relational encoder that summarizes the
global topological properties of FC by learning the inter-network
relations without pre-defined edges between functional networks. Last
but not least, our framework provides a novel explanatory power for
neuroscientific interpretation, also termed counter-condition analysis.
We simulated the FC that reverses the diagnostic information (i.e.,
counter-condition FC): converting a normal brain to be abnormal and vice
versa. We validated the effectiveness of our framework by using two
large resting-state functional magnetic resonance imaging (fMRI)
datasets, Autism Brain Imaging Data Exchange (ABIDE) and REST-meta-MDD,
and demonstrated that our framework outperforms other competing methods
for disease identification. Furthermore, we analyzed the disease-related
neurological patterns based on counter-condition analysis.
OI Lee, Jiwon/0009-0006-6256-1243; Suk, Heung-Il/0000-0001-7019-8962; Heo,
Da-Woon/0000-0001-9281-8325; Kang, Eunsong/0009-0007-3010-5144
TC 2
ZR 0
Z8 0
ZS 0
ZB 0
ZA 0
Z9 2
C1 Korea Univ, Dept Brain & Cognit Engn, Seoul 02841, South Korea
C1 Korea Univ, Dept Artificial Intelligence, Seoul 02841, South Korea
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400009
PM 38019623
ER
PT J
AU Gao, Cong
Feng, Anqi
Liu, Xingtong
Taylor, Russell H.
Armand, Mehran
Unberath, Mathias
TI A Fully Differentiable Framework for 2D/3D Registration and the
Projective Spatial Transformers
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 275
EP 285
DI 10.1109/TMI.2023.3299588
DT Article
PD JAN 2024
PY 2024
AB Image-based 2D/3D registration is a critical technique for fluoroscopic
guided surgical interventions. Conventional intensity-based 2D/3D
registration approaches suffer from a limited capture range due to the
presence of local minima in hand-crafted image similarity functions. In
this work, we aim to extend the 2D/3D registration capture range with a
fully differentiable deep network framework that learns to approximate a
convex-shape similarity function. The network uses a novel Projective
Spatial Transformer (ProST) module that has unique differentiability
with respect to 3D pose parameters, and is trained using an innovative
double backward gradient-driven loss function. We compare the most
popular learning-based pose regression methods in the literature and use
the well-established CMAES intensity-based registration as a benchmark.
We report registration pose error, target registration error (TRE) and
success rate (SR) with a threshold of 10mm for mean TRE. For the pelvis
anatomy, the median TRE of ProST followed by CMAES is 4.4mm with a SR of
65.6% in simulation, and 2.2mm with a SR of 73.2% in real data. The
CMAES SRs without using ProST registration are 28.5% and 36.0% in
simulation and real data, respectively. Our results suggest that the
proposed ProST network learns a practical similarity function, which
vastly extends the capture range of conventional intensity-based 2D/3D
registration. We believe that the unique differentiable property of
ProST has the potential to benefit related 3D medical imaging research
applications. The source code is available at
https://github.com/gaocong13/Projective-Spatial-Transformers.
RI Taylor, Russell/A-3268-2010; Liu, Xingtong/ABE-4913-2021; Armand, Mehran/; Taylor, Russell/; Gao, Cong/
OI Liu, Xingtong/0000-0002-8661-7460; Armand, Mehran/0000-0003-1028-8303;
Taylor, Russell/0000-0001-6272-1100; Gao, Cong/0000-0001-6798-8381
ZA 0
ZB 4
TC 6
Z8 0
ZS 0
ZR 0
Z9 6
C1 Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21211 USA
C1 Intuit Surg Inc, Sunnyvale, CA 94086 USA
C1 Johns Hopkins Univ, Dept Biomed Engn, Baltimore, MD 21211 USA
C1 Johns Hopkins Univ, Dept Orthopaed Surg, Baltimore, MD 21224 USA
C1 Johns Hopkins Univ, Dept Mech Engn, Baltimore, MD 21224 USA
C1 Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21224 USA
C1 Johns Hopkins Univ, Appl Phys Lab, Baltimore, MD 21224 USA
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600012
PM 37549070
ER
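The gradient-driven registration loop in the ProST record above can be illustrated with a toy differentiable projector: render a projection from pose parameters, compare it to the fixed image, and let autograd update the pose. The Fourier-shift projector below is a stand-in for ProST, and the pose is restricted to in-plane translation for brevity; nothing here reproduces the paper's network or loss.

import torch

def toy_projector(volume, pose):
    # Differentiable stand-in: parallel projection of a 3D volume, then a
    # continuous (ty, tx) translation applied as a Fourier phase shift.
    proj = volume.sum(dim=0)                       # (H, W)
    ty, tx = pose[0], pose[1]
    H, W = proj.shape
    fy = torch.fft.fftfreq(H, device=proj.device).view(-1, 1)
    fx = torch.fft.fftfreq(W, device=proj.device).view(1, -1)
    shift = torch.exp(-2j * torch.pi * (fy * ty + fx * tx))
    return torch.fft.ifft2(torch.fft.fft2(proj) * shift).real

ys, xs = torch.meshgrid(torch.arange(64.0), torch.arange(64.0), indexing="ij")
blob = torch.exp(-((ys - 40) ** 2 + (xs - 25) ** 2) / 200.0)
volume = blob.expand(32, 64, 64)                   # toy volume, smooth projection
target = toy_projector(volume, torch.tensor([3.0, -2.0]))

pose = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([pose], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss = ((toy_projector(volume, pose) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(pose.detach())  # approaches (3.0, -2.0)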
PT J
AU Sun, Kaicong
Wang, Qian
Shen, Dinggang
TI Joint Cross-Attention Network With Deep Modality Prior for Fast MRI
Reconstruction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 558
EP 569
DI 10.1109/TMI.2023.3314008
DT Article
PD JAN 2024
PY 2024
AB Current deep learning-based reconstruction models for accelerated
multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled
k-space data of single modality using convolutional neural network
(CNN). Although dual-domain information and data consistency constraint
are commonly adopted in fast MRI reconstruction, the performance of
existing models is still limited mainly by three factors: inaccurate
estimation of coil sensitivity, inadequate utilization of structural
priors, and inductive bias of CNN. To tackle these challenges, we propose
an unrolling-based joint Cross-Attention Network, dubbed jCAN, using
deep guidance of the already acquired intra-subject data. Particularly,
to improve the performance of coil sensitivity estimation, we
simultaneously optimize the latent MR image and sensitivity map (SM).
Besides, we introduce a Gating layer and a Gaussian layer into SM estimation
to alleviate the "defocus" and "over-coupling" effects and further
ameliorate the SM estimation. To enhance the representation ability of
the proposed model, we deploy Vision Transformer (ViT) and CNN in the
image and k-space domains, respectively. Moreover, we exploit
pre-acquired intra-subject scan as reference modality to guide the
reconstruction of subsampled target modality by resorting to the self-
and cross-attention scheme. Experimental results on public knee and
in-house brain datasets demonstrate that the proposed jCAN outperforms
the state-of-the-art methods by a large margin in terms of SSIM and PSNR
for different acceleration factors and sampling masks.
OI sun, kaicong/0000-0002-9999-2542; Wang, Qian/0000-0002-3490-3836
ZR 0
ZA 0
ZB 0
ZS 0
TC 1
Z8 0
Z9 1
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 200231, Peoples R China
C1 Shanghai United Imaging Intelligence Co Ltd, Shanghai 201815, Peoples R
China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 200231, Peoples R China
C3 Shanghai Clin Res & Trial Ctr
C3 Shanghai United Imaging Intelligence Co Ltd
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600026
PM 37695966
ER
PT J
AU Chi, Jianning
Sun, Zhiyi
Meng, Liuyi
Wang, Siqi
Yu, Xiaosheng
Wei, Xiaolin
Yang, Bin
TI Low-dose CT image super-resolution with noise suppression based on prior
degradation estimator and self-guidance mechanism.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3454268
DT Journal Article
PD 2024-Sep-04
PY 2024
AB The anatomies in low-dose computed tomography (LDCT) are usually
distorted during zoomed-in observation due to the low photon count.
Super-resolution (SR) methods have been proposed to enhance the quality
of LDCT images as post-processing approaches without increasing
radiation damage to patients, but they suffer from incorrect
prediction of degradation information and incomplete leverage of
internal connections within the 3D CT volume, resulting in the imbalance
between noise removal and detail sharpening in the super-resolution
results. In this paper, we propose a novel LDCT SR network where the
degradation information self-parsed from the LDCT slice and the 3D
anatomical information captured from the LDCT volume are integrated to
guide the backbone network. The prior degradation estimator (PDE) is
proposed following the contrastive learning strategy to estimate the
degradation features in the LDCT images without paired low-normal dose
CT images. The self-guidance fusion module (SGFM) is designed to capture
anatomical features with internal 3D consistencies between the squashed
images along the coronal, sagittal, and axial views of the CT volume.
Finally, the features representing degradation and anatomical structures
are integrated to recover the CT images with higher resolutions. We
apply the proposed method to the 2016 NIH-AAPM Mayo Clinic LDCT Grand
Challenge dataset and our collected LDCT dataset to evaluate its ability
to recover LDCT images. Experimental results illustrate the superiority
of our network concerning quantitative metrics and qualitative
observations, demonstrating its potential in recovering detail-sharp and
noise-free CT images with higher resolutions from the practical LDCT
images.
Z8 0
ZS 0
ZA 0
ZR 0
ZB 0
TC 0
Z9 0
EI 1558-254X
DA 2024-09-06
UT MEDLINE:39231060
PM 39231060
ER
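The contrastive strategy behind the prior degradation estimator in the record above can be sketched with an InfoNCE objective in which patches sharing a degradation form positives and patches from other scans serve as negatives. The snippet below shows only the loss; the encoder, batch construction, and temperature are assumptions.

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.07):
    # anchor, positive: (d,) embeddings; negatives: (k, d).
    a = F.normalize(anchor, dim=0)
    pos = torch.exp(a @ F.normalize(positive, dim=0) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ a / tau).sum()
    return -torch.log(pos / (pos + neg))

d = 128
loss = info_nce(torch.randn(d), torch.randn(d), torch.randn(16, d))
print(float(loss))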
PT J
AU Yan, Renao
Sun, Qiehe
Jin, Cheng
Liu, Yiqing
He, Yonghong
Guan, Tian
Chen, Hao
TI Shapley Values-enabled Progressive Pseudo Bag Augmentation for
Whole-Slide Image Classification.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3453386
DT Journal Article
PD 2024-Sep-02
PY 2024
AB In computational pathology, whole-slide image (WSI) classification
presents a formidable challenge due to its gigapixel resolution and
limited fine-grained annotations. Multiple-instance learning (MIL)
offers a weakly supervised solution, yet refining instance-level
information from bag-level labels remains challenging. While most of the
conventional MIL methods use attention scores to estimate instance
importance scores (IIS) which contribute to the prediction of the slide
labels, these often lead to skewed attention distributions and
inaccuracies in identifying crucial instances. To address these issues,
we propose a new approach inspired by cooperative game theory: employing
Shapley values to assess each instance's contribution, thereby improving
IIS estimation. The computation of the Shapley value is then accelerated
using attention, while retaining the enhanced instance
identification and prioritization. We further introduce a framework for
the progressive assignment of pseudo bags based on estimated IIS,
encouraging more balanced attention distributions in MIL models. Our
extensive experiments on CAMELYON-16, BRACS, TCGA-LUNG, and TCGA-BRCA
datasets show our method's superiority over existing state-of-the-art
approaches, offering enhanced interpretability and class-wise insights.
We will release the code upon acceptance.
RI Jin, Cheng/AAY-8189-2021; Chen, Hao/V-4299-2019; Yan, Renao/
OI Jin, Cheng/0000-0002-3522-3592; Chen, Hao/0000-0002-8400-3780; Yan,
Renao/0000-0002-2399-199X
TC 0
ZS 0
ZR 0
ZA 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-04
UT MEDLINE:39222451
PM 39222451
ER
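The Shapley-value instance scoring in the record above can be approximated by Monte-Carlo estimation: average each instance's marginal contribution to a bag-level score over random instance orderings. The bag scorer below is a stand-in for a trained MIL head, and the paper's attention-based acceleration is omitted.

import numpy as np

def bag_score(instances):
    # Toy bag-level positive score: mean response of a fixed linear head.
    if len(instances) == 0:
        return 0.0
    w = np.linspace(-1, 1, instances.shape[1])       # stand-in weights
    return float(np.tanh(instances @ w).mean())

def shapley_iis(instances, n_perm=200, rng=None):
    # Monte-Carlo Shapley estimates of instance importance scores (IIS).
    rng = rng or np.random.default_rng(0)
    n = len(instances)
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        prev = 0.0
        for i, idx in enumerate(order):
            cur = bag_score(instances[order[: i + 1]])
            phi[idx] += cur - prev                   # marginal contribution
            prev = cur
    return phi / n_perm

bag = np.random.randn(12, 32)                        # 12 patch embeddings
print(shapley_iis(bag).round(3))                     # per-instance IIS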
PT J
AU Borazjani, Kasra
Khosravan, Naji
Ying, Leslie
Hosseinalipour, Seyyedali
TI Multi-Modal Federated Learning for Cancer Staging over Non-IID Datasets
with Unbalanced Modalities.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3450855
DT Journal Article
PD 2024-Aug-28
PY 2024
AB The use of machine learning (ML) for cancer staging through medical
image analysis has gained substantial interest across medical
disciplines. When accompanied by the innovative federated learning (FL)
framework, ML techniques can further overcome privacy concerns related
to patient data exposure. Given the frequent presence of diverse data
modalities within patient records, leveraging FL in a multi-modal
learning framework holds considerable promise for cancer staging.
However, existing works on multi-modal FL often presume that all
data-collecting institutions have access to all data modalities. This
oversimplified approach neglects institutions that have access to only a
portion of data modalities within the system. In this work, we introduce
a novel FL architecture designed to accommodate not only the
heterogeneity of data samples, but also the inherent
heterogeneity/non-uniformity of data modalities across institutions. We
shed light on the challenges associated with varying convergence speeds
observed across different data modalities within our FL system.
Subsequently, we propose a solution to tackle these challenges by
devising a distributed gradient blending and proximity-aware client
weighting strategy tailored for multi-modal FL. To show the superiority
of our method, we conduct experiments using The Cancer Genome Atlas
program (TCGA) datalake considering different cancer types and three
modalities of data: mRNA sequences, histopathological image data, and
clinical information. Our results further unveil the impact and severity
of class-based vs type-based heterogeneity across institutions on the
model performance, which broadens the perspective on the notion of data
heterogeneity in multi-modal FL literature.
OI Hosseinalipour, Seyyedali/0000-0003-4266-4000; Borazjani,
Kasra/0009-0006-4112-0991
ZB 0
ZS 0
TC 0
ZA 0
Z8 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-08-30
UT MEDLINE:39196746
PM 39196746
ER
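The proximity-aware client weighting mentioned in the record above can be sketched as down-weighting client updates that lie far from the aggregate direction, softening the pull of institutions with missing modalities. The exponential rule below is illustrative, not the paper's formula.

import numpy as np

def proximity_weights(client_updates, tau=1.0):
    # client_updates: (c, d) flattened model deltas -> (c,) weights.
    mean = client_updates.mean(axis=0)
    dist = np.linalg.norm(client_updates - mean, axis=1)
    w = np.exp(-dist / tau)
    return w / w.sum()

def aggregate(client_updates):
    w = proximity_weights(client_updates)
    return (w[:, None] * client_updates).sum(axis=0)

updates = np.random.randn(5, 1000) * np.array([1, 1, 1, 1, 5])[:, None]
print(proximity_weights(updates).round(3))   # the outlier client is down-weighted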
PT J
AU Feng, Yidan
Deng, Sen
Lyu, Jun
Cai, Jing
Wei, Mingqiang
Qin, Jing
TI Bridging MRI Cross-Modality Synthesis and Multi-Contrast
Super-Resolution by Fine-Grained Difference Learning.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3445969
DT Journal Article
PD 2024-Aug-19
PY 2024
AB In multi-modal magnetic resonance imaging (MRI), the tasks of imputing
or reconstructing the target modality share a common obstacle: the
accurate modeling of fine-grained inter-modal differences, which has
been sparingly addressed in current literature. These differences stem
from two sources: 1) spatial misalignment remaining after coarse
registration and 2) structural distinction arising from
modality-specific signal manifestations. This paper integrates the
previously separate research trajectories of cross-modality synthesis
(CMS) and multi-contrast super-resolution (MCSR) to address this
pervasive challenge within a unified framework. Connected through
generalized down-sampling ratios, this unification not only emphasizes
their common goal in reducing structural differences, but also
identifies the key task distinguishing MCSR from CMS: modeling the
structural distinctions using the limited information from the
misaligned target input. Specifically, we propose a composite network
architecture with several key components: a label correction module to
align the coordinates of multi-modal training pairs, a CMS module
serving as the base model, an SR branch to handle target inputs, and a
difference projection discriminator for structural distinction-centered
adversarial training. When training the SR branch as the generator, the
adversarial learning is enhanced with distinction-aware incremental
modulation to ensure better-controlled generation. Moreover, the SR
branch integrates deformable convolutions to address cross-modal spatial
misalignment at the feature level. Experiments conducted on three public
datasets demonstrate that our approach effectively balances structural
accuracy and realism, exhibiting overall superiority in comprehensive
evaluations for both tasks over current state-of-the-art approaches. The
code is available at https://github.com/papshare/FGDL.
ZA 0
ZB 0
ZS 0
ZR 0
Z8 0
TC 0
Z9 0
EI 1558-254X
DA 2024-08-21
UT MEDLINE:39159018
PM 39159018
ER
PT J
AU Cheng, Ziming
Wang, Shidong
Xin, Tong
Zhou, Tao
Zhang, Haofeng
Shao, Ling
TI Few-Shot Medical Image Segmentation via Generating Multiple
Representative Descriptors
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2202
EP 2214
DI 10.1109/TMI.2024.3358295
DT Article
PD JUN 2024
PY 2024
AB Automatic medical image segmentation has witnessed significant
development with the success of large models on massive datasets.
However, acquiring and annotating vast medical image datasets often
proves impractical due to the time required, the need for specialized
expertise, and compliance with patient privacy standards. As a result,
Few-shot Medical Image Segmentation (FSMIS) has become
an increasingly compelling research direction. Conventional FSMIS
methods usually learn prototypes from support images and apply
nearest-neighbor searching to segment the query images. However, a
single prototype cannot adequately represent the distribution of each
class, thus restricting performance. To address this problem, we
propose to Generate Multiple Representative Descriptors (GMRD), which
can comprehensively represent the commonality within the corresponding
class distribution. In addition, we design a Multiple Affinity Maps
based Prediction (MAMP) module to fuse the multiple affinity maps
generated by the aforementioned descriptors. Furthermore, to address
intra-class variation and enhance the representativeness of descriptors,
we introduce two novel losses. Notably, our model is structured as a
dual-path design to achieve a balance between foreground and background
differences in medical images. Extensive experiments on four publicly
available medical image datasets demonstrate that our method outperforms
the state-of-the-art methods, and the detailed analysis also verifies
the effectiveness of our designed module.
RI Tong, Xin/P-2521-2016; zhou, tao/GQA-3572-2022; wang, lina/HGC-1592-2022; Xin, Tong/; Wang, Shidong/
OI Xin, Tong/0000-0001-5479-262X; Wang, Shidong/0000-0003-1023-1286
TC 3
ZB 0
ZS 0
ZA 0
Z8 0
ZR 0
Z9 3
C1 Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples
R China
C1 Newcastle Univ, Sch Engn, Newcastle Upon Tyne NE1 7RU, England
C1 Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE1 7RU, England
C1 Univ Chinese Acad Sci, UCAS Terminus AI Lab, Beijing 100049, Peoples R
China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100003
PM 38265915
ER
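The multiple-descriptor idea in the GMRD record above replaces the single class prototype with several representative vectors. The sketch below uses k-means cluster centers of support features as descriptors and labels each query feature by its best-matching descriptor; the learned generator and losses from the paper are not reproduced.

import numpy as np
from sklearn.cluster import KMeans

def descriptors(support_feats, k=4):
    # support_feats: (n, d) class features -> (k, d) representative descriptors.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(
        support_feats).cluster_centers_

def segment(query_feats, fg_desc, bg_desc):
    # Cosine similarity to the best-matching descriptor of each class.
    def best(desc):
        d = desc / np.linalg.norm(desc, axis=1, keepdims=True)
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        return (q @ d.T).max(axis=1)
    return (best(fg_desc) > best(bg_desc)).astype(int)   # per-feature labels

fg = descriptors(np.random.randn(200, 64) + 2.0)   # foreground support features
bg = descriptors(np.random.randn(200, 64))         # background support features
print(segment(np.random.randn(10, 64), fg, bg))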
PT J
AU Wang, Hongyu
He, Jiang
Cui, Hengfei
Yuan, Bo
Xia, Yong
TI Robust Stochastic Neural Ensemble Learning With Noisy Labels for
Thoracic Disease Classification
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2180
EP 2190
DI 10.1109/TMI.2024.3357986
DT Article
PD JUN 2024
PY 2024
AB Chest radiography is the most common radiology examination for thoracic
disease diagnosis, such as pneumonia. A tremendous number of chest
X-rays prompt data-driven deep learning models in constructing
computer-aided diagnosis systems for thoracic diseases. However, in
realistic radiology practice, a deep learning-based model often suffers
from performance degradation when trained on data with noisy labels
possibly caused by different types of annotation biases. To this end, we
present a novel stochastic neural ensemble learning (SNEL) framework for
robust thoracic disease diagnosis using chest X-rays. The core idea of
our method is to learn from noisy labels by constructing model ensembles
and designing noise-robust loss functions. Specifically, we propose a
fast neural ensemble method that collects parameters simultaneously
across model instances and along optimization trajectories. Moreover, we
propose a loss function that both optimizes a robust measure and
characterizes a diversity measure of ensembles. We evaluated our
proposed SNEL method on three publicly available hospital-scale chest
X-ray datasets. The experimental results indicate that our method
outperforms competing methods and demonstrate the effectiveness and
robustness of our method in learning from noisy labels. Our code is
available at https://github.com/hywang01/SNEL.
RI Wang, Hongyu/KIK-3613-2024; Cui, Hengfei/; Wang, Hongyu/
OI Cui, Hengfei/0000-0001-8625-2521; Wang, Hongyu/0000-0001-7345-2794
Z8 0
TC 0
ZS 0
ZB 0
ZA 0
ZR 0
Z9 0
C1 Northwestern Polytech Univ, Sch Comp Sci & Engn, Natl Engn Lab
Integrated AeroSp Ground Ocean Big, Data Applicat Technol, Xian 710072,
Peoples R China
C1 Huiying Med Technol Co Ltd, Beijing 100192, Peoples R China
C1 Sichuan Prov Hlth Informat Ctr, Sichuan Prov Hlth & Med Big Data Ctr,
Chengdu 610041, Peoples R China
C3 Huiying Med Technol Co Ltd
C3 Sichuan Prov Hlth Informat Ctr
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100020
PM 38265913
ER
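Collecting ensemble members along the optimization trajectory, as in the SNEL record above, can be sketched by snapshotting model weights at intervals during training and averaging the snapshots' predictions at test time. The model, schedule, and data below are stand-ins; the paper's robustness and diversity losses are not reproduced.

import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
snapshots = []
x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
for step in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()
    if step >= 100 and step % 50 == 0:          # sample along the trajectory
        snapshots.append(copy.deepcopy(model).eval())

def ensemble_predict(inputs):
    with torch.no_grad():
        probs = torch.stack([m(inputs).softmax(-1) for m in snapshots])
    return probs.mean(0)                         # averaged ensemble posterior

print(ensemble_predict(torch.randn(4, 10)))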
PT J
AU Zhou, Quan
Yu, Bin
Xiao, Feng
Ding, Mingyue
Wang, Zhiwei
Zhang, Xuming
TI Robust Semi-Supervised 3D Medical Image Segmentation With Diverse
Joint-Task Learning and Decoupled Inter-Student Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2317
EP 2331
DI 10.1109/TMI.2024.3362837
DT Article
PD JUN 2024
PY 2024
AB Semi-supervised segmentation is highly significant in 3D medical image
segmentation. The typical solutions adopt a teacher-student dual-model
architecture, and they constrain the two models' decision consistency on
the same segmentation task. However, the scarcity of medical samples can
lower the diversity of tasks, reducing the effectiveness of the consistency
constraint. The issue can further worsen as the weights of the models
gradually become synchronized. In this work, we have proposed to
construct diverse joint-tasks using masked image modelling for enhancing
the reliability of the consistency constraint, and develop a novel
architecture consisting of a single teacher but multiple students to
enjoy the additional knowledge decoupled from the synchronized weights.
Specifically, the teacher and student models 'see' varied
randomly-masked versions of an input, and are trained to segment the
same targets but reconstruct different missing regions concurrently.
Such joint-task of segmentation and reconstruction can have the two
learners capture related but complementary features to derive
instructive knowledge when constraining their consistency. Moreover, two
extra students join the original one to perform an inter-student
learning. The three students share the same encoding but different
decoding designs, and learn decoupled knowledge by constraining their
mutual consistencies, preventing themselves from suboptimally converging
to the biased predictions of the dictatorial teacher. Experiments on
four medical datasets show that our approach performs better than six
mainstream semi-supervised methods. Particularly, our approach achieves
at least 0.61% and 0.36% higher Dice and Jaccard values, respectively,
than the most competitive approach on our in-house dataset. The code
will be released at https://github.com/zxmboshi/DDL.
RI Zhang, Xuming/AAS-7462-2020; Wang, Zhi-Wei/P-1561-2018; Xiao, Feng/
OI Xiao, Feng/0000-0003-3289-3459
ZB 0
Z8 0
ZR 0
ZS 0
TC 0
ZA 0
Z9 0
C1 Huazhong Univ Sci & Technol, Coll Life Sci & Technol, Dept Biomed Engn,
Wuhan 430074, Peoples R China
C1 Wuhan Univ, Zhongnan Hosp, Dept Radiol, Wuhan 430071, Peoples R China
C1 Huazhong Univ Sci & Technol, Collaborat Innovat Ctr Biomed Engn, Sch
Engn Sci, MoE Key Lab Biomed Photon, Wuhan 430074, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100011
PM 38319753
ER
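The masked joint-task consistency in the record above can be sketched as follows: teacher and student see differently masked views of the same input, are constrained to agree on the segmentation, and the student reconstructs its own masked regions. The networks below are toy stand-ins and the EMA teacher update and multi-student design are omitted.

import torch
import torch.nn.functional as F

def random_mask(x, ratio=0.3, patch=8):
    # Zero out random patches; returns the masked input and the keep-mask.
    n, c, h, w = x.shape
    keep = (torch.rand(n, 1, h // patch, w // patch) > ratio).float()
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return x * mask, mask

def joint_task_loss(student, teacher, x):
    xs, ms = random_mask(x)
    xt, _ = random_mask(x)
    seg_s, rec_s = student(xs)
    with torch.no_grad():
        seg_t, _ = teacher(xt)
    consistency = F.mse_loss(seg_s.softmax(1), seg_t.softmax(1))
    reconstruction = F.mse_loss(rec_s * (1 - ms), x * (1 - ms))  # masked regions
    return consistency + reconstruction

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Conv2d(1, 8, 3, padding=1)
        self.seg = torch.nn.Conv2d(8, 2, 1)    # 2-class segmentation head
        self.rec = torch.nn.Conv2d(8, 1, 1)    # reconstruction head
    def forward(self, x):
        h = torch.relu(self.body(x))
        return self.seg(h), self.rec(h)

loss = joint_task_loss(TinyNet(), TinyNet(), torch.rand(2, 1, 32, 32))
loss.backward()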
PT J
AU Ma, Yulan
Cui, Weigang
Liu, Jingyu
Guo, Yuzhu
Chen, Huiling
Li, Yang
TI A Multi-Graph Cross-Attention-Based Region-Aware Feature Fusion Network
Using Multi-Template for Brain Disorder Diagnosis
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1045
EP 1059
DI 10.1109/TMI.2023.3327283
DT Article
PD MAR 2024
PY 2024
AB Functional connectivity (FC) networks based on resting-state functional
magnetic imaging (rs-fMRI) are reliable and sensitive for brain disorder
diagnosis. However, most existing methods are limited by using a single
template, which may be insufficient to reveal complex brain
connectivities. Furthermore, these methods usually neglect the
complementary information between static and dynamic brain networks, and
the functional divergence among different brain regions, leading to
suboptimal diagnosis performance. To address these limitations, we
propose a novel multi-graph cross-attention based region-aware feature
fusion network (MGCA-RAFFNet) by using multi-template for brain disorder
diagnosis. Specifically, we first employ multi-template to parcellate
the brain space into different regions of interest (ROIs). Then, a
multi-graph cross-attention network (MGCAN), including static and
dynamic graph convolutions, is developed to explore the deep features
contained in multi-template data, which can effectively analyze complex
interaction patterns of brain networks for each template, and further
adopt a dual-view cross-attention (DVCA) to acquire complementary
information. Finally, to efficiently fuse multiple static-dynamic
features, we design a region-aware feature fusion network (RAFFNet),
which is beneficial to improve the feature discrimination by considering
the underlying relations among static-dynamic features in different
brain regions. Our proposed method is evaluated on both public ADNI-2
and ABIDE-I datasets for diagnosing mild cognitive impairment (MCI) and
autism spectrum disorder (ASD). Extensive experiments demonstrate that
the proposed method outperforms the state-of-the-art methods.
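As a rough illustration of the dual-view cross-attention (DVCA) exchange the abstract describes, the sketch below lets static and dynamic per-ROI embeddings attend to each other; the dimensions, head count, and residual fusion are assumptions, not the published MGCAN design.

```python
import torch
import torch.nn as nn

class DualViewCrossAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.s2d = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.d2s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, static_feat, dynamic_feat):
        # Both inputs: (batch, num_rois, dim) per-ROI embeddings for one template.
        s, _ = self.s2d(static_feat, dynamic_feat, dynamic_feat)  # static queries dynamic
        d, _ = self.d2s(dynamic_feat, static_feat, static_feat)   # dynamic queries static
        return static_feat + s, dynamic_feat + d                  # residual fusion

fused_s, fused_d = DualViewCrossAttention(64)(torch.randn(2, 90, 64),
                                              torch.randn(2, 90, 64))
```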
RI Guo, Yuzhu/AAH-6702-2020; Li, Yang/K-8907-2012; Chen, Huiling/N-8510-2019; Liu, Jingyu/; Ma, Yulan/; Cui, Weigang/; Guo, Yuzhu/
OI Chen, Huiling/0000-0002-7714-9693; Liu, Jingyu/0000-0002-1646-637X; Ma,
Yulan/0000-0003-4463-9087; Cui, Weigang/0000-0002-7983-9161; Guo,
Yuzhu/0000-0002-8588-5172
ZA 0
ZS 0
Z8 0
TC 1
ZR 0
ZB 0
Z9 1
C1 Beihang Univ, Dept Automat Sci & Elect Engn, Beijing 100191, Peoples R
China
C1 Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
C1 Beijing Inst Technol, Sch Med Technol, Beijing 100081, Peoples R China
C1 Wenzhou Univ, Coll Comp Sci & Artificial Intelligence, Wenzhou 325035,
Peoples R China
C1 Beihang Univ, Dept Automat Sci & Elect Engn, Beijing 100191, Peoples R
China
C1 Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600006
PM 37874702
ER
PT J
AU Li, Fangda
Hu, Zhiqiang
Chen, Wen
Kak, Avinash
TI A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 701
EP 713
DI 10.1109/TMI.2023.3317239
DT Article
PD FEB 2024
PY 2024
AB Hematoxylin and Eosin (H&E) staining is a widely used sample preparation
procedure for enhancing the saturation of tissue sections and the
contrast between nuclei and cytoplasm in histology images for medical
diagnostics. However, various factors, such as the differences in the
reagents used, result in high variability in the colors of the stains
actually recorded. This variability poses a challenge in achieving
generalization for machine-learning based computer-aided diagnostic
tools. To desensitize the learned models to stain variations, we propose
the Generative Stain Augmentation Network (G-SAN) - a GAN-based
framework that augments a collection of cell images with simulated yet
realistic stain variations. At its core, G-SAN uses a novel and highly
computationally efficient Laplacian Pyramid (LP) based generator
architecture, that is capable of disentangling stain from cell
morphology. Through the tasks of patch classification and nucleus
segmentation, we show that using G-SAN-augmented training data provides
on average 15.7% improvement in F1 score and 7.3% improvement in
panoptic quality, respectively. Our code is available at
https://github.com/lifangda01/GSAN-Demo.
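The Laplacian pyramid decomposition at the core of the generator can be illustrated with a few lines of OpenCV; this is only the standard build/collapse pair, with the stain-disentangling generator itself omitted.

```python
import cv2
import numpy as np

def build_lp(img, levels=3):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # band-pass detail at this scale
        cur = down
    pyr.append(cur)            # low-frequency residual
    return pyr

def collapse_lp(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(band.shape[1], band.shape[0])) + band
    return cur

img = np.random.rand(256, 256, 3).astype(np.float32)
assert np.allclose(collapse_lp(build_lp(img)), img, atol=1e-4)  # lossless round trip
```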
RI Li, Fangda/JAX-9494-2023; Li, Fangda/
OI Li, Fangda/0000-0002-0967-6432
ZA 0
ZR 0
Z8 0
TC 2
ZB 1
ZS 0
Z9 2
C1 Purdue Univ, Dept Elect & Comp Engn, W Lafayette, IN 47906 USA
C1 SenseTime Res, Beijing 100080, Peoples R China
C3 SenseTime Res
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400014
PM 37725720
ER
PT J
AU Chen, Haobo
Cai, Yehua
Wang, Changyan
Chen, Lin
Zhang, Bo
Han, Hong
Guo, Yuqing
Ding, Hong
Zhang, Qi
TI Multi-Organ Foundation Model for Universal Ultrasound Image Segmentation
with Task Prompt and Anatomical Prior.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3472672
DT Journal Article
PD 2024-Oct-03
PY 2024
AB Semantic segmentation of ultrasound (US) images with deep learning has
played a crucial role in computer-aided disease screening, diagnosis and
prognosis. However, due to the scarcity of US images and their small
field of view, the resulting segmentation models are tailored to a
specific single organ and may lack robustness, overlooking correlations among anatomical
structures of multiple organs. To address these challenges, we propose
the Multi-Organ FOundation (MOFO) model for universal US image
segmentation. The MOFO is optimized jointly from multiple organs across
various anatomical regions to overcome the data scarcity and explore
correlations between multiple organs. The MOFO extracts organ-invariant
representations from US images. Simultaneously, the task prompt is
employed to refine organ-specific representations for segmentation
predictions. Moreover, the anatomical prior is incorporated to enhance
the consistency of the anatomical structures. A multi-organ US database,
comprising 7039 images from 10 organs across various regions of the
human body, has been established to evaluate our model. Results
demonstrate that the MOFO outperforms single-organ methods in terms of
the Dice coefficient, 95% Hausdorff distance and average symmetric
surface distance with statistically significant margins. Our experiments
in multi-organ universal segmentation for US images serve as a
pioneering exploration of improving segmentation performance by
leveraging semantic and anatomical relationships within US images of
multiple organs.
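One plausible reading of the task-prompt mechanism is a learned per-organ embedding that modulates shared decoder features, sketched below; the embedding form and FiLM-style modulation are assumptions rather than the published MOFO architecture.

```python
import torch
import torch.nn as nn

class PromptedDecoderBlock(nn.Module):
    def __init__(self, ch=64, n_organs=10):
        super().__init__()
        self.prompt = nn.Embedding(n_organs, 2 * ch)   # per-organ scale and shift
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, feat, organ_id):
        # feat: (B, ch, H, W) shared features; organ_id: (B,) task prompt index.
        scale, shift = self.prompt(organ_id).chunk(2, dim=1)
        feat = feat * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.relu(self.conv(feat))

out = PromptedDecoderBlock()(torch.randn(2, 64, 32, 32), torch.tensor([3, 7]))
```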
OI Ding, Hong/0000-0002-9998-0904
ZR 0
TC 0
ZA 0
ZB 0
Z8 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-10-05
UT MEDLINE:39361457
PM 39361457
ER
PT J
AU Liu, Yuyuan
Tian, Yu
Wang, Chong
Chen, Yuanhong
Liu, Fengbei
Belagiannis, Vasileios
Carneiro, Gustavo
TI Translation Consistent Semi-supervised Segmentation for 3D Medical
Images.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3468896
DT Journal Article
PD 2024-Sep-26
PY 2024
AB 3D medical image segmentation methods have been successful, but their
dependence on large amounts of voxel-level annotated data is a
disadvantage that needs to be addressed given the high cost to obtain
such annotation. Semi-supervised learning (SSL) solves this issue by
training models with a large unlabelled and a small labelled dataset.
The most successful SSL approaches are based on consistency learning
that minimises the distance between model responses obtained from
perturbed views of the unlabelled data. These perturbations usually keep
the spatial input context between views fairly consistent, which may
cause the model to learn segmentation patterns from the spatial input
contexts instead of the foreground objects. In this paper, we introduce
the Translation Consistent Co-training (TraCoCo), which is a consistency
learning SSL method that perturbs the input data views by varying their
spatial input context, allowing the model to learn segmentation patterns
from foreground objects. Furthermore, we propose a new Confident
Regional Cross entropy (CRC) loss, which improves training convergence
and keeps the robustness to co-training pseudo-labelling mistakes. Our
method yields state-of-the-art (SOTA) results for several 3D data
benchmarks, such as the Left Atrium (LA), Pancreas-CT (Pancreas), and
Brain Tumor Segmentation (BraTS19). Our method also attains best results
on a 2D-slice benchmark, namely the Automated Cardiac Diagnosis
Challenge (ACDC), further demonstrating its effectiveness. Our code,
training logs and checkpoints are available at
https://github.com/yyliu01/TraCoCo.
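A hedged sketch of the two ingredients named above: consistency between translated (shifted-crop) views evaluated on their overlap, and a confidence-gated cross entropy; the crop geometry and threshold are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def shifted_crops(vol, size=64, shift=16):
    # vol: (B, C, D, H, W); two crops whose spatial contexts differ by `shift`.
    a = vol[..., :size, :size, :size]
    b = vol[..., shift:shift + size, shift:shift + size, shift:shift + size]
    return a, b

def overlap_consistency(pa, pb, size=64, shift=16):
    # Compare predictions only on the voxels both crops contain.
    oa = pa[..., shift:, shift:, shift:]
    ob = pb[..., :size - shift, :size - shift, :size - shift]
    return F.mse_loss(oa.softmax(1), ob.softmax(1))

def confident_regional_ce(logits, pseudo_label, tau=0.9):
    # Cross entropy restricted to voxels where the pseudo-labeler is confident.
    conf, _ = logits.softmax(1).max(1)
    loss = F.cross_entropy(logits, pseudo_label, reduction="none")
    mask = (conf > tau).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```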
OI Belagiannis, Vasileios/0000-0003-0960-8453
Z8 0
ZS 0
TC 0
ZA 0
ZR 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-28
UT MEDLINE:39325612
PM 39325612
ER
PT J
AU Huang, Wendong
Hu, Jinwu
Xiao, Junhao
Wei, Yang
Bi, Xiuli
Xiao, Bin
TI Prototype-Guided Graph Reasoning Network for Few-Shot Medical Image
Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3459943
DT Journal Article
PD 2024-Sep-13
PY 2024
AB Few-shot semantic segmentation (FSS) is of tremendous potential for
data-scarce scenarios, particularly in medical segmentation tasks with
merely a few labeled data. Most of the existing FSS methods typically
distinguish query objects with the guidance of support prototypes.
However, the variances in appearance and scale between support and query
objects from the same anatomical class are often considerable in
practical clinical scenarios, thus resulting in
undesirable query segmentation masks. To tackle the aforementioned
challenge, we propose a novel prototype-guided graph reasoning network
(PGRNet) to explicitly explore potential contextual relationships in
structured query images. Specifically, a prototype-guided graph
reasoning module is proposed to perform information interaction on the
query graph under the guidance of support prototypes to fully exploit
the structural properties of query images to overcome intra-class
variances. Moreover, instead of fixed support prototypes, a dynamic
prototype generation mechanism is devised to yield a collection of
dynamic support prototypes by mining rich contextual information from
support images to further boost the efficiency of information
interaction between support and query branches. Equipped with the
proposed two components, PGRNet can learn abundant contextual
representations for query images and is therefore more resilient to
object variations. We validate our method on three publicly available
medical segmentation datasets, namely CHAOS-T2, MS-CMRSeg, and Synapse.
Experiments indicate that the proposed PGRNet outperforms previous FSS
methods by a considerable margin and establishes a new state-of-the-art
performance.
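For orientation, the sketch below shows the generic prototype-based few-shot segmentation steps that PGRNet builds on (masked average pooling for a support prototype, cosine scoring of query features); the graph-reasoning and dynamic-prototype modules themselves are not reproduced here.

```python
import torch
import torch.nn.functional as F

def masked_avg_pool(feat, mask):
    # feat: (B, C, H, W) support features; mask: (B, 1, H, W) binary support label.
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    proto = (feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return proto  # (B, C) support prototype

def prototype_score(query_feat, proto, scale=20.0):
    # Cosine similarity between each query location and the prototype.
    q = F.normalize(query_feat, dim=1)
    p = F.normalize(proto, dim=1)[..., None, None]
    return scale * (q * p).sum(1, keepdim=True)  # (B, 1, H, W) foreground logits
```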
OI Huang, Wendong/0000-0003-0557-8912; Xiao, Junhao/0000-0003-1129-8036;
Hu, Jinwu/0000-0002-6025-8578; Bi, Xiuli/0000-0003-3134-217X
TC 0
ZS 0
ZB 0
ZA 0
ZR 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-09-15
UT MEDLINE:39269802
PM 39269802
ER
PT J
AU Luo, Yilin
Huang, Hsuan-Kai
Sastry, Karteekeya
Hu, Peng
Tong, Xin
Kuo, Joseph
Aborahama, Yousuf
Na, Shuai
Villa, Umberto
Anastasio, Mark A
Wang, Lihong V
TI Full-wave Image Reconstruction in Transcranial Photoacoustic Computed
Tomography using a Finite Element Method.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3456595
DT Journal Article
PD 2024-Sep-09
PY 2024
AB Transcranial photoacoustic computed tomography presents challenges in
human brain imaging due to skull-induced acoustic aberration. Existing
full-wave image reconstruction methods rely on a unified elastic wave
equation for skull shear and longitudinal wave propagation, therefore
demanding substantial computational resources. We propose an efficient
discrete imaging model based on finite element discretization. The
elastic wave equation for solids is solely applied to the hard-tissue
skull region, while the soft-tissue or coupling-medium region that
dominates the simulation domain is modeled with the simpler acoustic
wave equation for liquids. The solid-liquid interfaces are explicitly
modeled with elastic-acoustic coupling. Furthermore, finite element
discretization allows coarser, irregular meshes to conform to object
geometry. These factors significantly reduce the linear system size by
20 times to facilitate accurate whole-brain simulations with improved
speed. We derive a matched forward-adjoint operator pair based on the
model to enable integration with various optimization algorithms. We
validate the reconstruction framework through numerical simulations and
phantom experiments.
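A matched forward-adjoint pair, as derived in this work, is conventionally verified with the dot-product test: the inner product of A x with y must equal that of x with the adjoint applied to y, for random x and y. The sketch below runs that check with a dense matrix standing in for the discretized wave operator.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))       # placeholder forward operator
forward = lambda x: A @ x
adjoint = lambda y: A.T @ y

x = rng.standard_normal(40)
y = rng.standard_normal(50)
lhs = np.dot(forward(x), y)             # <A x, y>
rhs = np.dot(x, adjoint(y))             # <x, A' y>
assert abs(lhs - rhs) < 1e-9 * max(abs(lhs), 1.0)
```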
OI Aborahama, Yousuf/0000-0002-9527-8923; , D.V.S.S.N.Karteekeya
Sastry/0000-0001-7659-4878; Villa, Umberto/0000-0002-5142-2559; Hu,
Peng/0000-0002-2933-1239
ZB 0
TC 0
ZA 0
ZR 0
Z8 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-09-11
UT MEDLINE:39250376
PM 39250376
ER
PT J
AU Tang, Kunming
Jiang, Zhiguo
Wu, Kun
Shi, Jun
Xie, Fengying
Wang, Wei
Wu, Haibo
Zheng, Yushan
TI Self-Supervised Representation Distribution Learning for Reliable Data
Augmentation in Histopathology WSI Classification.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3447672
DT Journal Article
PD 2024-Aug-22
PY 2024
AB Multiple instance learning (MIL) based whole slide image (WSI)
classification is often carried out on the representations of patches
extracted from WSI with a pre-trained patch encoder. The performance of
classification relies on both patch-level representation learning and
MIL classifier training. Most MIL methods utilize a frozen model
pre-trained on ImageNet or a model trained with self-supervised learning
on histopathology image dataset to extract patch image representations
and then fix these representations in the training of the MIL
classifiers for efficiency consideration. However, the invariance of
representations cannot meet the diversity requirement for training a
robust MIL classifier, which has significantly limited the performance
of the WSI classification. In this paper, we propose a Self-Supervised
Representation Distribution Learning framework (SSRDL) for patch-level
representation learning with an online representation sampling strategy
(ORS) for both patch feature extraction and WSI-level data augmentation.
The proposed method was evaluated on three datasets under three MIL
frameworks. The experimental results have demonstrated that the proposed
method achieves the best performance in histopathology image
representation learning and data augmentation and outperforms
state-of-the-art methods under different WSI classification frameworks.
The code is available at https://github.com/lazytkm/SSRDL.
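If the patch encoder learns a per-patch distribution rather than a point estimate, WSI-level augmentation can draw a fresh bag of representations at every pass; the reparameterized Gaussian below is an assumed form, not necessarily the sampling strategy implemented in SSRDL.

```python
import torch

def sample_representation(mu, logvar):
    # mu, logvar: (num_patches, dim) distribution parameters per patch.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

mu = torch.randn(500, 384)
logvar = torch.randn(500, 384).clamp(-4, 0)
bag = sample_representation(mu, logvar)   # one augmented WSI "bag" per draw
```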
OI Zheng, Yushan/0000-0003-3302-0481; wu, kun/0009-0008-3029-398X
ZA 0
ZS 0
Z8 0
TC 0
ZB 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-08-24
UT MEDLINE:39172602
PM 39172602
ER
PT J
AU Amaan Valiuddin, M M
Viviers, Christiaan G A
Van Sloun, Ruud J G
De With, Peter H N
Sommen, Fons van der
TI Investigating and Improving Latent Density Segmentation Models for
Aleatoric Uncertainty Quantification in Medical Imaging.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3445999
DT Journal Article
PD 2024-Aug-19
PY 2024
AB Data uncertainties, such as sensor noise, occlusions or limitations in
the acquisition method can introduce irreducible ambiguities in images,
which result in varying, yet plausible, semantic hypotheses. In Machine
Learning, this ambiguity is commonly referred to as aleatoric
uncertainty. In image segmentation, latent density models can be
utilized to address this problem. The most popular approach is the
Probabilistic U-Net (PU-Net), which uses latent Normal densities to
optimize the conditional data log-likelihood Evidence Lower Bound. In
this work, we demonstrate that the PU-Net latent space is severely
sparse and heavily under-utilized. To address this, we introduce mutual
information maximization and entropy-regularized Sinkhorn Divergence in
the latent space to promote homogeneity across all latent dimensions,
effectively improving gradient-descent updates and latent space
informativeness. Our results show that by applying this on public
datasets of various clinical segmentation problems, our proposed
methodology achieves up to 11% performance gains compared with
preceding latent variable models for probabilistic segmentation on the
Hungarian-Matched Intersection over Union. The results indicate that
encouraging a homogeneous latent space significantly improves latent
density modeling for medical image segmentation.
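An entropy-regularized Sinkhorn cost between two batches of latent samples, the kind of regularizer the abstract adds to the latent space, can be computed as below; epsilon, the iteration count, and the cost normalization are assumptions made for numerical stability.

```python
import torch

def sinkhorn_cost(x, y, eps=0.1, iters=50):
    # x: (n, d), y: (m, d) latent samples with uniform marginal weights.
    cost = torch.cdist(x, y) ** 2
    cost = cost / cost.max()                       # scale to [0, 1] for stability
    k = torch.exp(-cost / eps)
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=x.device)
    nu = torch.full((m,), 1.0 / m, device=x.device)
    a, b = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(iters):                         # Sinkhorn fixed-point updates
        a = mu / (k @ b)
        b = nu / (k.T @ a)
    plan = a[:, None] * k * b[None, :]             # entropic transport plan
    return (plan * cost).sum()

loss = sinkhorn_cost(torch.randn(32, 6), torch.randn(32, 6))
```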
OI Viviers, Christiaan/0000-0001-6455-0288; Valiuddin,
Amaan/0009-0005-2856-5841; van der Sommen, Fons/0000-0002-3593-2356
Z8 0
ZR 0
ZA 0
TC 0
ZB 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-08-21
UT MEDLINE:39159017
PM 39159017
ER
PT J
AU Xu, Jing
Huang, Kai
Zhong, Lianzhen
Gao, Yuan
Sun, Kai
Liu, Wei
Zhou, Yanjie
Guo, Wenchao
Guo, Yuan
Zou, Yuanqiang
Duan, Yuping
Lu, Le
Wang, Yu
Chen, Xiang
Zhao, Shuang
TI RemixFormer++: A Multi-modal Transformer Model for Precision Skin Tumor
Differential Diagnosis with Memory-efficient Attention.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3441012
DT Journal Article
PD 2024-Aug-09
PY 2024
AB Diagnosing malignant skin tumors accurately at an early stage can be
challenging due to ambiguous and even confusing visual characteristics
displayed by various categories of skin tumors. To improve diagnosis
precision, all available clinical data from multiple sources,
particularly clinical images, dermoscopy images, and medical history,
could be considered. Aligning with clinical practice, we propose a novel
Transformer model, named RemixFormer++, which consists of a clinical
image branch, a dermoscopy image branch, and a metadata branch. Given
the unique characteristics inherent in clinical and dermoscopy images,
specialized attention strategies are adopted for each type. Clinical
images are processed through a top-down architecture, capturing both
localized lesion details and global contextual information. Conversely,
dermoscopy images undergo a bottom-up processing with two-level
hierarchical encoders, designed to pinpoint fine-grained structural and
textural features. A dedicated metadata branch seamlessly integrates
non-visual information by encoding relevant patient data. Fusing
features from three branches substantially boosts disease classification
accuracy. RemixFormer++ demonstrates exceptional performance on four
single-modality datasets (PAD-UFES-20, ISIC 2017/2018/2019). Compared
with the previous best method using a public multi-modal Derm7pt
dataset, we achieved an absolute 5.3% increase in averaged F1 and 1.2%
in accuracy for the classification of five skin tumors. Furthermore,
using a large-scale in-house dataset of 10,351 patients with the twelve
most common skin tumors, our method obtained an overall classification
accuracy of 92.6%. These promising results, on par with or better than
the performance of 191 dermatologists in a comprehensive reader study,
indicate the potential clinical usability of our method.
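A toy three-branch late fusion (clinical image, dermoscopy image, metadata) conveys the overall layout; the per-modality encoders, dimensions, and simple concatenation head are placeholders, not the paper's attention-based design.

```python
import torch
import torch.nn as nn

class ThreeBranchFusion(nn.Module):
    def __init__(self, img_dim=256, meta_in=16, n_classes=12):
        super().__init__()
        self.meta_encoder = nn.Sequential(nn.Linear(meta_in, 64), nn.ReLU(),
                                          nn.Linear(64, 64))
        self.head = nn.Linear(img_dim * 2 + 64, n_classes)

    def forward(self, clinical_feat, dermoscopy_feat, metadata):
        # clinical_feat / dermoscopy_feat: (B, img_dim) from modality encoders.
        fused = torch.cat([clinical_feat, dermoscopy_feat,
                           self.meta_encoder(metadata)], dim=1)
        return self.head(fused)

logits = ThreeBranchFusion()(torch.randn(4, 256), torch.randn(4, 256),
                             torch.randn(4, 16))
```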
ZR 0
ZA 0
Z8 0
ZB 0
TC 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-08-11
UT MEDLINE:39120989
PM 39120989
ER
PT J
AU Quan, Quan
Yao, Qingsong
Zhu, Heqin
Kevin Zhou, S
TI IGU-Aug: Information-guided unsupervised augmentation and pixel-wise
contrastive learning for medical image analysis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3436713
DT Journal Article
PD 2024-Aug-01
PY 2024
AB Contrastive learning (CL) is a form of self-supervised learning and has
been widely used for various tasks. Different from widely studied
instance-level contrastive learning, pixel-wise contrastive learning
mainly helps with pixel-wise dense prediction tasks. In pixel-wise CL,
the counterpart to an instance in instance-level CL is a pixel together
with its neighboring context. Aiming to build better feature
representation, there is a vast literature about designing instance
augmentation strategies for instance-level CL; but there is little
similar work on pixel augmentation for pixel-wise CL with a pixel
granularity. In this paper, we attempt to bridge this gap. We first
classify a pixel into three categories, namely low-, medium-, and
high-informative, based on the information quantity the pixel contains.
We then adaptively design separate augmentation strategies for each
category in terms of augmentation intensity and sampling ratio.
Extensive experiments validate that our information-guided pixel
augmentation strategy succeeds in encoding more discriminative
representations and surpassing other competitive approaches in
unsupervised local feature matching. Furthermore, our pretrained model
improves the performance of both one-shot and fully supervised models.
To the best of our knowledge, we are the first to propose a pixel
augmentation method with a pixel granularity for enhancing unsupervised
pixel-wise contrastive learning. Code is available at
https://github.com/Curli-quan/IGU-Aug.
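One simple way to realize the "information quantity" step is local-histogram Shannon entropy per pixel, binned into low-, medium-, and high-informative groups; the window size, bin count, and tercile thresholds below are assumptions, not the paper's exact criterion.

```python
import numpy as np

def local_entropy(img, win=9, bins=32):
    # img: 2D array with intensities in [0, 1].
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()   # Shannon entropy of the window
    return out

img = np.random.rand(64, 64)
ent = local_entropy(img)
low, high = np.quantile(ent, [0.33, 0.66])
category = np.digitize(ent, [low, high])  # 0=low-, 1=medium-, 2=high-informative
```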
OI Zhou, S. Kevin/0000-0002-6881-4444
Z8 0
ZR 0
TC 0
ZA 0
ZS 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-08-03
UT MEDLINE:39088491
PM 39088491
ER
PT J
AU Daneshmand, Parisa Ghaderi
Rabbani, Hossein
TI Tensor Ring Decomposition Guided Dictionary Learning for OCT Image
Denoising
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2547
EP 2562
DI 10.1109/TMI.2024.3369176
DT Article
PD JUL 2024
PY 2024
AB Optical coherence tomography (OCT) is a non-invasive and effective tool
for the imaging of retinal tissue. However, the heavy speckle noise,
resulting from multiple scattering of the light waves, obscures
important morphological structures and impairs the clinical diagnosis of
ocular diseases. In this paper, we propose a novel and powerful model
known as tensor ring decomposition-guided dictionary learning (TRGDL)
for OCT image denoising, which can simultaneously utilize two useful
complementary priors, i.e., three-dimensional low-rank and sparsity
priors, under a unified framework. Specifically, to effectively use the
strong correlation between nearby OCT frames, we construct the OCT group
tensors by extracting cubic patches from OCT images and clustering
similar patches. Then, since each created OCT group tensor has a
low-rank structure, to exploit spatial, non-local, and its temporal
correlations in a balanced way, we enforce the TR decomposition model on
each OCT group tensor. Next, to use the beneficial three-dimensional
inter-group sparsity, we learn shared dictionaries in both spatial and
temporal dimensions from all of the stacked OCT group tensors.
Furthermore, we develop an effective algorithm to solve the resulting
optimization problem by using two efficient optimization approaches,
including proximal alternating minimization and the alternating
direction method of multipliers. Finally, extensive experiments on OCT
datasets from various imaging devices are conducted to prove the
generality and usefulness of the proposed TRGDL model. Experimental
simulation results show that the suggested TRGDL model outperforms
state-of-the-art approaches for OCT image denoising both qualitatively
and quantitatively.
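The group-tensor construction step (extract cubic patches across nearby frames, cluster similar ones) can be sketched with brute-force nearest-neighbor matching; the patch size, stride, and group size below are illustrative toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((5, 128, 128))   # stand-in for nearby OCT B-scans
ps, stride, k = 8, 8, 15

# Flatten every patch from every frame into one candidate pool.
patches = np.stack([frames[f, i:i + ps, j:j + ps].ravel()
                    for f in range(frames.shape[0])
                    for i in range(0, 128 - ps + 1, stride)
                    for j in range(0, 128 - ps + 1, stride)])
ref = patches[0]
dist = ((patches - ref) ** 2).sum(1)          # similarity to the reference patch
group = patches[np.argsort(dist)[:k]].reshape(k, ps, ps)  # one OCT group tensor
```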
RI daneshmand, Parisa/AAV-4217-2021; Ghaderi Daneshmand, Parisa/
OI Ghaderi Daneshmand, Parisa/0000-0002-0772-7427
ZR 0
ZB 0
ZS 0
ZA 0
Z8 0
TC 0
Z9 0
C1 Isfahan Univ Med Sci, Med Image & Signal Proc Res Ctr, Sch Adv Technol
Med, Esfahan 8174673461, Iran
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100025
PM 38393847
ER
PT J
AU Liu, Mengjun
Zhang, Huifeng
Liu, Mianxin
Chen, Dongdong
Zhuang, Zixu
Wang, Xin
Zhang, Lichi
Peng, Daihui
Wang, Qian
TI Randomizing Human Brain Function Representation for Brain Disease
Diagnosis
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2537
EP 2546
DI 10.1109/TMI.2024.3368064
DT Article
PD JUL 2024
PY 2024
AB Resting-state fMRI (rs-fMRI) is an effective tool for quantifying
functional connectivity (FC), which plays a crucial role in exploring
various brain diseases. Due to the high dimensionality of fMRI data, FC
is typically computed based on the region of interest (ROI), whose
parcellation relies on a pre-defined atlas. However, utilizing the brain
atlas poses several challenges including 1) subjective selection bias in
choosing from various brain atlases, 2) parcellation of each subject's
brain with the same atlas yet disregarding individual specificity; 3)
lack of interaction between brain region parcellation and downstream
ROI-based FC analysis. To address these limitations, we propose a novel
randomizing strategy for generating brain function representation to
facilitate neural disease diagnosis. Specifically, we randomly sample
brain patches, thus avoiding ROI parcellations of the brain atlas. Then,
we introduce a new brain function representation framework for the
sampled patches. Each patch is given a function description by reference
to anchor patches, as well as a position description. Furthermore, we
design an adaptive-selection-assisted Transformer network to optimize
and integrate the function representations of all sampled patches within
each brain for neural disease diagnosis. To validate our framework, we
conduct extensive evaluations on three datasets, and the experimental
results establish the effectiveness and generality of our proposed
method, offering a promising avenue for advancing neural disease
diagnosis beyond the confines of traditional atlas-based methods. Our
code is available at https://github.com/mjliu2020/RandomFR.
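The atlas-free idea of describing a randomly sampled patch by its relation to anchor patches can be illustrated with time-series correlations; the sizes and counts below are arbitrary toy values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
fmri = rng.standard_normal((200, 48, 56, 48))      # (time, x, y, z) toy volume

def sample_patch_ts(data, n_patches, size=6):
    # Mean time series of randomly placed cubic patches; no atlas involved.
    t = data.shape[0]
    series = []
    for _ in range(n_patches):
        x, y, z = (rng.integers(0, s - size) for s in data.shape[1:])
        series.append(data[:, x:x+size, y:y+size, z:z+size].reshape(t, -1).mean(1))
    return np.stack(series)                        # (n_patches, time)

patches = sample_patch_ts(fmri, n_patches=100)
anchors = sample_patch_ts(fmri, n_patches=20)
# Function description: correlation of each patch with every anchor patch.
func_repr = np.corrcoef(np.vstack([patches, anchors]))[:100, 100:]   # (100, 20)
```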
RI Wang, Qian/AAY-9494-2020; chen, Dong/AAW-7012-2020; , Mengjun/; Zhang, Huifeng/; Wang, Qian/; Zhuang, Zixu/; Liu, Mianxin/
OI , Mengjun/0000-0002-6940-3575; Zhang, Huifeng/0000-0002-2809-0058; Wang,
Qian/0000-0002-3490-3836; Zhuang, Zixu/0000-0001-7451-6999; Liu,
Mianxin/0000-0001-5171-778X
ZA 0
TC 0
ZS 0
ZR 0
ZB 0
Z8 0
Z9 0
C1 Shanghai Jiao Tong Univ, Sch Biomed Engn, Shanghai 200030, Peoples R
China
C1 Shanghai Jiao Tong Univ, Sch Med, Shanghai Mental Hlth Ctr, Shanghai
200030, Peoples R China
C1 Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 ShanghaiTech Univ, State Key Lab Adv Med Mat & Devices, Shanghai 201210,
Peoples R China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 201210, Peoples R China
C3 Shanghai Artificial Intelligence Lab
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100017
PM 38376975
ER
PT J
AU Pan, Jiazhen
Huang, Wenqi
Rueckert, Daniel
Kustner, Thomas
Hammernik, Kerstin
TI Motion-Compensated MR CINE Reconstruction With Reconstruction-Driven
Motion Estimation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2420
EP 2433
DI 10.1109/TMI.2024.3364504
DT Article
PD JUL 2024
PY 2024
AB In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an
effective approach to address highly undersampled acquisitions by
incorporating motion information between frames. In this work, we
propose a novel perspective for addressing the MCMR problem and a more
integrated and efficient solution to the MCMR field. Contrary to
state-of-the-art (SOTA) MCMR methods which break the original problem
into two sub-optimization problems, i.e. motion estimation and
reconstruction, we formulate this problem as a single entity with one
single optimization. Our approach is unique in that the motion
estimation is directly driven by the ultimate goal, reconstruction, but
not by the canonical motion-warping loss (similarity measurement between
motion-warped images and target images). We align the objectives of
motion estimation and reconstruction, eliminating the drawbacks of
artifacts-affected motion estimation and therefore error-propagated
reconstruction. Further, we can deliver high-quality reconstruction and
realistic motion without applying any regularization/smoothness loss
terms, circumventing the non-trivial weighting factor tuning. We
evaluate our method on two datasets: 1) an in-house acquired 2D CINE
dataset for the retrospective study and 2) the public OCMR cardiac
dataset for the prospective study. The conducted experiments indicate
that the proposed MCMR framework can deliver artifact-free motion
estimation and high-quality MR images even for imaging accelerations up
to 20x, outperforming SOTA non-MCMR and MCMR methods in both qualitative
and quantitative evaluation across all experiments. The code is
available at https://github.com/JZPeterPan/MCMR-Recon-Driven-Motion.
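A rough sketch of the single-objective formulation: motion fields and the reconstruction are judged only by the data consistency of the warped reconstruction against each frame's undersampled k-space, with no separate motion-warping loss. The warping, sampling mask, and loss below are illustrative simplifications (single-coil, 2D), not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # frame: (B, 1, H, W); flow: (B, 2, H, W) pixel displacements (dx, dy).
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), 0).float().to(frame.device).expand(b, -1, -1, -1)
    gx = 2 * (base[:, 0] + flow[:, 0]) / (w - 1) - 1   # normalize to [-1, 1]
    gy = 2 * (base[:, 1] + flow[:, 1]) / (h - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)               # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def joint_loss(recon_ref, flows, kspace_frames, mask):
    # One reconstruction drives everything: warp it to each cardiac phase and
    # require the warped image to match that phase's undersampled k-space.
    loss = 0.0
    for flow, kspace in zip(flows, kspace_frames):
        warped = warp(recon_ref, flow)
        loss = loss + ((torch.fft.fft2(warped) - kspace) * mask).abs().pow(2).mean()
    return loss
```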
RI Rueckert, Daniel/C-4393-2008; Rueckert, Daniel/; Hammernik, Kerstin/; Huang, Wenqi/
OI Rueckert, Daniel/0000-0002-5683-5889; Hammernik,
Kerstin/0000-0002-2734-1409; Huang, Wenqi/0000-0002-0155-5409
ZS 0
ZA 0
ZB 0
ZR 0
Z8 0
TC 0
Z9 0
C1 Tech Univ Munich, Klinikum Rechts Isar, D-81675 Munich, Germany
C1 Imperial Coll London, Dept Comp, London SW7 2RH, England
C1 Univ Hosp Tuebingen, Dept Diagnost & Intervent Radiol, Med Image & Data
Anal MIDAS Lab, Tubingen, Germany
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100021
PM 38354077
ER
PT J
AU Han, Kangfu
Li, Gang
Fang, Zhiwen
Yang, Feng
TI Multi-Template Meta-Information Regularized Network for Alzheimer's
Disease Diagnosis Using Structural MRI
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1664
EP 1676
DI 10.1109/TMI.2023.3344384
DT Article
PD MAY 2024
PY 2024
AB Structural magnetic resonance imaging (sMRI) has been widely applied in
computer-aided Alzheimer's disease (AD) diagnosis, owing to its
capabilities in providing detailed brain morphometric patterns and
anatomical features in vivo. Although previous works have validated the
effectiveness of incorporating metadata (e.g., age, gender, and
educational years) for sMRI-based AD diagnosis, existing methods solely
paid attention to metadata-associated correlations with AD (e.g., gender
bias in AD prevalence) or confounding effects (e.g., the issue of normal
aging and metadata-related heterogeneity). Hence, it is difficult to
fully exploit the influence of metadata on AD diagnosis. To address
these issues, we constructed a novel Multi-template Meta-information
Regularized Network (MMRN) for AD diagnosis. Specifically, considering
diagnostic variation resulting from different spatial transformations
onto different brain templates, we first regarded different
transformations as data augmentation for self-supervised learning after
template selection. Since the confounding effects may arise from
excessive attention to meta-information owing to its correlation with
AD, we then designed the modules of weakly supervised meta-information
learning and mutual information minimization to learn and disentangle
meta-information from learned class-related representations, which
accounts for meta-information regularization for disease diagnosis. We
have evaluated our proposed MMRN on two public multi-center cohorts,
including the Alzheimer's Disease Neuroimaging Initiative (ADNI) with
1,950 subjects and the National Alzheimer's Coordinating Center (NACC)
with 1,163 subjects. The experimental results have shown that our
proposed method outperformed the state-of-the-art approaches in the
tasks of AD diagnosis, mild cognitive impairment (MCI) conversion
prediction, and normal control (NC) vs. MCI vs. AD classification.
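Mutual-information minimization between representations and metadata is often implemented with a gradient-reversal adversary, as sketched below; this is one common realization and may differ from the module used in MMRN.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g   # reversed gradient: the features learn to hide metadata

features = torch.randn(8, 128, requires_grad=True)   # class-related representations
meta_head = nn.Linear(128, 3)                        # predicts e.g. an age group
meta = torch.randint(0, 3, (8,))
loss = nn.functional.cross_entropy(meta_head(GradReverse.apply(features)), meta)
loss.backward()   # head learns to predict metadata; features receive -gradient
```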
RI Han, Kangfu/HQZ-8945-2023; Yang, Feng/; Li, Gang/; Han, Kangfu/
OI Yang, Feng/0000-0001-7190-4064; Li, Gang/0000-0001-9585-1382; Han,
Kangfu/0000-0002-3663-9960
ZR 0
ZS 0
ZB 0
ZA 0
TC 2
Z8 0
Z9 2
C1 Southern Med Univ, Sch Biomed Engn, Guangdong Prov Key Lab Med Image
Proc, Guangzhou 510515, Peoples R China
C1 Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec,
Guangzhou 510515, Peoples R China
C1 Univ North Carolina Chapel Hill, Biomed Res Imaging Ctr, Dept Radiol,
Chapel Hill, NC 27599 USA
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800019
PM 38109240
ER
PT J
AU Liang, Quanmin
Ma, Junji
Chen, Xitian
Lin, Qixiang
Shu, Ni
Dai, Zhengjia
Lin, Ying
TI A Hybrid Routing Pattern in Human Brain Structural Network Revealed By
Evolutionary Computation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1895
EP 1909
DI 10.1109/TMI.2024.3351907
DT Article
PD MAY 2024
PY 2024
AB The human brain functional connectivity network (FCN) is constrained and
shaped by the communication processes in the structural connectivity
network (SCN). The underlying communication mechanism thus becomes a
critical issue for understanding the formation and organization of the
FCN. A number of communication models supported by different routing
strategies have been proposed, with shortest path (SP), random diffusion
(DIF), and spatial navigation (NAV) as the most typical; these
respectively require global network knowledge, local knowledge, and both
for path seeking. Yet these models all assume that every brain region
uses a single routing strategy uniformly, ignoring convergent evidence
that supports regional heterogeneity in terms of both biological substrates and
functional roles. In this regard, the current study developed a hybrid
communication model that allowed each brain region to choose a routing
strategy from SP, DIF, and NAV independently. A genetic algorithm was
designed to uncover the underlying region-wise hybrid routing strategy
(namely HYB). The HYB was found to outperform the three typical routing
strategies in predicting FCN and facilitating robust communication.
Analyses on HYB further revealed that brain regions in lower-order
functional modules inclined to route signals using global knowledge,
while those in higher-order functional modules preferred DIF that
requires only local knowledge. Compared to regions that used global
knowledge for routing, regions using DIF had denser structural
connections, participated in more functional modules, but played a less
dominant role within modules. Together, our findings further evidenced
that hybrid routing underpins efficient SCN communication and locally
heterogeneous structure-function coupling.
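The search described above can be skeletonized as a genetic algorithm over per-region strategy codes (0 = SP, 1 = DIF, 2 = NAV); the fitness function below is a toy placeholder where the FCN-prediction score would go, and the GA operators are generic textbook choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N_REGIONS, POP, GENS, STRATEGIES = 90, 40, 100, 3

def fitness(chromosome):
    # Placeholder: in the study this would score how well the region-wise
    # hybrid routing model predicts the empirical FCN from the SCN.
    return -np.abs(chromosome - 1).sum()   # toy objective

pop = rng.integers(0, STRATEGIES, size=(POP, N_REGIONS))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]            # truncation selection
    cut = rng.integers(1, N_REGIONS, size=POP // 2)
    kids = np.array([np.concatenate([parents[rng.integers(len(parents))][:c],
                                     parents[rng.integers(len(parents))][c:]])
                     for c in cut])                          # one-point crossover
    mutate = rng.random(kids.shape) < 0.02
    kids[mutate] = rng.integers(0, STRATEGIES, mutate.sum())  # random mutation
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(c) for c in pop])]             # region-wise strategy
```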
RI Lin, Qixiang/GXG-3598-2022; Shu, Ni/B-9583-2014; Liang, Quanmin/; Lin, Ying/
OI Liang, Quanmin/0000-0001-9935-5167; Lin, Ying/0000-0003-4141-1490
ZA 0
ZB 0
Z8 0
ZS 0
TC 0
ZR 0
Z9 0
C1 Sun Yat Sen Univ, Dept Psychol, Guangzhou 510006, Peoples R China
C1 Peng Cheng Lab, Res Ctr Artificial Intelligence, Shenzhen 518066,
Peoples R China
C1 Emory Univ, Sch Med, Dept Neurol, Atlanta, GA 30322 USA
C1 Beijing Normal Univ, State Key Lab Cognit Neurosci & Learning, Beijing
100875, Peoples R China
C1 Beijing Normal Univ, IDG McGovern Inst Brain Res, Beijing 100875,
Peoples R China
C1 Sun Yat Sen Univ, Zhongshan Sch Med, Guangdong Prov Key Lab Brain Funct
& Dis, Guangzhou 510080, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800022
PM 38194401
ER
PT J
AU Yao, Qingsong
He, Zecheng
Li, Yuexiang
Lin, Yi
Ma, Kai
Zheng, Yefeng
Zhou, S. Kevin
TI Adversarial Medical Image With Hierarchical Feature Hiding
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1296
EP 1307
DI 10.1109/TMI.2023.3335098
DT Article
PD APR 2024
PY 2024
AB Deep learning based methods for medical images can be easily compromised
by adversarial examples (AEs), posing a serious security risk in
clinical decision-making. It has been discovered that conventional adversarial
attacks like PGD, which optimize the classification logits, are easy to
distinguish in the feature space, resulting in accurate reactive
defenses. To better understand this phenomenon and reassess the
reliability of the reactive defenses for medical AEs, we thoroughly
investigate the characteristic of conventional medical AEs.
Specifically, we first theoretically prove that conventional adversarial
attacks change the outputs by continuously optimizing vulnerable
features in a fixed direction, thereby leading to outlier
representations in the feature space. Then, a stress test is conducted
to reveal the vulnerability of medical images, by comparing with natural
images. Interestingly, this vulnerability is a double-edged sword, which
can be exploited to hide AEs. We then propose a simple-yet-effective
hierarchical feature constraint (HFC), a novel add-on to conventional
white-box attacks, which assists to hide the adversarial feature in the
target feature distribution. The proposed method is evaluated on three
medical datasets, both 2D and 3D, with different modalities. The
experimental results demonstrate the superiority of HFC, i.e., it
bypasses an array of state-of-the-art adversarial medical AE detectors
more efficiently than competing adaptive attacks, which reveals the
deficiencies of medical reactive defenses and allows more robust
defenses to be developed in the future.
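A single-layer toy version of adding a feature-space hiding term to a PGD-style targeted attack is sketched below; the published HFC constrains features hierarchically across multiple layers, which this simplification does not reproduce, and the centroid target is an assumption.

```python
import torch
import torch.nn.functional as F

def hfc_like_attack(model, feat_extractor, x, y_target, centroid,
                    eps=8/255, alpha=2/255, steps=20, lam=1.0):
    # centroid: target class's mean latent feature, precomputed on clean data.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (F.cross_entropy(model(x_adv), y_target)        # fool the classifier
                + lam * F.mse_loss(feat_extractor(x_adv), centroid))  # hide in feature space
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()         # targeted descent step
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```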
RI Ma, kai/KSL-8338-2024; Zheng, Yefeng/ABG-7053-2020; Lin, Yi/JKH-5104-2023; Li, Yuexiang/AAZ-2676-2020; Ma, Kai/; He, Zecheng/; Zheng, Yefeng/; LIN, Yi/; Zhou, S. Kevin/
OI Ma, Kai/0000-0003-2805-3692; He, Zecheng/0000-0003-2639-2826; Zheng,
Yefeng/0000-0003-2195-2847; LIN, Yi/0000-0002-7635-2518; Zhou, S.
Kevin/0000-0002-6881-4444
ZS 0
TC 0
ZA 0
ZB 0
Z8 0
ZR 0
Z9 0
C1 Chinese Acad Sci, Inst Comp Technol, Beijing 10045, Peoples R China
C1 Univ Chinese Acad Sci, Beijing, Peoples R China
C1 Meta Real Labs, Burlingame, CA 94010 USA
C1 Jarvis Res Ctr, Tencent YouTu Lab, Shenzhen 518057, Peoples R China
C1 Guangxi Med Univ, Guangxi Key Lab Genom & Personalized Med, Med AI ReS
MARS Grp, Nanning 530021, Peoples R China
C1 Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R
China
C1 Univ Sci & Technol China, Sch Biomed Engn, Hefei 230026, Peoples R China
C1 Univ Sci & Technol China, Suzhou Inst Adv Res, Hefei 230026, Peoples R
China
C1 Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
C3 Meta Real Labs
SN 0278-0062
EI 1558-254X
DA 2024-07-06
UT WOS:001196733400025
PM 37995172
ER
PT J
AU Zhang, Fan
Cho, Kang Ik Kevin
Seitz-Holland, Johanna
Ning, Lipeng
Legarreta, Jon Haitz
Rathi, Yogesh
Westin, Carl-Fredrik
O'Donnell, Lauren J.
Pasternak, Ofer
TI DDParcel: Deep Learning Anatomical Brain Parcellation From Diffusion MRI
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1191
EP 1202
DI 10.1109/TMI.2023.3331691
DT Article
PD MAR 2024
PY 2024
AB Parcellation of anatomically segregated cortical and subcortical brain
regions is required in diffusion MRI (dMRI) analysis for region-specific
quantification and better anatomical specificity of tractography. Most
current dMRI parcellation approaches compute the parcellation from
anatomical MRI (T1- or T2-weighted) data, using tools such as FreeSurfer
or CAT12, and then register it to the diffusion space. However, the
registration is challenging due to image distortions and low resolution
of dMRI data, often resulting in mislabeling in the derived brain
parcellation. Furthermore, these approaches are not applicable when
anatomical MRI data is unavailable. As an alternative, we developed the
Deep Diffusion Parcellation (DDParcel), a deep learning method for fast
and accurate parcellation of brain anatomical regions directly from dMRI
data. The input to DDParcel are dMRI parameter maps and the output are
labels for 101 anatomical regions corresponding to the FreeSurfer
Desikan-Killiany (DK) parcellation. A multi-level fusion network
leverages complementary information in the different input maps, at
three network levels: input, intermediate layer, and output. DDParcel
learns the registration of diffusion features to anatomical MRI from the
high-quality Human Connectome Project data. Then, to predict brain
parcellation for a new subject, the DDParcel network no longer requires
anatomical MRI data but only the dMRI data. Comparing DDParcel's
parcellation with T1w-based parcellation shows higher test-retest
reproducibility and a higher regional homogeneity, while requiring much
less computational time. Generalizability is demonstrated on a range of
populations and dMRI acquisition protocols. Utility of DDParcel's
parcellation is demonstrated on tractography analysis for fiber tract
identification.
RI O'Donnell, Lauren/ABA-7546-2020; Seitz-Holland, Johanna/KXA-4324-2024; Ning, Lipeng/; O'Donnell, Lauren Jean/; Legarreta Gorrono, Jon Haitz/N-4425-2017
OI Ning, Lipeng/0000-0003-4992-459X; O'Donnell, Lauren
Jean/0000-0003-0197-7801; Legarreta Gorrono, Jon
Haitz/0000-0002-9661-1396
ZA 0
TC 0
Z8 0
ZS 0
ZR 0
ZB 0
Z9 0
C1 Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu
611731, Peoples R China
C1 Harvard Med Sch, Brigham & Womens Hosp, Dept Radiol, Boston, MA 02115
USA
C1 Harvard Med Sch, Brigham & Womens Hosp, Dept Psychiat, Boston, MA 02115
USA
C1 Brigham & Womens Hosp, Dept Psychiat, Boston, MA 02115 USA
C1 Brigham & Womens Hosp, Dept Radiol, Boston, MA 02115 USA
C1 Harvard Med Sch, Massachusetts Gen Hosp, Dept Psychiat, Boston, MA 02114
USA
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600016
PM 37943635
ER
PT J
AU Hashemi, Ali
Cai, Chang
Gao, Yijing
Ghosh, Sanjay
Mueller, Klaus-Robert
Nagarajan, Srikantan S.
Haufe, Stefan
TI Joint Learning of Full-Structure Noise in Hierarchical Bayesian
Regression Models
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 610
EP 624
DI 10.1109/TMI.2022.3224085
DT Article
PD FEB 2024
PY 2024
AB We consider the reconstruction of brain activity from
electroencephalography (EEG). This inverse problem can be formulated as
a linear regression with independent Gaussian scale mixture priors for
both the source and noise components. Crucial factors influencing the
accuracy of the source estimation are not only the noise level but also
its correlation structure, but existing approaches have not addressed
the estimation of noise covariance matrices with full structure. To
address this shortcoming, we develop hierarchical Bayesian (type-II
maximum likelihood) models for observations with latent variables for
source and noise, which are estimated jointly from data. As an extension
to classical sparse Bayesian learning (SBL), where across-sensor
observations are assumed to be independent and identically distributed,
we consider Gaussian noise with full covariance structure. Using the
majorization-maximization framework and Riemannian geometry, we derive
an efficient algorithm for updating the noise covariance along the
manifold of positive definite matrices. We demonstrate that our
algorithm has guaranteed and fast convergence and validate it in
simulations and with real MEG data. Our results demonstrate that the
novel framework significantly improves upon state-of-the-art techniques
in the real-world scenario where the noise is indeed non-diagonal and
full-structured. Our method has applications in many domains beyond
biomagnetic inverse problems.
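For readers unfamiliar with updates along the manifold of positive
definite matrices, the sketch below illustrates the kind of geodesic
interpolation the abstract alludes to. The function names and the
specific update rule are illustrative assumptions, not the paper's
exact algorithm.

    import numpy as np

    def _spd_power(M, p):
        # Fractional power of a symmetric positive definite (SPD)
        # matrix via eigendecomposition.
        w, V = np.linalg.eigh(M)
        return (V * w**p) @ V.T

    def spd_geodesic_point(A, B, s=0.5):
        # Point at fraction s along the Riemannian geodesic from SPD
        # matrix A to SPD matrix B; s = 0.5 gives the geometric mean
        # A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2).
        Ah = _spd_power(A, 0.5)
        Aih = _spd_power(A, -0.5)
        return Ah @ _spd_power(Aih @ B @ Aih, s) @ Ah

A noise-covariance update of this form stays on the SPD manifold by
construction, which is the property the majorization-maximization
derivation exploits.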
RI Hashemi, Ali/AAB-2562-2019; Mueller, Klaus-Robert/C-3196-2013; Nagarajan, Srikantan/; Mueller, Klaus-Robert/; Hashemi, Ali/; Haufe, Stefan/
OI Nagarajan, Srikantan/0000-0001-7209-3857; Mueller,
Klaus-Robert/0000-0002-3861-7685; Hashemi, Ali/0000-0003-4957-4630;
Haufe, Stefan/0000-0003-1470-9195
ZR 0
ZA 0
ZB 1
Z8 0
TC 3
ZS 0
Z9 3
C1 Tech Univ Berlin, Fac Elect Engn & Comp Sci 4, Inst Software Engn &
Theoret Comp Sci, Uncertainty Inverse Modeling & Machine Learning G,
D-10587 Berlin, Germany
C1 Tech Univ Berlin, Fac Elect Engn & Comp Sci 4, Inst Software Engn &
Theoret Comp Sci, Machine Learning Grp, D-10587 Berlin, Germany
C1 Univ Calif San Francisco, Dept Radiol & Biomed Imaging, San Francisco,
CA 94143 USA
C1 Tech Univ Berlin, Machine Learning Grp, D-10587 Berlin, Germany
C1 BIFOLD Berlin Inst Fdn Learning & Data, D-10623 Berlin, Germany
C1 Korea Univ, Dept Artificial Intelligence, Seoul 02841, South Korea
C1 Max Planck Inst Informat, D-66123 Saarbrucken, Germany
C1 Tech Univ Berlin, Uncertainty Inverse Modeling & Machine Learning G,
D-10587 Berlin, Germany
C1 Phys Tech Bundesanstalt Braunschweig & Berlin, D-10587 Berlin, Germany
C1 Charite Univ Med Berlin, Berlin Ctr Adv Neuroimaging, D-10117 Berlin,
Germany
C3 BIFOLD Berlin Inst Fdn Learning & Data
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400012
PM 36423312
ER
PT J
AU Lian, Jie
Liu, Jingyu
Zhang, Shu
Gao, Kai
Liu, Xiaoqing
Zhang, Dingwen
Yu, Yizhou
TI A Structure-Aware Relation Network for Thoracic Diseases Detection and
Segmentation (vol 40, pg 2042, 2021)
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 899
EP 899
DI 10.1109/TMI.2023.3346411
DT Correction
PD FEB 2024
PY 2024
RI Liu, Xiaoxiao/HNI-6180-2023; zhang, dingwen/R-3463-2019
ZA 0
ZS 0
ZB 0
Z8 0
ZR 0
TC 0
Z9 0
C1 Deepwise Artificial Intelligence Lab, Beijing 100080, Peoples R China
C1 Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
C1 Northwestern Polytech Univ, Sch Automat, Brain & Artificial Intelligence
Lab, Xian 710072, Peoples R China
C1 Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
C3 Deepwise Artificial Intelligence Lab
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400013
ER
PT J
AU Wang, Jiacheng
Jin, Yueming
Stoyanov, Danail
Wang, Liansheng
TI FedDP: Dual Personalization in Federated Medical Image Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 297
EP 308
DI 10.1109/TMI.2023.3299206
DT Article
PD JAN 2024
PY 2024
AB Personalized federated learning (PFL) addresses the data heterogeneity
challenge faced by general federated learning (GFL). Rather than
learning a single global model, PFL adapts a collection of models to the
unique feature distribution of each site. However,
current PFL methods rarely consider self-attention networks which can
handle data heterogeneity by long-range dependency modeling and they do
not utilize prediction inconsistencies in local models as an indicator
of site uniqueness. In this paper, we propose FedDP, a novel federated
learning scheme with dual personalization, which improves model
personalization from both feature and prediction aspects to boost image
segmentation results. We leverage long-range dependencies by designing a
local query (LQ) that decouples the query embedding layer from each
local model, whose parameters are trained privately to better adapt to
the respective feature distribution of the site. We then propose
inconsistency-guided calibration (IGC), which exploits the inter-site
prediction inconsistencies to accommodate the model learning
concentration. By encouraging a model to penalize pixels with larger
inconsistencies, we better tailor prediction-level patterns to each
local site. Experimentally, we compare FedDP with the state-of-the-art
PFL methods on two popular medical image segmentation tasks with
different modalities, where our results consistently outperform others
on both tasks. Our code and models are available at
https://github.com/jcwang123/PFL-Seg-Trans.
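As a rough sketch of the dual-personalization idea (not the released
implementation at the URL above), the snippet below keeps hypothetical
local-query parameters private to each site while averaging the rest
FedAvg-style; the key-matching predicate is an assumption.

    import copy

    def aggregate(site_states, is_local=lambda k: "query_embed" in k):
        # site_states: list of per-site parameter dicts (name -> array).
        # Shared parameters are averaged across sites; parameters
        # matching the (hypothetical) local-query predicate stay private.
        shared_keys = [k for k in site_states[0] if not is_local(k)]
        global_avg = {k: sum(s[k] for s in site_states) / len(site_states)
                      for k in shared_keys}
        new_states = []
        for s in site_states:
            ns = copy.deepcopy(s)
            ns.update(global_avg)  # overwrite shared params, keep LQ params
            new_states.append(ns)
        return new_states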
RI wang, jiacheng/GQA-7809-2022; jiacheng, wang/; Jin, Yueming/
OI jiacheng, wang/0000-0003-2595-265X; Jin, Yueming/0000-0003-3775-3877
TC 3
ZA 0
ZB 1
Z8 1
ZS 0
ZR 0
Z9 4
C1 Xiamen Univ, Dept Comp Sci, Sch Informat, Xiamen, Peoples R China
C1 Xiamen Univ, Natl Inst Data Sci Hlth & Med, Xiamen 361005, Peoples R
China
C1 Univ Coll Sci London, Wellcome EPSRC Ctr Intervent & Surg Sci WEISS,
London, England
C1 UCL, Dept Comp Sci, London W1W 7TY, England
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600008
PM 37494156
ER
PT J
AU Kadry, Karim
Olender, Max L
Schuh, Andreas
Karmakar, Abhishek
Petersen, Kersten
Schaap, Michiel
Marlevi, David
UpdePac, Adam
Mizukami, Takuya
Taylor, Charles
Edelman, Elazer R
Nezami, Farhad R
TI Morphology-based non-rigid registration of coronary computed tomography
and intravascular images through virtual catheter path optimization.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3474053
DT Journal Article
PD 2024-Oct-07
PY 2024
AB Coronary computed tomography angiography (CCTA) provides 3D information
on obstructive coronary artery disease, but cannot fully visualize
high-resolution features within the vessel wall. Intravascular imaging,
in contrast, can spatially resolve atherosclerotic tissue in
cross-sectional slices, but is limited in capturing 3D relationships
between slices.
Co-registering CCTA and intravascular images enables a variety of
clinical research applications but is time consuming and user-dependent.
This is due to intravascular images suffering from non-rigid distortions
arising from irregularities in the imaging catheter path. To address
these issues, we present a morphology-based framework for the rigid and
non-rigid matching of intravascular images to CCTA images. To do this,
we find the optimal virtual catheter path that samples the coronary
artery in CCTA image space to recapitulate the coronary artery
morphology observed in the intravascular image. We validate our
framework on a multi-center cohort of 40 patients using bifurcation
landmarks as ground truth for longitudinal and rotational registration.
Our registration approach significantly outperforms other approaches for
bifurcation alignment. By providing a differentiable framework for
multi-modal vascular co-registration, our framework reduces the manual
effort required to conduct large-scale multi-modal clinical studies and
enables the development of machine learning-based co-registration
approaches.
ZS 0
ZA 0
ZR 0
Z8 0
ZB 0
TC 0
Z9 0
EI 1558-254X
DA 2024-10-09
UT MEDLINE:39374277
PM 39374277
ER
PT J
AU Leconte, Alexis
Poree, Jonathan
Rauby, Brice
Wu, Alice
Ghigo, Nin
Xing, Paul
Lee, Stephen
Bourquin, Chloe
Ramos-Palacios, Gerardo
Sadikot, Abbas F
Provost, Jean
TI A Tracking prior to Localization workflow for Ultrasound Localization
Microscopy.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3456676
DT Journal Article
PD 2024-Sep-09
PY 2024
AB Ultrasound Localization Microscopy (ULM) has proven effective in
resolving microvascular structures and local mean velocities at
sub-diffraction-limited scales, offering high-resolution imaging
capabilities. Dynamic ULM (DULM) enables the creation of angiography or
velocity movies throughout cardiac cycles. Currently, these techniques
rely on a Localization-and-Tracking (LAT) workflow that consists of
detecting microbubbles (MB) in each frame before pairing them to
generate tracks. While conventional LAT methods perform well at low
concentrations, they suffer from longer acquisition times and degraded
localization and tracking accuracy at higher concentrations, leading to
biased angiogram reconstruction and velocity estimation. In this study,
we propose a novel approach to address these challenges by reversing the
current workflow. The proposed method, Tracking-and-Localization (TAL),
relies on first tracking the MB and then performing localization.
Through comprehensive benchmarking using both in silico and in vivo
experiments and employing various metrics to quantify ULM angiography
and velocity maps, we demonstrate that the TAL method consistently
outperforms the reference LAT workflow. Moreover, when applied to DULM,
TAL successfully extracts velocity variations along the cardiac cycle
with improved repeatability. The findings of this work highlight the
effectiveness of the TAL approach in overcoming the limitations of
conventional LAT methods, providing enhanced ULM angiography and
velocity imaging.
OI Rauby, Brice/0000-0003-0737-7954; Xing, Paul/0000-0003-1779-7821
ZB 0
Z8 0
ZS 0
ZA 0
TC 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-09-11
UT MEDLINE:39250374
PM 39250374
ER
PT J
AU Song, Xuegang
Shu, Kaixiang
Yang, Peng
Zhao, Cheng
Zhou, Feng
Frangi, Alejandro F
Xiao, Xiaohua
Dong, Lei
Wang, Tianfu
Wang, Shuqiang
Lei, Baiying
TI Knowledge-aware Multisite Adaptive Graph Transformer for Brain Disorder
Diagnosis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3453419
DT Journal Article
PD 2024-Sep-02
PY 2024
AB Brain disorder diagnosis via resting-state functional magnetic resonance
imaging (rs-fMRI) is usually limited due to the complex imaging features
and sample size. For brain disorder diagnosis, the graph convolutional
network (GCN) has achieved remarkable success by capturing interactions
between individuals and the population. However, there are mainly three
limitations: 1) The previous GCN approaches consider the non-imaging
information in edge construction but ignore the sensitivity differences
of features to non-imaging information. 2) The previous GCN approaches
solely focus on establishing interactions between subjects (i.e.,
individuals and the population), disregarding the essential relationship
between features. 3) Multisite data increase the sample size to help
classifier training, but the inter-site heterogeneity limits the
performance to some extent. This paper proposes a knowledge-aware
multisite adaptive graph Transformer to address the above problems.
First, we evaluate the sensitivity of features to each piece of
non-imaging information, and then construct feature-sensitive and
feature-insensitive subgraphs. Second, after fusing the above subgraphs,
we integrate a Transformer module to capture the intrinsic relationship
between features. Third, we design a domain adaptive GCN using multiple
loss function terms to relieve data heterogeneity and to produce the
final classification results. Last, the proposed framework is validated
on two brain disorder diagnostic tasks. Experimental results show that
the proposed framework can achieve state-of-the-art performance.
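A minimal sketch of the first step, assuming sensitivity is scored as
the absolute correlation between each imaging feature and a
binary-coded piece of non-imaging information; the actual scoring rule
in the paper may differ.

    import numpy as np

    def split_features(X, nonimaging, thr=None):
        # X: (n_subjects, n_features) imaging features;
        # nonimaging: (n_subjects,) binary-coded variable (e.g., site).
        sens = np.abs([np.corrcoef(X[:, j], nonimaging)[0, 1]
                       for j in range(X.shape[1])])
        thr = np.median(sens) if thr is None else thr
        # Feature-sensitive and feature-insensitive subsets, each of
        # which would seed its own population subgraph.
        return X[:, sens > thr], X[:, sens <= thr]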
RI shu, kai xiang/; Frangi, Alejandro/C-6500-2008; Zhou, Feng/
OI shu, kai xiang/0009-0003-6471-6876; Frangi,
Alejandro/0000-0002-2675-528X; Zhou, Feng/0000-0001-6123-073X
ZR 0
ZS 0
ZB 0
TC 0
ZA 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-09-04
UT MEDLINE:39222450
PM 39222450
ER
PT J
AU Chakravarty, Arunava
Emre, Taha
Leingang, Oliver
Riedl, Sophie
Mai, Julia
Scholl, Hendrik P. N.
Sivaprasad, Sobha
Rueckert, Daniel
Lotery, Andrew
Schmidt-Erfurth, Ursula
Bogunovic, Hrvoje
CA PINNACLE Consortium
TI Morph-SSL: Self-Supervision With Longitudinal Morphing for Forecasting
AMD Progression From OCT Volumes
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3224
EP 3239
DI 10.1109/TMI.2024.3390940
DT Article
PD SEP 2024
PY 2024
AB The lack of reliable biomarkers makes predicting the conversion from
intermediate to neovascular age-related macular degeneration (iAMD,
nAMD) a challenging task. We develop a Deep Learning (DL) model to
predict the future risk of conversion of an eye from iAMD to nAMD from
its current OCT scan. Although eye clinics generate vast amounts of
longitudinal OCT scans to monitor AMD progression, only a small subset
can be manually labeled for supervised DL. To address this issue, we
propose Morph-SSL, a novel Self-supervised Learning (SSL) method for
longitudinal data. It uses pairs of unlabelled OCT scans from different
visits and involves morphing the scan from the previous visit to the
next. The Decoder predicts the transformation for morphing and ensures a
smooth feature manifold that can generate intermediate scans between
visits through linear interpolation. Next, the Morph-SSL trained
features are input to a Classifier which is trained in a supervised
manner to model the cumulative probability distribution of the time to
conversion with a sigmoidal function. Morph-SSL was trained on
unlabelled scans of 399 eyes (3570 visits). The Classifier was evaluated
with a five-fold cross-validation on 2418 scans from 343 eyes with
clinical labels of the conversion date. The Morph-SSL features achieved
an AUC of 0.779 in predicting the conversion to nAMD within the next 6
months, outperforming the same network when trained end-to-end from
scratch or pre-trained with popular SSL methods. Automated prediction of
the future risk of nAMD onset can enable timely treatment and
individualized AMD management.
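The sigmoidal time-to-conversion model can be written compactly; the
parameter names below (t0, scale) are assumptions for illustration.

    import numpy as np

    def conversion_cdf(t, t0, scale):
        # Cumulative probability that an eye converts to nAMD by time
        # t, modeled with a sigmoid located at t0 with slope 1/scale.
        return 1.0 / (1.0 + np.exp(-(t - t0) / scale))

    # e.g., risk of conversion within 6 months for a predicted
    # t0 = 9 months and scale = 3 months:
    risk_6m = conversion_cdf(6.0, t0=9.0, scale=3.0)   # ~0.27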
RI Bogunovic, Hrvoje/J-3445-2014; emre, taha/JVM-9949-2024; Chakravarty, Arunava/AAS-1580-2020; Rueckert, Daniel/C-4393-2008; Sivaprasad, S./D-6876-2015; Mai, Julia/; Chakravarty, Arunava/; Sivaprasad, Sobha/; Schmidt-Erfurth, Ursula/; Rueckert, Daniel/
OI Mai, Julia/0000-0002-0528-9742; Chakravarty,
Arunava/0000-0003-3646-0650; Sivaprasad, Sobha/0000-0001-8952-0659;
Schmidt-Erfurth, Ursula/0000-0002-7788-7311; Rueckert,
Daniel/0000-0002-5683-5889
ZR 0
TC 1
ZA 0
Z8 0
ZB 0
ZS 0
Z9 1
C1 Med Univ Vienna, Dept Ophthalmol & Optometry, A-1090 Vienna, Austria
C1 Inst Mol & Clin Ophthalmol Basel, CH-4031 Basel, Switzerland
C1 Univ Basel, Dept Ophthalmol, CH-4001 Basel, Switzerland
C1 Moorfields Eye Hosp NHS Fdn Trust, NIHR Moorfields Biomed Res Ctr,
London EC1V 2PD, England
C1 Imperial Coll London, BioMedIA, London SW7 2AZ, England
C1 Tech Univ Munich, Inst AI & Informat Med, Klinikum Rechts Isar, D-80333
Munich, Germany
C1 Univ Southampton, Fac Med, Clin & Expt Sci, Southampton SO17 1BJ,
England
C1 Med Univ Vienna, Christian Doppler Lab Artificial Intelligence Reti,
A-1090 Vienna, Austria
C3 Inst Mol & Clin Ophthalmol Basel
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600023
PM 38635383
ER
PT J
AU Noelke, Jan-Hinrich
Adler, Tim J.
Schellenberg, Melanie
Dreher, Kris K.
Holzwarth, Niklas
Bender, Christoph J.
Tizabi, Minu D.
Seitel, Alexander
Maier-Hein, Lena
TI Photoacoustic Quantification of Tissue Oxygenation Using Conditional
Invertible Neural Networks
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3366
EP 3376
DI 10.1109/TMI.2024.3403417
DT Article
PD SEP 2024
PY 2024
AB Intelligent systems in interventional healthcare depend on the reliable
perception of the environment. In this context, photoacoustic tomography
(PAT) has emerged as a non-invasive, functional imaging modality with
great clinical potential. Current research focuses on converting the
high-dimensional, not human-interpretable spectral data into the
underlying functional information, specifically the blood oxygenation.
One of the largely unexplored issues stalling clinical advances is the
fact that the quantification problem is ambiguous, i.e. that radically
different tissue parameter configurations could lead to almost identical
photoacoustic spectra. In the present work, we tackle this problem with
conditional Invertible Neural Networks (cINNs). Going beyond traditional
point estimates, our network is used to compute an approximation of the
conditional posterior density of tissue parameters given the
photoacoustic spectrum. To this end, an automatic mode detection
algorithm extracts the plausible solution from the sample-based
posterior. According to a comprehensive validation study based on both
synthetic and real images, our approach is well-suited for exploring
ambiguity in quantitative PAT.
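A toy sketch of the sampling principle, with a single conditional
affine map standing in for the learned coupling stack; every function
here is illustrative, not the paper's network.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_posterior(spectrum, n_samples=1000):
        # Push latent Gaussian samples through an invertible map
        # conditioned on the observed photoacoustic spectrum; a real
        # cINN learns this map from data.
        mu = np.tanh(spectrum).mean() * np.ones(2)        # toy conditioner
        L = (1.0 + 0.1 * np.abs(spectrum).mean()) * np.eye(2)
        z = rng.standard_normal((n_samples, 2))
        return mu + z @ L.T

    # Mode detection could then pick, e.g., the histogram peak of each
    # tissue parameter from the returned samples.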
RI Seitel, Alexander/AAR-2520-2021; Nolke, Jan-Hinrich/; Schellenberg, Melanie/; Seitel, Alexander/; Holzwarth, Niklas/; Dreher, Kris K./; Adler, Tim J/; Maier-Hein, Lena/; Bender, Christoph Julien/
OI Nolke, Jan-Hinrich/0000-0001-7600-3839; Schellenberg,
Melanie/0000-0002-7911-5622; Seitel, Alexander/0000-0002-5919-9646;
Holzwarth, Niklas/0000-0003-4440-2047; Dreher, Kris
K./0000-0002-9179-9414; Adler, Tim J/0000-0002-3424-6629; Maier-Hein,
Lena/0000-0003-4910-9368; Bender, Christoph Julien/0009-0009-5432-774X
ZB 0
Z8 0
TC 0
ZR 0
ZS 0
ZA 0
Z9 0
C1 German Canc Res Ctr, Div Intelligent Med Syst IMSY, D-69120 Heidelberg,
Germany
C1 Heidelberg Univ, Fac Math & Comp Sci, D-69117 Heidelberg, Germany
C1 hema to GmbH, D-81667 Munich, Germany
C1 NCT Heidelberg, DKFZ & Univ Hosp Heidelberg, Natl Ctr Tumor Dis NCT,
D-69120 Heidelberg, Germany
C1 Helmholtz Informat & Data Sci Sch Hlth HIDSS4Hlth, D-69120 Heidelberg,
Germany
C1 Mem Sloan Kettering Canc Ctr, Dept Med Phys, New York, NY 10065 USA
C1 Heidelberg Univ, Fac Phys & Astron, D-69117 Heidelberg, Germany
C1 Heidelberg Univ, Med Fac, D-69117 Heidelberg, Germany
C3 hema to GmbH
C3 Helmholtz Informat & Data Sci Sch Hlth HIDSS4Hlth
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600006
PM 38787678
ER
PT J
AU Ye, Shuquan
Xu, Yan
Chen, Dongdong
Han, Songfang
Liao, Jing
TI Learning a Single Network for Robust Medical Image Segmentation With
Noisy Labels
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3188
EP 3199
DI 10.1109/TMI.2024.3389776
DT Article
PD SEP 2024
PY 2024
AB Robust segmentation with noisy labels is an important problem in
medical imaging due to the difficulty of acquiring high-quality
annotations. Despite their enormous success, recent methods still
require multiple networks to construct their frameworks and focus on
limited application scenarios, which leads to inflexibility in practical
applications. They also do not explicitly consider the coarse boundary
label problem, which yields sub-optimal results. To
overcome these challenges, we propose a novel Simultaneous Edge
Alignment and Memory-Assisted Learning (SEAMAL) framework for
noisy-label robust segmentation. It achieves single-network robust
learning, which is applicable for both 2D and 3D segmentation, in both
Set-HQ-knowable and Set-HQ-agnostic scenarios. Specifically, to achieve
single-model noise robustness, we design a Memory-assisted Selection and
Correction module (MSC) that utilizes predictive history consistency
from the Prediction Memory Bank to distinguish between reliable and
non-reliable labels pixel-wisely, and that updates the reliable ones at
the superpixel level. To overcome the coarse boundary label problem,
which is common in practice, and to better utilize shape-relevant
information at the boundary, we propose an Edge Detection Branch (EDB)
that explicitly learns the boundary via an edge detection layer with
only slight additional computational cost, and we improve the sharpness
and precision of the boundary with a thinning loss. Extensive
experiments verify that SEAMAL outperforms previous works significantly.
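A minimal sketch of a prediction memory bank, under the assumption that
a pixel's label counts as reliable when recent predictions agree with
it most of the time; the paper's exact criterion and its
superpixel-level update are omitted here.

    import numpy as np

    class PredictionMemory:
        def __init__(self, shape, k=5):
            # Ring buffer of the last k predicted label maps.
            self.hist = np.zeros((k,) + shape, dtype=np.int64)
            self.t = 0

        def update(self, pred):
            self.hist[self.t % len(self.hist)] = pred
            self.t += 1

        def reliable_mask(self, labels, min_agree=0.8):
            # Pixel-wise fraction of stored predictions that agree
            # with the given labels.
            agree = (self.hist == labels[None]).mean(axis=0)
            return agree >= min_agree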
RI Chen, Dongdong/M-9516-2019; Han, Songfang/AAA-5781-2020; Ye, Shuquan/; Han, Songfang/; Chen, Dongdong/; LIAO, Jing/
OI Ye, Shuquan/0000-0001-5121-8040; Han, Songfang/0000-0002-6432-8764;
Chen, Dongdong/0000-0002-4642-4373; LIAO, Jing/0000-0001-7014-5377
Z8 0
ZR 0
ZA 0
TC 0
ZB 0
ZS 0
Z9 0
C1 City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
C1 Beihang Univ, Dept Biol Sci & Med Engn, Beijing 100191, Peoples R China
C1 Microsoft, Redmond, WA 98052 USA
C1 Snap, Santa Monica, CA 90405 USA
C3 Snap
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600008
PM 38635382
ER
PT J
AU Dai, Tianjie
Zhang, Ruipeng
Hong, Feng
Yao, Jiangchao
Zhang, Ya
Wang, Yanfeng
TI UniChest: Conquer-and-Divide Pre-Training for Multi-Source Chest X-Ray
Classification
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2901
EP 2912
DI 10.1109/TMI.2024.3381123
DT Article
PD AUG 2024
PY 2024
AB Vision-Language Pre-training (VLP) that utilizes the multi-modal
information to promote the training efficiency and effectiveness, has
achieved great success in vision recognition of natural domains and
shown promise in medical imaging diagnosis for the Chest X-Rays (CXRs).
However, current works mainly explore single CXR datasets, which leaves
the potential of this powerful paradigm on larger hybrids of
multi-source CXR datasets untapped. We find that although blending
samples from diverse sources helps improve model generalization, it is
still challenging to maintain consistent superiority on the task of each
source due to the heterogeneity among sources. To handle this dilemma,
we design
a Conquer-and-Divide pre-training framework, termed as UniChest, aiming
to make full use of the collaboration benefit of multiple sources of
CXRs while reducing the negative influence of the source heterogeneity.
Specifically, the "Conquer" stage in UniChest encourages the model to
sufficiently capture multi-source common patterns, and the "Divide"
stage helps squeeze personalized patterns into different small experts
(query networks). We conduct thorough experiments on many benchmarks,
e.g., ChestX-ray14, CheXpert, Vindr-CXR, Shenzhen, Open-I and SIIM-ACR
Pneumothorax, verifying the effectiveness of UniChest over a range of
baselines, and release our codes and pre-training models at
https://github.com/Elfenreigen/UniChest.
RI Yao, Jiangchao/JOZ-1621-2023; zhang, ruipeng/GPS-7820-2022; Zhang, Ya/Y-8255-2019; Dai, Tianjie/; zhang, ruipeng/; Zhang, Ya/; Hong, Feng/
OI Dai, Tianjie/0000-0002-7420-2356; zhang, ruipeng/0000-0002-4372-4987;
Zhang, Ya/0000-0002-5390-9053; Hong, Feng/0009-0004-0137-4087
ZB 0
TC 0
Z8 0
ZS 0
ZA 0
ZR 0
Z9 0
C1 Shanghai Jiao Tong Univ, Cooperat Medianet Innovat Ctr, Shanghai 200240,
Peoples R China
C1 Shanghai AI Lab, Shanghai 200032, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200004
PM 38526891
ER
PT J
AU Chen, Ming
Bian, Yijun
Chen, Nanguang
Qiu, Anqi
TI Orthogonal Mixed-Effects Modeling for High-Dimensional Longitudinal
Data: An Unsupervised Learning Approach.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3435855
DT Journal Article
PD 2024-Jul-30
PY 2024
AB The linear mixed-effects model is commonly utilized to interpret
longitudinal data, characterizing both the global longitudinal
trajectory across all observations and longitudinal trajectories within
individuals. However, characterizing these trajectories in
high-dimensional longitudinal data presents a challenge. To address
this, our study proposes a novel approach, Unsupervised Orthogonal
Mixed-Effects Trajectory Modeling (UOMETM), that leverages unsupervised
learning to generate latent representations of both global and
individual trajectories. We design an autoencoder with a latent space
where an orthogonal constraint is imposed to separate the space of
global trajectories from individual trajectories. We also devise a
cross-reconstruction loss to ensure consistency of global trajectories
and enhance the orthogonality between representation spaces. To evaluate
UOMETM, we conducted simulation experiments on images to verify that
every component functions as intended. Furthermore, we evaluated its
performance and robustness using longitudinal brain cortical thickness
from two Alzheimer's disease (AD) datasets. Comparative analyses with
state-of-the-art methods revealed UOMETM's superiority in identifying
global and individual longitudinal patterns, achieving a lower
reconstruction error, superior orthogonality, and higher accuracy in AD
classification and conversion forecasting. Remarkably, we found that the
space of global trajectories did not significantly contribute to AD
classification compared to the space of individual trajectories,
emphasizing their clear separation. Moreover, our model exhibited
satisfactory generalization and robustness across different datasets.
The study shows the outstanding performance and potential clinical use
of UOMETM in the context of longitudinal data analysis.
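One simple way to impose an orthogonality constraint between the two
latent subspaces is to penalize inner products between their codes; the
sketch below shows one plausible choice, not necessarily the paper's
exact loss.

    import numpy as np

    def orthogonality_penalty(Z_global, Z_indiv):
        # Z_global, Z_indiv: (batch, dim) latent codes for global and
        # individual trajectories; squared inner products push the two
        # representation spaces toward orthogonality.
        return np.mean((Z_global @ Z_indiv.T) ** 2)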
RI Bian, Yijun/; Qiu, Anqi/H-2267-2011
OI Bian, Yijun/0000-0002-5926-7100; Qiu, Anqi/0000-0002-0215-6338
Z8 0
ZA 0
ZB 0
TC 0
ZS 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-08-01
UT MEDLINE:39078772
PM 39078772
ER
PT J
AU Liu, Jiaxuan
Li, Haitao
Zeng, Bolun
Wang, Huixiang
Kikinis, Ron
Joskowicz, Leo
Chen, Xiaojun
TI An end-to-end geometry-based pipeline for automatic preoperative
surgical planning of pelvic fracture reduction and fixation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3429403
DT Journal Article
PD 2024-Jul-16
PY 2024
AB Computer-assisted preoperative planning of pelvic fracture reduction
surgery has the potential to increase the accuracy of the surgery and to
reduce complications. However, the diversity of the pelvic fractures and
the disturbance of small fracture fragments present a great challenge to
perform reliable automatic preoperative planning. In this paper, we
present a comprehensive and automatic preoperative planning pipeline for
pelvic fracture surgery. It includes pelvic fracture labeling, reduction
planning of the fracture, and customized screw implantation. First,
automatic bone fracture labeling is performed based on the separation of
the fracture sections. Then, fracture reduction planning is performed
based on automatic extraction and pairing of the fracture surfaces.
Finally, screw implantation is planned using the adjoint fracture
surfaces. The proposed pipeline was tested on different types of pelvic
fracture in 14 clinical cases. Our method achieved a translational and
rotational accuracy of 2.56 mm and 3.31° in reduction planning. For
fixation planning, a clinical acceptance rate of 86.7% was achieved. The
results demonstrate the feasibility of the clinical application of our
method. Our method has shown accuracy and reliability for complex
multi-body bone fractures, which may provide effective clinical
preoperative guidance and may improve the accuracy of pelvic fracture
reduction surgery.
OI Joskowicz, Leo/0000-0002-3010-4770; Zeng, Bolun/0000-0001-6929-7699
ZA 0
ZS 0
ZR 0
TC 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-07-20
UT MEDLINE:39012731
PM 39012731
ER
PT J
AU Karageorgos, Grigorios M
Zhang, Jiayong
Peters, Nils
Xia, Wenjun
Niu, Chuang
Paganetti, Harald
Wang, Ge
De Man, Bruno
TI A denoising diffusion probabilistic model for metal artifact reduction
in CT.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3416398
DT Journal Article
PD 2024-Jul-04
PY 2024
AB The presence of metal objects leads to corrupted CT projection
measurements, resulting in metal artifacts in the reconstructed CT
images. AI promises to offer improved solutions to estimate missing
sinogram data for metal artifact reduction (MAR), as previously shown
with convolutional neural networks (CNNs) and generative adversarial
networks (GANs). Recently, denoising diffusion probabilistic models
(DDPM) have shown great promise in image generation tasks, potentially
outperforming GANs. In this study, a DDPM-based approach is proposed for
inpainting of missing sinogram data for improved MAR. The proposed model
is unconditionally trained, free from information on metal objects,
which can potentially enhance its generalization capabilities across
different types of metal implants compared to conditionally trained
approaches. The performance of the proposed technique was evaluated and
compared to the state-of-the-art normalized MAR (NMAR) approach as well
as to CNN-based and GAN-based MAR approaches. The DDPM-based approach
provided significantly higher SSIM and PSNR, as compared to NMAR (SSIM:
p < 10^-26; PSNR: p < 10^-21), the CNN (SSIM: p < 10^-25; PSNR: p <
10^-9), and the GAN (SSIM: p < 10^-6; PSNR: p < 0.05) methods. The DDPM-MAR
technique was further evaluated based on clinically relevant image
quality metrics on clinical CT images with virtually introduced metal
objects and metal artifacts, demonstrating superior quality relative to
the other three models. In general, the AI-based techniques showed
improved MAR performance compared to the non-AI-based NMAR approach. The
proposed methodology shows promise in enhancing the effectiveness of
MAR, and therefore improving the diagnostic accuracy of CT.
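A RePaint-style inpainting loop is one plausible mechanism for this
kind of unconditional-DDPM completion; the sketch below assumes a model
exposing a reverse_step method and a precomputed 1-D tensor of
cumulative noise-schedule products, both of which are hypothetical.

    import torch

    def ddpm_inpaint_sinogram(model, sino, known_mask, alphas_bar, T):
        # sino: measured sinogram with metal-corrupted bins to inpaint;
        # known_mask: 1 where measurements are trusted, 0 in the metal
        # trace; alphas_bar: 1-D tensor of cumulative alpha products.
        x = torch.randn_like(sino)
        for t in reversed(range(T)):
            ab = alphas_bar[t]
            # Re-noise the trusted bins to the current diffusion level,
            known = ab.sqrt() * sino + (1 - ab).sqrt() * torch.randn_like(sino)
            # keep them, and let the model denoise the unknown bins.
            x = torch.where(known_mask.bool(), known, x)
            x = model.reverse_step(x, t)   # hypothetical denoising step
        return torch.where(known_mask.bool(), sino, x)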
OI Wang, Ge/0000-0002-2656-7705; Karageorgos, Grigorios
Marios/0000-0002-0528-7798; Niu, Chuang/0000-0002-3310-7803
TC 0
ZS 0
ZR 0
ZA 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-07-06
UT MEDLINE:38963746
PM 38963746
ER
PT J
AU Zhou, Jie
Jie, Biao
Wang, Zhengdong
Zhang, Zhixiang
Du, Tongchun
Bian, Weixin
Yang, Yang
Jia, Jun
TI LCGNet: Local Sequential Feature Coupling Global Representation Learning
for Functional Connectivity Network Analysis with fMRI.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3421360
DT Journal Article
PD 2024-Jul-01
PY 2024
AB Analysis of functional connectivity networks (FCNs) derived from
resting-state functional magnetic resonance imaging (rs-fMRI) has
greatly advanced our understanding of brain diseases, including
Alzheimer's disease (AD) and attention deficit hyperactivity disorder
(ADHD). Advanced machine learning techniques, such as convolutional
neural networks (CNNs), have been used to learn high-level feature
representations of FCNs for automated brain disease classification. Even
though convolution operations in CNNs are good at extracting local
properties of FCNs, they generally cannot capture global temporal
representations of FCNs well. Recently, the transformer technique has
demonstrated remarkable performance in various tasks, which is
attributed to its effective self-attention mechanism in capturing the
global temporal feature representations. However, it cannot effectively
model the local network characteristics of FCNs. To this end, in this
paper, we propose a novel network structure for Local sequential feature
Coupling Global representation learning (LCGNet) to take advantage of
convolutional operations and self-attention mechanisms for enhanced FCN
representation learning. Specifically, we first build a dynamic FCN for
each subject using an overlapped sliding window approach. We then
construct three sequential components (i.e., edge-to-vertex layer,
vertex-to-network layer, and network-to-temporality layer) with a dual
backbone branch of CNN and transformer to extract and couple from local
to global topological information of brain networks. Experimental
results on two real datasets (i.e., ADNI and ADHD-200) with rs-fMRI data
show the superiority of our LCGNet.
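The overlapped sliding-window construction of a dynamic FCN is standard
and can be sketched in a few lines; the window and step sizes below are
arbitrary examples, not the paper's settings.

    import numpy as np

    def dynamic_fcn(ts, win=30, step=10):
        # ts: (T, R) rs-fMRI time series for R regions of interest.
        # Returns (n_windows, R, R) correlation matrices, one per
        # overlapped window, forming the dynamic FCN sequence.
        starts = range(0, ts.shape[0] - win + 1, step)
        return np.stack([np.corrcoef(ts[s:s + win].T) for s in starts])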
OI Zhou, Jie/0009-0009-3032-0105; Wang, Zhengdong/0000-0002-2509-597X;
Bian, Weixin/0000-0003-2341-5359
Z8 0
ZB 0
ZA 0
ZS 0
TC 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-07-04
UT MEDLINE:38949932
PM 38949932
ER
PT J
AU Chabouh, Georges
Denis, Louise
Bodard, Sylvain
Lager, Franck
Renault, Gilles
Chavignon, Arthur
Couture, Olivier
TI Whole organ volumetric sensing Ultrasound Localization Microscopy for
characterization of kidney structure.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3411669
DT Journal Article
PD 2024-Jun-10
PY 2024
AB Glomeruli are the filtration units of the kidney and their function
relies heavily on their microcirculation. Despite its obvious diagnostic
importance, an accurate estimation of blood flow in the capillary bundle
within glomeruli defies the resolution of conventional imaging
modalities. Ultrasound Localization Microscopy (ULM) has demonstrated
its ability to image deep organs in the body in vivo. Recently, the
concept of sensing ULM or sULM was introduced to classify individual
microbubble behavior based on the expected physiological conditions at
the micrometric scale. In the kidney of both rats and humans, it
revealed glomerular structures in 2D but was severely limited by planar
projection. In this work, we aim to extend sULM to 3D to image the whole
organ and perform an accurate characterization of the entire kidney
structure. The extension of sULM into the 3D domain allows better
localization and more robust tracking. The 3D metrics of velocity and
pathway angular shift made a glomerular mask possible. This approach
facilitated the quantification of glomerular physiological parameters,
such as an interior traveled distance of approximately 7.5 ± 0.6 microns
within the glomerulus. This study introduces a technique that
characterizes kidney physiology and can serve as a method to facilitate
pathology assessment. Furthermore, its potential clinical relevance
could serve as a bridge between research and practical application,
leading to innovative diagnostics and improved patient care.
OI Chabouh, Georges/0000-0003-0760-909X; Chavignon,
Arthur/0000-0001-7883-7482; Denis, Louise/0000-0001-6395-7377
ZR 0
TC 0
ZS 0
ZB 0
ZA 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-06-12
UT MEDLINE:38857150
PM 38857150
ER
PT J
AU Yang, Bao
Gong, Kuang
Liu, Huafeng
Li, Quanzheng
Zhu, Wentao
TI Anatomically Guided PET Image Reconstruction Using Conditional
Weakly-Supervised Multi-Task Learning Integrating Self-Attention
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2098
EP 2112
DI 10.1109/TMI.2024.3356189
DT Article
PD JUN 2024
PY 2024
AB To address the lack of high-quality training labels in positron emission
tomography (PET) imaging, weakly-supervised reconstruction methods that
generate network-based mappings between prior images and noisy targets
have been developed. However, the learned model has an intrinsic
variance proportional to the average variance of the target image. To
suppress noise and improve the accuracy and generalizability of the
learned model, we propose a conditional weakly-supervised multi-task
learning (MTL) strategy, in which an auxiliary task is introduced
serving as an anatomical regularizer for the PET reconstruction main
task. In the proposed MTL approach, we devise a novel multi-channel
self-attention (MCSA) module that helps learn an optimal combination of
shared and task-specific features by capturing both local and global
channel-spatial dependencies. The proposed reconstruction method was
evaluated on NEMA phantom PET datasets acquired at different positions
in a PET/CT scanner and 26 clinical whole-body PET datasets. The phantom
results demonstrate that our method outperforms state-of-the-art
learning-free and weakly-supervised approaches, obtaining the best
noise/contrast tradeoff with a significant noise reduction of
approximately 50.0% relative to the maximum likelihood (ML)
reconstruction. The patient study results demonstrate that our method
achieves the largest noise reductions of 67.3% and 35.5% in the liver
and lung, respectively, as well as consistently small biases in 8 tumors
with various volumes and intensities. In addition, network visualization
reveals that adding the auxiliary task introduces more anatomical
information into PET reconstruction than adding only the anatomical
loss, and the developed MCSA can abstract features and retain PET image
details.
RI Gong, Kuang/W-6203-2019; Li, Quanzheng/; Zhu, Wentao/
OI Li, Quanzheng/0000-0002-9651-5820; Zhu, Wentao/0000-0001-9290-1778
ZR 0
Z8 0
ZA 0
ZB 0
TC 1
ZS 0
Z9 1
C1 Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
C1 Gordon Ctr Med Imaging, Dept Radiol, Ctr Adv Med Comp & Anal, Boston, MA
02115 USA
C1 Harvard Med Sch, Massachusetts Gen Hosp, Boston, MA 02115 USA
C1 Zhejiang Univ, Coll Opt Sci & Engn, Hangzhou 310027, Peoples R China
C1 Zhejiang Lab, Hangzhou 311121, Peoples R China
C3 Gordon Ctr Med Imaging
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100001
PM 38241121
ER
PT J
AU Payen, Thomas
Crouzet, Sebastien
Guillen, Nicolas
Chen, Yao
Chapelon, Jean-Yves
Lafon, Cyril
Catheline, Stefan
TI Passive Elastography for Clinical HIFU Lesion Detection
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1594
EP 1604
DI 10.1109/TMI.2023.3344182
DT Article
PD APR 2024
PY 2024
AB High-intensity Focused Ultrasound (HIFU) is a promising treatment
modality for a wide range of pathologies including prostate cancer.
However, the lack of a reliable ultrasound-based monitoring technique
limits its clinical use. Ultrasound currently provides real-time HIFU
planning, but its use for monitoring is usually limited to detecting the
backscatter increase resulting from chaotic bubble appearance. HIFU has
been shown to generate stiffening in various tissues, so elastography is
an interesting lead for ablation monitoring. However, the standard
techniques usually require the generation of a controlled push which can
be problematic in deeper organs. Passive elastography offers a potential
alternative as it uses the physiological wave field to estimate the
elasticity in tissues and not an external perturbation. This technique
was adapted to process B-mode images acquired with a clinical system. It
was first shown to faithfully assess elasticity in calibrated phantoms.
The technique was then implemented on the Focal One (R) clinical system
to evaluate its capacity to detect HIFU lesions, first in vitro (CNR =
9.2 dB), where it proved independent of the bubbles resulting from HIFU,
and then in vivo, where the physiological wave field was successfully
used to detect and delineate lesions of different sizes in porcine
liver. Finally, the technique was performed for the very first time in
four prostate cancer patients, showing strong variation in elasticity
before and after HIFU treatment (average variation of 33.0 +/- 16.0%).
Passive
elastography has shown evidence of its potential to monitor HIFU
treatment and thus help spread its use.
RI Chapelon, Jean-Yves/; Lafon, Cyril/E-4152-2016; Payen, Thomas/
OI Chapelon, Jean-Yves/0000-0003-0642-1132; Lafon,
Cyril/0000-0003-1550-970X; Payen, Thomas/0000-0003-3432-4436
ZR 0
ZA 0
ZS 0
ZB 1
TC 1
Z8 0
Z9 1
C1 Univ Lyon 1, Ctr Leon Berard, LabTAU, INSERM, F-69003 Lyon, France
C1 Univ Lyon, Ctr Leon Berard, LabTAU, Inserm, F-69003 Lyon, France
C1 Hop Edouard Herriot, Dept Urol, Hosp Civils Lyon, F-69002 Lyon, France
C1 EDAP TMS, F-69120 Vaulx En Velin, France
C1 Univ Lyon, Inst Chirurg Expt, Ctr Leon Berard, F-69003 Lyon, France
C3 EDAP TMS
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400001
PM 38109239
ER
PT J
AU Moazami, Saeed
Ray, Deep
Pelletier, Daniel
Oberai, Assad A.
TI Probabilistic Brain Extraction in MR Images via Conditional Generative
Adversarial Networks
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1071
EP 1088
DI 10.1109/TMI.2023.3327942
DT Article
PD MAR 2024
PY 2024
AB Brain extraction, or the task of segmenting the brain in MR images,
forms an essential step for many neuroimaging applications. These
include quantifying brain tissue volumes, monitoring neurological
diseases, and estimating brain atrophy. Several algorithms have been
proposed for brain extraction, including image-to-image deep learning
methods that have demonstrated significant gains in accuracy. However,
none of them account for the inherent uncertainty in brain extraction.
Motivated by this, we propose a novel, probabilistic deep learning
algorithm for brain extraction that recasts this task as a Bayesian
inference problem and utilizes a conditional generative adversarial
network (cGAN) to solve it. The input to the cGAN's generator is an MR
image of the head, and the output is a collection of likely brain images
drawn from a probability density conditioned on the input. These images
are used to generate a pixel-wise mean image, serving as the estimate
for the extracted brain, and a standard deviation image, which
quantifies the uncertainty in the prediction. We test our algorithm on
head MR images from five datasets: NFBS, CC359, LPBA, IBSR, and their
combination. Our datasets are heterogeneous regarding multiple factors,
including subjects (with and without symptoms), magnetic field
strengths, and manufacturers. Our experiments demonstrate that the
proposed approach is more accurate and robust than a widely used brain
extraction tool and at least as accurate as the other deep learning
methods. They also highlight the utility of quantifying uncertainty in
downstream applications.
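The pixel-wise mean and standard-deviation images described above
follow directly from repeated sampling of the generator; a minimal
PyTorch-style sketch, in which generator, its latent_dim attribute,
and the calling convention are assumed names rather than the authors'
code:

import torch

@torch.no_grad()
def probabilistic_brain_extraction(generator, mr_image, n_samples=20):
    # mr_image: (1, 1, H, W) head MR slice; the generator is assumed to
    # map (image, z) to one likely brain image from the conditional
    # density.
    samples = []
    for _ in range(n_samples):
        z = torch.randn(1, generator.latent_dim)  # assumed latent interface
        samples.append(generator(mr_image, z))
    stack = torch.cat(samples, dim=0)             # (n_samples, 1, H, W)
    return stack.mean(dim=0), stack.std(dim=0)    # estimate, uncertainty map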
RI Ray, Deep/HNQ-6738-2023; Oberai, Assad/; Ray, Deep/
OI Oberai, Assad/0000-0002-7668-7713; Ray, Deep/0000-0002-8460-9862
ZA 0
Z8 0
ZR 0
ZS 0
TC 0
ZB 0
Z9 0
C1 Univ Southern Calif, Viterbi Sch Engn, Dept Aerosp & Mech Engn, Los
Angeles, CA 90007 USA
C1 Univ Southern Calif, Keck Sch Med, Dept Neurol, Los Angeles, CA 90033
USA
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600001
PM 37883281
ER
PT J
AU Wang, Yijun
Lang, Rui
Li, Rui
Zhang, Junsong
TI NRTR: Neuron Reconstruction With Transformer From 3D Optical Microscopy
Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 886
EP 898
DI 10.1109/TMI.2023.3323466
DT Article
PD FEB 2024
PY 2024
AB The neuron reconstruction from raw Optical Microscopy (OM) image stacks
is the basis of neuroscience. Manual annotation and semi-automatic
neuron tracing algorithms are time-consuming and inefficient. Existing
deep learning neuron reconstruction methods, although demonstrating
exemplary performance, depend heavily on complex rule-based components.
Therefore, a crucial challenge is designing an end-to-end neuron
reconstruction method that makes the overall framework simpler and model
training easier. We propose a Neuron Reconstruction Transformer (NRTR)
that, discarding the complex rule-based components, views neuron
reconstruction as a direct set-prediction problem. To the best of our
knowledge, NRTR is the first image-to-set deep learning model for
end-to-end neuron reconstruction. The overall pipeline consists of the
CNN backbone, Transformer encoder-decoder, and connectivity construction
module. NRTR generates a point set representing neuron morphological
characteristics for raw neuron images. The relationships among the
points are established through connectivity construction. The point set
is saved as a standard SWC file. In experiments using the BigNeuron and
VISoR-40 datasets, NRTR achieves excellent neuron reconstruction results
for comprehensive benchmarks and outperforms competitive baselines.
Results of extensive experiments indicate that neuron reconstruction
can be effectively cast as a set-prediction problem, which makes
end-to-end model training possible.
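For reference, the standard SWC format mentioned above is a plain-text
list of 'id type x y z radius parent' rows; a minimal writer for a
point set with parent links could look as follows (the structure-type
code 2 and the default radius are placeholder assumptions):

def write_swc(path, points, parents, radii=None):
    # points: (N, 3) xyz coordinates; parents: (N,) 0-based parent
    # indices, -1 for a root. SWC ids and parent references are 1-based.
    with open(path, "w") as f:
        for i, ((x, y, z), p) in enumerate(zip(points, parents), start=1):
            r = 1.0 if radii is None else radii[i - 1]
            f.write(f"{i} 2 {x:.3f} {y:.3f} {z:.3f} {r:.3f} "
                    f"{-1 if p < 0 else p + 1}\n")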
RI Zhang, Junsong/HKW-6976-2023; Lang, Rui/
OI Lang, Rui/0009-0002-3444-0461
TC 1
ZR 0
ZS 0
ZB 0
ZA 0
Z8 0
Z9 1
C1 Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
C1 Cent China Normal Univ, Natl Engn Lab Educ Big Data, Wuhan 430079,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400004
PM 37847618
ER
PT J
AU de Vente, Coen
Vermeer, Koenraad A.
Jaccard, Nicolas
Wang, He
Sun, Hongyi
Khader, Firas
Truhn, Daniel
Aimyshev, Temirgali
Zhanibekuly, Yerkebulan
Le, Tien-Dung
Galdran, Adrian
Ballester, Miguel Angel Gonzalez
Carneiro, Gustavo
Devika, R. G.
Sethumadhavan, Hrishikesh Panikkasseril
Puthussery, Densen
Liu, Hong
Yang, Zekang
Kondo, Satoshi
Kasai, Satoshi
Wang, Edward
Durvasula, Ashritha
Heras, Jonathan
Zapata, Miguel Angel
Araujo, Teresa
Aresta, Guilherme
Bogunovic, Hrvoje
Arikan, Mustafa
Lee, Yeong Chan
Cho, Hyun Bin
Choi, Yoon Ho
Qayyum, Abdul
Razzak, Imran
van Ginneken, Bram
Lemij, Hans G.
Sanchez, Clara I.
TI AIROGS: Artificial Intelligence for Robust Glaucoma Screening Challenge
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 542
EP 557
DI 10.1109/TMI.2023.3313786
DT Article
PD JAN 2024
PY 2024
AB The early detection of glaucoma is essential in preventing visual
impairment. Artificial intelligence (AI) can be used to analyze color
fundus photographs (CFPs) in a cost-effective manner, making glaucoma
screening more accessible. While AI models for glaucoma screening from
CFPs have shown promising results in laboratory settings, their
performance decreases significantly in real-world scenarios due to the
presence of out-of-distribution and low-quality images. To address this
issue, we propose the Artificial Intelligence for Robust Glaucoma
Screening (AIROGS) challenge. This challenge includes a large dataset of
around 113,000 images from about 60,000 patients and 500 different
screening centers, and encourages the development of algorithms that are
robust to ungradable and unexpected input data. We evaluated solutions
from 14 teams in this paper and found that the best teams performed
similarly to a set of 20 expert ophthalmologists and optometrists. The
highest-scoring team achieved an area under the receiver operating
characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting
ungradable images on-the-fly. Additionally, many of the algorithms
showed robust performance when tested on three other publicly available
datasets. These results demonstrate the feasibility of robust AI-enabled
glaucoma screening.
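The headline metric above, an AUC with a 95% CI, can be reproduced in
spirit with a bootstrap over cases; the challenge's exact CI procedure
is not stated in the abstract, so the sketch below is only one
reasonable choice:

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    rng = np.random.default_rng(seed)
    boots = []
    while len(boots) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue                  # a resample must hold both classes
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)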
RI de Vente, Coen/HLW-7902-2023; Truhn, Daniel/AAG-9359-2021; Choi, Yoon Ho/JXM-6303-2024; Araújo, Teresa/F-5629-2016; Wang, Edward/HDO-0661-2022; Razzak, Imran/AEW-5139-2022; Gutierrez, Clara/N-3580-2014; Bogunovic, Hrvoje/J-3445-2014; Qayyum, Abdul/LDF-9031-2024; Carneiro, Gustavo/; , Temirgali/; Sun, Hongyi/; de Vente, Coen/
OI Carneiro, Gustavo/0000-0002-5571-6220; , Temirgali/0000-0002-1060-4882;
Sun, Hongyi/0000-0002-7907-3557; de Vente, Coen/0000-0001-5908-8367
TC 2
ZR 0
Z8 0
ZS 0
ZA 0
ZB 0
Z9 2
C1 Univ Amsterdam, Informat Inst, Quantitat Healthcare Anal QurAI Grp,
NL-1098 XH Amsterdam, Netherlands
C1 Amsterdam UMC Locatie AMC, Dept Biomed Engn & Phys, NL-1105 AZ
Amsterdam, Noord Holland, Netherlands
C1 Radboudumc, Dept Radiol & Nucl Med, Diagnost Image Anal Grp DIAG,
NL-6500 HB Nijmegen, Gelderland, Netherlands
C1 Rotterdam Eye Hosp, Rotterdam Ophthalm Inst, NL-3011 BH Rotterdam,
Netherlands
C1 Project Orbis Int Inc, New York, NY 10017 USA
C1 Peking Union Med Coll Hosp, Beijing 100730, Peoples R China
C1 Capital Med Univ, Xuanwu Hosp, Beijing 100053, Peoples R China
C1 Tsinghua Univ, Dept Automat, Beijing 100190, Peoples R China
C1 Univ Hosp Aachen, Dept Diagnost & Intervent Radiol, D-52074 Aachen,
Germany
C1 CMC Technol LLP, Z05T0B8, Nur Sultan, Kazakhstan
C1 KBC, B-1080 Brussels, Belgium
C1 Univ Pompeu Fabra, Dept Tecnol Informacio & Comunicac DTIC, Barcelona
08018, Spain
C1 Univ Adelaide, Australian Inst Machine Learning AIML, Adelaide, SA 5000,
Australia
C1 ICREA, Barcelona 08010, Spain
C1 Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, England
C1 Coll Engn Trivandrum, Thiruvananthapuram 695016, India
C1 Founding Minds Software, Thiruvananthapuram 682030, India
C1 Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
C1 Muroran Inst Technol, Muroran 0508585, Japan
C1 Niigata Univ Hlth & Welf, Niigata 9503102, Japan
C1 Univ Western Ontario, Schulich Sch Med & Dent, London, ON N6A 5C1,
Canada
C1 Univ La Rioja, Dept Math & Comp Sci, Logrono 26004, Spain
C1 Hosp Valle De Hebron, Sant Cugat Del Valles 08195, Spain
C1 UPRetina, Barcelona 08195, Spain
C1 Med Univ Vienna, Dept Ophthalmol & Optometry, Christian Doppler Lab
Artificial Intelligence Reti, A-1090 Vienna, Austria
C1 UCL, Inst Ophthalmol, London EC1V 9EL, England
C1 Samsung Med Ctr, Res Inst Future Med, Seoul 06351, South Korea
C1 Sungkyunkwan Univ, Samsung Med Ctr, Samsung Adv Inst Hlth Sci & Technol
SAIHST, Dept Digital Hlth, Seoul 06351, South Korea
C1 Mayo Clin, Dept Artificial Intelligence & Informat, Jacksonville, FL
32224 USA
C1 Kings Coll London, Dept Biomed Engn, London, England
C1 Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 3125, Australia
C3 Amsterdam UMC Locatie AMC
C3 Project Orbis Int Inc
C3 CMC Technol LLP
C3 KBC
C3 Founding Minds Software
C3 UPRetina
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600018
PM 37713220
ER
PT J
AU Gungor, Alper
Askin, Baris
Soydan, Damla Alptekin
Top, Can Baris
Saritas, Emine Ulku
Cukur, Tolga
TI DEQ-MPI: A Deep Equilibrium Reconstruction With Learned Consistency for
Magnetic Particle Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 321
EP 334
DI 10.1109/TMI.2023.3300704
DT Article
PD JAN 2024
PY 2024
AB Magnetic particle imaging (MPI) offers unparalleled contrast and
resolution for tracing magnetic nanoparticles. A common imaging
procedure calibrates a system matrix (SM) that is used to reconstruct
data from subsequent scans. The ill-posed reconstruction problem can be
solved by simultaneously enforcing data consistency based on the SM and
regularizing the solution based on an image prior. Traditional
hand-crafted priors cannot capture the complex attributes of MPI images,
whereas recent MPI methods based on learned priors can suffer from
extensive inference times or limited generalization performance. Here,
we introduce a novel physics-driven method for MPI reconstruction based
on a deep equilibrium model with learned data consistency (DEQ-MPI).
DEQ-MPI reconstructs images by augmenting neural networks into an
iterative optimization, as inspired by unrolling methods in deep
learning. Yet, conventional unrolling methods are computationally
restricted to few iterations resulting in non-convergent solutions, and
they use hand-crafted consistency measures that can yield suboptimal
capture of the data distribution. DEQ-MPI instead trains an implicit
mapping to maximize the quality of a convergent solution, and it
incorporates a learned consistency measure to better account for the
data distribution. Demonstrations on simulated and experimental data
indicate that DEQ-MPI achieves superior image quality and competitive
inference time to state-of-the-art MPI reconstruction methods.
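At inference, a deep equilibrium reconstruction amounts to driving a
learned update operator to a fixed point; a minimal sketch under that
reading, where f (wrapping data consistency and the network) is an
assumed name and training-time implicit differentiation is not shown:

import torch

def fixed_point_solve(f, x0, max_iter=100, tol=1e-4):
    # Iterate x <- f(x) until the relative update is small. DEQ training
    # differentiates through the equilibrium implicitly, not through
    # these unrolled iterations.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if torch.norm(x_next - x) <= tol * (torch.norm(x) + 1e-8):
            return x_next
        x = x_next
    return x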
RI Saritas, Emine/A-3730-2013; Top, Can Baris/AAJ-6092-2021; Askin, Baris/KHX-0159-2024; Alptekin Soydan, Damla/JEG-0266-2023; Cukur, Tolga/Z-5452-2019; Gungor, Alper/; ALPTEKIN SOYDAN, Damla/
OI Cukur, Tolga/0000-0002-2296-851X; Gungor, Alper/0000-0002-3043-9124;
ALPTEKIN SOYDAN, Damla/0000-0003-4130-1259
ZR 0
ZB 4
Z8 0
ZA 0
ZS 0
TC 12
Z9 12
C1 Bilkent Univ, Dept Elect & Elect Engn, TR-06800 Ankara, Turkiye
C1 Bilkent Univ, Natl Magnet Resonance Res Ctr, TR-06800 Ankara, Turkiye
C1 Aselsan Res Ctr, Ankara, Turkiye
C1 Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600001
PM 37527298
ER
PT J
AU Hahne, Christopher
Chabouh, Georges
Chavignon, Arthur
Couture, Olivier
Sznitman, Raphael
TI RF-ULM: Ultrasound Localization Microscopy Learned From Radio-Frequency
Wavefronts
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3253
EP 3262
DI 10.1109/TMI.2024.3391297
DT Article
PD SEP 2024
PY 2024
AB In Ultrasound Localization Microscopy (ULM), achieving high-resolution
images relies on the precise localization of contrast agent particles
across a series of beamformed frames. However, our study uncovers an
enormous potential: The process of delay-and-sum beamforming leads to an
irreversible reduction of Radio-Frequency (RF) channel data, while its
implications for localization remain largely unexplored. The rich
contextual information embedded within RF wavefronts, including their
hyperbolic shape and phase, offers great promise for guiding Deep Neural
Networks (DNNs) in challenging localization scenarios. To fully exploit
this data, we propose to directly localize scatterers in RF channel
data. Our approach involves a custom super-resolution DNN using learned
feature channel shuffling, non-maximum suppression, and a semi-global
convolutional block for reliable and accurate wavefront localization.
Additionally, we introduce a geometric point transformation that
facilitates seamless mapping to the B-mode coordinate space. To
understand the impact of beamforming on ULM, we validate the
effectiveness of our method by conducting an extensive comparison with
State-Of-The-Art (SOTA) techniques. We present the inaugural in vivo
results from a wavefront-localizing DNN, highlighting its real-world
practicality. Our findings show that RF-ULM bridges the domain shift
between synthetic and real datasets, offering a considerable advantage
in terms of precision and complexity from which the broader research
community can benefit.
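Of the ingredients named above, non-maximum suppression over a
localization heatmap is commonly implemented with a max-pooling trick;
a PyTorch sketch under that assumption (kernel size and threshold are
illustrative):

import torch
import torch.nn.functional as F

def heatmap_nms(heat, kernel=3, thresh=0.5):
    # heat: (1, 1, H, W). Keep pixels that are above threshold and equal
    # to the maximum of their local window, i.e. local peaks.
    local_max = F.max_pool2d(heat, kernel, stride=1, padding=kernel // 2)
    peaks = (heat == local_max) & (heat > thresh)
    ys, xs = torch.nonzero(peaks[0, 0], as_tuple=True)
    return torch.stack([xs, ys], dim=1)   # (N, 2) peak coordinates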
RI Chabouh, Georges/JMQ-7104-2023; Chavignon, Arthur/; Sznitman, Raphael/; Chabouh, Georges/
OI Chavignon, Arthur/0000-0001-7883-7482; Sznitman,
Raphael/0000-0001-6791-4753; Chabouh, Georges/0000-0003-0760-909X
Z8 0
ZR 0
ZS 0
ZB 0
ZA 0
TC 0
Z9 0
C1 Univ Bern, ARTORG Ctr, Artificial Intelligence Med Imaging Lab, CH-3008
Bern, Switzerland
C1 Sorbonne Univ, Lab Imagerie Biomed, INSERM, CNRS, F-75006 Paris, France
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600011
PM 38640052
ER
PT J
AU Zhu, Tao
Yin, Lin
He, Jie
Wei, Zechen
Yang, Xin
Tian, Jie
Hui, Hui
TI Accurate Concentration Recovery for Quantitative Magnetic Particle
Imaging Reconstruction via Nonconvex Regularization
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2949
EP 2959
DI 10.1109/TMI.2024.3383468
DT Article
PD AUG 2024
PY 2024
AB Magnetic particle imaging (MPI) uses nonlinear response signals to
noninvasively detect magnetic nanoparticles in space, and its
quantitative properties hold promise for future precise quantitative
treatments. In reconstruction, the system matrix based method
necessitates suitable regularization terms, such as Tikhonov or
non-negative fused lasso (NFL) regularization, to stabilize the
solution. While NFL regularization offers clearer edge information than
Tikhonov regularization, it carries a biased estimate of the $\ell_1$
penalty, leading to an underestimation of the reconstructed
concentration and adversely affecting the quantitative
properties. In this paper, a new nonconvex regularization method
combining minimax concave (MC) and total variation (TV) regularization
is proposed. This method utilizes the MC penalty to provide nearly
unbiased sparse constraints and adds the TV penalty to promote a
uniform intensity distribution in the images. By combining the
alternating direction method of multipliers (ADMM) with a two-step
parameter selection method, a more accurate quantitative MPI
reconstruction was realized.
The performance of the proposed method was verified on the simulation
data, the Open-MPI dataset, and measured data from a homemade MPI
scanner. The results indicate that the proposed method achieves better
image quality while maintaining the quantitative properties, thus
overcoming the drawback of intensity underestimation by the NFL method
while providing edge information. In particular, for the measured data,
the proposed method reduced the relative error in the intensity of the
reconstruction results from 28% to 8%.
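The near-unbiasedness of the MC penalty claimed above is visible in
its proximal operator (firm thresholding), which, unlike the soft
thresholding behind the $\ell_1$ penalty, leaves large coefficients
untouched; a sketch with the step size absorbed into lam and gamma > 1:

import numpy as np

def mcp_prox(x, lam, gamma):
    # Proximal operator of the minimax concave penalty: coefficients
    # with |x| <= lam are zeroed, |x| >= gamma*lam pass through
    # unchanged (no systematic underestimation), and the middle range
    # is shrunk linearly.
    out = np.zeros_like(x)
    mag = np.abs(x)
    mid = (mag > lam) & (mag < gamma * lam)
    big = mag >= gamma * lam
    out[mid] = np.sign(x[mid]) * (mag[mid] - lam) * gamma / (gamma - 1.0)
    out[big] = x[big]
    return out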
RI Hui, Hui/U-9994-2019; zhu, tao/KHY-3114-2024; Tao, Zhu/; Wei, Zechen/; Yin, Lin/; Hui, Hui/
OI Tao, Zhu/0000-0002-5852-091X; Wei, Zechen/0009-0002-6008-734X; Yin,
Lin/0000-0002-4325-345X; Hui, Hui/0000-0002-6732-4232
ZR 0
TC 3
ZB 2
Z8 0
ZS 0
ZA 0
Z9 3
C1 Chinese Acad Sci, Inst Automation, CAS Key Lab Mol Imaging, Beijing Key
Lab Mol Imaging, Beijing 100190, Peoples R China
C1 Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190,
Peoples R China
C1 Beihang Univ, Sch Med & Engn, Minist Ind & Informat Technol, Key Lab Big
Data Based Precis Med, Beijing 100190, Peoples R China
C1 Beihang Univ, Sch Biol Sci & Med Engn, Beijing 100190, Peoples R China
C1 Beihang Univ, Inst Automation, CAS Key Lab Mol Imaging,Sch Engn Med,
Beijing Key Lab Mol Imaging,Minist Industryand Inf, Beijing 100190,
Peoples R China
C1 Natl Key Lab Kidney Dis, Beijing 100853, Peoples R China
C3 Natl Key Lab Kidney Dis
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200002
PM 38557624
ER
PT J
AU Chen, Kecheng
Qin, Tiexin
Lee, Victor Ho-Fun
Yan, Hong
Li, Haoliang
TI Learning Robust Shape Regularization for Generalizable Medical Image
Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2693
EP 2706
DI 10.1109/TMI.2024.3371987
DT Article
PD JUL 2024
PY 2024
AB Generalizable medical image segmentation enables models to generalize to
unseen target domains under domain shift issues. Recent progress
demonstrates that the shape of the segmentation objective, with its high
consistency and robustness across domains, can serve as a reliable
regularization to aid the model for better cross-domain performance,
where existing methods typically seek a shared framework to render
segmentation maps and the shape prior concurrently. However, due to the
inherent texture and style preference of modern deep neural networks,
the edge or silhouette of the extracted shape will inevitably be
undermined by the domain-specific texture and style interference of
medical images under domain shift. To address this limitation, we
devise a novel framework with a separation between the shape
regularization and the segmentation map. Specifically, we first
customize a novel whitening transform-based probabilistic shape
regularization extractor namely WT-PSE to suppress undesirable
domain-specific texture and style interferences, leading to more robust
and high-quality shape representations. Second, we deliver a Wasserstein
distance-guided knowledge distillation scheme to help the WT-PSE to
achieve more flexible shape extraction during the inference phase.
Finally, by incorporating domain knowledge of medical images, we propose
a novel instance-domain whitening transform method to facilitate a more
stable training process with improved performance. Experiments
demonstrate the performance of our proposed method on both multi-domain
and single-domain generalization.
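The abstract does not spell out the whitening transform inside WT-PSE;
for orientation, a per-instance ZCA whitening of feature maps, which
removes channel covariance (commonly associated with style and
texture) while keeping spatial structure, can be sketched as follows:

import torch

def instance_zca_whiten(feat, eps=1e-5):
    # feat: (B, C, H, W). Whiten each instance's C x C channel covariance.
    B, C, H, W = feat.shape
    x = feat.reshape(B, C, H * W)
    x = x - x.mean(dim=2, keepdim=True)
    cov = x @ x.transpose(1, 2) / (H * W - 1)
    cov = cov + eps * torch.eye(C, device=feat.device)
    e, V = torch.linalg.eigh(cov)                 # cov = V diag(e) V^T
    whitener = V @ torch.diag_embed(e.rsqrt()) @ V.transpose(1, 2)
    return (whitener @ x).reshape(B, C, H, W)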
RI Lee, Victor/I-3554-2015; zhang, chen/JES-0371-2023; Chen, Kecheng/; Li, Haoliang/; YAN, Hong/; Lee, Victor/; Qin, Tiexin/
OI Chen, Kecheng/0000-0001-6657-3221; Li, Haoliang/0000-0002-8723-8112;
YAN, Hong/0000-0001-9661-3095; Lee, Victor/0000-0002-6283-978X; Qin,
Tiexin/0000-0003-2968-7320
ZB 0
ZS 0
TC 0
Z8 0
ZR 0
ZA 0
Z9 0
C1 City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
C1 City Univ Hong Kong, Ctr Intelligent Multidimens Data Anal, Hong Kong,
Peoples R China
C1 Univ Hong Kong, LKS Fac Med, Sch Clin Med, Dept Clin Oncol, Hong Kong,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100003
PM 38427546
ER
PT J
AU Huang, Zixun
Zhao, Rui
Leung, Frank H. F.
Banerjee, Sunetra
Lam, Kin-Man
Zheng, Yong-Ping
Ling, Sai Ho
TI Landmark Localization From Medical Images With Generative Distribution
Prior
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2679
EP 2692
DI 10.1109/TMI.2024.3371948
DT Article
PD JUL 2024
PY 2024
AB In medical image analysis, anatomical landmarks usually contain strong
prior knowledge of their structural information. In this paper, we
propose to promote medical landmark localization by modeling the
underlying landmark distribution via normalizing flows. Specifically, we
introduce the flow-based landmark distribution prior as a learnable
objective function into a regression-based landmark localization
framework. Moreover, we employ an integral operation to make the mapping
from heatmaps to coordinates differentiable to further enhance
heatmap-based localization with the learned distribution prior. Our
proposed Normalizing Flow-based Distribution Prior (NFDP) employs a
straightforward backbone and non-problem-tailored architecture (i.e.,
ResNet18), which delivers high-fidelity outputs across three X-ray-based
landmark localization datasets. Remarkably, the proposed NFDP achieves
this with minimal additional computational burden, as the normalizing
flows module is detached from the framework during inference. As compared
to existing techniques, our proposed NFDP provides a superior balance
between prediction accuracy and inference speed, making it a highly
efficient and effective approach. The source code of this paper is
available at https://github.com/jacksonhzx95/NFDP.
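The integral operation referred to above is typically a soft-argmax:
the heatmap is normalized into a probability map and the coordinate is
taken as its expectation, keeping the heatmap-to-coordinate mapping
differentiable; a PyTorch sketch (the sharpening factor beta is an
assumption):

import torch

def soft_argmax_2d(heatmaps, beta=100.0):
    # heatmaps: (B, K, H, W) -> (B, K, 2) differentiable (x, y) coords.
    B, K, H, W = heatmaps.shape
    probs = torch.softmax(beta * heatmaps.reshape(B, K, -1), dim=-1)
    probs = probs.reshape(B, K, H, W)
    ys = torch.arange(H, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(W, dtype=probs.dtype, device=probs.device)
    y = (probs.sum(dim=3) * ys).sum(dim=2)   # expected row index
    x = (probs.sum(dim=2) * xs).sum(dim=2)   # expected column index
    return torch.stack([x, y], dim=-1)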
RI Banerjee, Sunetra/AAM-9534-2021; HUANG, Zixun/HKV-5871-2023; Zhao, Rui/GRS-2879-2022; Leung, Frank Hung Fat/AAQ-1956-2021; Zheng, Yong-Ping/M-1197-2015; Ling, Sai Ho/; Lam, Kin-Man/A-9352-2014; Zhao, Rui/; Banerjee, Sunetra/
OI Leung, Frank Hung Fat/0000-0003-3921-7074; Zheng,
Yong-Ping/0000-0002-3407-9226; Ling, Sai Ho/0000-0003-0849-5098; Lam,
Kin-Man/0000-0002-0422-8454; Zhao, Rui/0000-0003-2733-3617; Banerjee,
Sunetra/0000-0001-7923-4915
ZB 0
Z8 0
TC 1
ZS 0
ZR 0
ZA 0
Z9 1
C1 Hong Kong Polytech Univ, Dept Elect & Elect Engn, Hong Kong, Peoples R
China
C1 Hisilicon Semicond Huawei Technol, Wuhan 518129, Peoples R China
C1 Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
C1 Hong Kong Polytech Univ, Dept Biomed Engn, Hong Kong, Peoples R China
C3 Hisilicon Semicond Huawei Technol
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100024
PM 38421850
ER
PT J
AU Tajbakhsh, Kiarash
Stanowska, Olga
Neels, Antonia
Perren, Aurel
Zboray, Robert
TI 3D Virtual Histopathology by Phase-Contrast X-Ray Micro-CT for
Follicular Thyroid Neoplasms
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2670
EP 2678
DI 10.1109/TMI.2024.3372602
DT Article
PD JUL 2024
PY 2024
AB Histological analysis is the core of follicular thyroid carcinoma (FTC)
classification. The histopathological criteria of capsular and vascular
invasion define malignancy and aggressiveness of FTC. Analysis of
multiple sections is cumbersome and as only a minute tissue fraction is
analyzed during histopathology, under-sampling remains a problem.
Application of an efficient tool for complete tissue imaging in 3D would
speed-up diagnosis and increase accuracy. We show that X-ray
propagation-based imaging (XPBI) of paraffin-embedded tissue blocks is a
valuable complementary method for follicular thyroid carcinoma diagnosis
and assessment. It enables a fast, non-destructive and accurate 3D
virtual histology of the FTC resection specimen. We demonstrate that
XPBI virtual slices can reliably evaluate capsular invasions. Then we
discuss the accessible morphological information from XPBI and their
significance for vascular invasion diagnosis. We show 3D morphological
information that allow to discern vascular invasions. The results are
validated by comparing XPBI images with clinically accepted histology
slides revised by and under supervision of two experienced endocrine
pathologists.
RI Zboray, Robert/GWC-5612-2022; Tajbakhsh, Kiarash/; Zboray, Robert/; Perren, Aurel/A-9383-2018; Neels, Antonia/O-3536-2016; Stanowska, Olga/
OI Tajbakhsh, Kiarash/0000-0002-0081-8908; Zboray,
Robert/0000-0003-0811-7396; Perren, Aurel/0000-0002-6819-6092; Neels,
Antonia/0000-0001-5752-2852; Stanowska, Olga/0000-0001-8014-882X
ZS 0
TC 3
Z8 0
ZB 1
ZR 0
ZA 0
Z9 3
C1 Swiss Fed Labs Mat Sci Empa, CH-8600 Dubendorf, Switzerland
C1 Univ Fribourg, Fac Sci & Med, CH-1700 Fribourg, Switzerland
C1 Univ Bern, Inst Tissue Med & Pathol, CH-3008 Bern, Switzerland
C3 Swiss Fed Labs Mat Sci Empa
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100015
PM 38437150
ER
PT J
AU Cui, Jiaqi
Zeng, Pinxian
Zeng, Xinyi
Xu, Yuanyuan
Wang, Peng
Zhou, Jiliu
Wang, Yan
Shen, Dinggang
TI Prior Knowledge-guided Triple-Domain Transformer-GAN for Direct PET
Reconstruction from Low-Count Sinograms.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3413832
DT Journal Article
PD 2024-Jun-13
PY 2024
AB To obtain high-quality positron emission tomography (PET) images while
minimizing radiation exposure, numerous methods have been dedicated to
acquiring standard-count PET (SPET) from low-count PET (LPET). However,
current methods have failed to take full advantage of the complementary
information emphasized in each of the sinogram, image, and frequency
domains, resulting in the loss of crucial details. Meanwhile, they
overlook the unique inner structure of the sinograms, thereby failing
to fully capture their structural characteristics and relationships. To
alleviate these problems, in this paper we propose a
prior knowledge-guided transformer-GAN that unites triple domains of
sinogram, image, and frequency to directly reconstruct SPET images from
LPET sinograms, namely PK-TriDo. Our PK-TriDo consists of a Sinogram
Inner-Structure-based Denoising Transformer (SISD-Former) to denoise the
input LPET sinogram, a Frequency-adapted Image Reconstruction
Transformer (FaIR-Former) to reconstruct high-quality SPET images from
the denoised sinograms guided by the image domain prior knowledge, and
an Adversarial Network (AdvNet) to further enhance the reconstruction
quality via adversarial training. We injected a sinogram embedding
module, tailored specifically to the PET imaging mechanism, that
partitions the sinograms by rows and columns into 1D sequences of
angles and distances, faithfully preserving their inner structure.
Moreover, to mitigate high-frequency distortions and enhance
reconstruction details, we integrated global-local frequency parsers
(GLFPs) into FaIR-Former to calibrate the distributions and proportions
of different frequency bands, thus compelling the network to preserve
high-frequency details. Evaluations on three datasets with different
dose levels and imaging scenarios demonstrated that our PK-TriDo
outperforms the state-of-the-art methods.
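The row/column partitioning described for the sinogram embedding
module can be pictured as forming two 1D token sequences, one per
projection angle and one per radial distance; a deliberately thin
sketch (shapes and names are assumptions, and the learned embeddings
are omitted):

import torch

def sinogram_row_col_tokens(sino):
    # sino: (B, 1, A, D) with A projection angles and D radial bins.
    s = sino.squeeze(1)
    angle_tokens = s                    # (B, A, D): one token per angle
    distance_tokens = s.transpose(1, 2) # (B, D, A): one token per distance
    return angle_tokens, distance_tokens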
OI Cui, Jiaqi/0000-0001-6006-4442
ZS 0
Z8 0
ZR 0
ZA 0
TC 1
ZB 0
Z9 1
EI 1558-254X
DA 2024-06-18
UT MEDLINE:38869996
PM 38869996
ER
PT J
AU Huang, Bangyan
Li, Tiantian
Arino-Estrada, Gerard
Dulski, Kamil
Shopa, Roman Y.
Moskal, Pawel
Stepien, Ewa
Qi, Jinyi
TI SPLIT: Statistical Positronium Lifetime Image Reconstruction via
Time-Thresholding
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2148
EP 2158
DI 10.1109/TMI.2024.3357659
DT Article
PD JUN 2024
PY 2024
AB Positron emission tomography (PET) is a widely utilized medical imaging
modality that uses positron-emitting radiotracers to visualize
biochemical processes in a living body. The spatiotemporal distribution
of a radiotracer is estimated by detecting the coincidence photon pairs
generated through positron annihilations. In human tissue, about 40% of
the positrons form positroniums prior to the annihilation. The lifetime
of these positroniums is influenced by the microenvironment in the
tissue and could provide valuable information for better understanding
of disease progression and treatment response. Currently, there are few
methods available for reconstructing high-resolution lifetime images in
practical applications. This paper presents an efficient statistical
image reconstruction method for positronium lifetime imaging (PLI). We
also analyze the random triple-coincidence events in PLI and propose a
correction method for random events, which is essential for real
applications. Both simulation and experimental studies demonstrate that
the proposed method can produce lifetime images with high numerical
accuracy, low variance, and resolution comparable to that of the
activity images generated by a PET scanner with currently available
time-of-flight resolution.
RI Stępień, Ewa/H-7471-2019; Arino-Estrada, Gerard/I-2589-2019; Qi, Jinyi/IXW-8224-2023; Moskal, Pawel/N-4231-2019; Shopa, Roman/IVH-6026-2023; li, tianian/ACG-7798-2022; Qi, Jinyi/A-1768-2010; Huang, Bangyan/; Shopa, Roman/; Arino-Estrada, Gerard/; Li, Tiantian/; Stepien, Ewa/; Moskal, Pawel/S-9151-2016; Dulski, Kamil/V-2058-2018
OI Qi, Jinyi/0000-0002-5428-0322; Huang, Bangyan/0000-0003-1721-2416;
Shopa, Roman/0000-0002-1089-5050; Arino-Estrada,
Gerard/0000-0002-6411-191X; Li, Tiantian/0000-0002-3621-5420; Stepien,
Ewa/0000-0003-3589-1715; Moskal, Pawel/0000-0002-4229-3548; Dulski,
Kamil/0000-0002-4093-8162
ZA 0
ZB 0
Z8 0
ZS 0
TC 1
ZR 0
Z9 1
C1 Univ Calif Davis, Dept Biomed Engn, Davis, CA 95616 USA
C1 Barcelona Inst Sci & Technol, Inst Fis Altes Energies, Barcelona 08193,
Spain
C1 Jagiellonian Univ, Marian Smoluchowski Inst Phys & Theranost Ctr,
PL-31007 Krakow, Poland
C1 Natl Ctr Nucl Res, Dept Complex Syst, PL-05400 Otwock, Poland
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100022
PM 38261489
ER
PT J
AU Wu, Qian
Chen, Yufei
Liu, Wei
Yue, Xiaodong
Zhuang, Xiahai
TI Deep Closing: Enhancing Topological Connectivity in Medical Tubular
Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3405982
DT Journal Article
PD 2024-May-27
PY 2024
AB Accurately segmenting tubular structures, such as blood vessels or
nerves, holds significant clinical implications across various medical
applications. However, existing methods often exhibit limitations in
achieving satisfactory topological performance, particularly in terms of
preserving connectivity. To address this challenge, we propose a novel
deep-learning approach, termed Deep Closing, inspired by the
well-established classic closing operation. Deep Closing first leverages
an AutoEncoder trained in the Masked Image Modeling (MIM) paradigm,
enhanced with digital topology knowledge, to effectively learn the
inherent shape prior of tubular structures and indicate potential
disconnected regions. Subsequently, a Simple Components Erosion module
is employed to generate topology-focused outcomes, which refines the
preceding segmentation results, ensuring all the generated regions are
topologically significant. To evaluate the efficacy of Deep Closing, we
conduct comprehensive experiments on 4 datasets: DRIVE, CHASE DB1, DCA1,
and CREMI. The results demonstrate that our approach yields considerable
improvements in topological performance compared with existing methods.
Furthermore, Deep Closing exhibits the ability to generalize and
transfer knowledge from external datasets, showcasing its robustness and
adaptability. The code for this paper is available at
https://github.com/5k5000/DeepClosing.
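For orientation, the well-established classic closing operation that
inspired the method is a dilation followed by an erosion, which bridges
small gaps in a binary mask; a toy example with scipy (sizes chosen
arbitrarily):

import numpy as np
from scipy.ndimage import binary_closing

mask = np.zeros((64, 64), dtype=bool)
mask[32, :28] = mask[32, 36:] = True   # a 'vessel' with an 8-pixel break
closed = binary_closing(mask, structure=np.ones((3, 11)))
print(closed[32, 28:36].all())         # True: the gap has been bridged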
OI Wu, Qian/0000-0002-2417-8837; Liu, Wei/0000-0001-9475-6455; Chen,
Yufei/0000-0002-3645-9046
ZS 0
TC 0
ZB 0
Z8 0
ZR 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-05-31
UT MEDLINE:38801688
PM 38801688
ER
PT J
AU Cao, Chentao
Cui, Zhuo-Xu
Wang, Yue
Liu, Shaonan
Chen, Taijin
Zheng, Hairong
Liang, Dong
Zhu, Yanjie
TI High-Frequency Space Diffusion Model for Accelerated MRI
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1853
EP 1865
DI 10.1109/TMI.2024.3351702
DT Article
PD MAY 2024
PY 2024
AB Diffusion models with continuous stochastic differential equations
(SDEs) have shown superior performance in image generation. They can
serve as a deep generative prior for solving the inverse problem in
magnetic resonance (MR) reconstruction. However, the low-frequency
regions of k-space data are typically fully sampled in fast MR imaging,
while existing diffusion models operate over the entire image or
k-space, inevitably introducing uncertainty in the reconstruction of
the low-frequency regions. Additionally, existing diffusion models often
demand substantial iterations to converge, resulting in time-consuming
reconstructions. To address these challenges, we propose a novel SDE
tailored specifically for MR reconstruction with the diffusion process
in high-frequency space (referred to as HFS-SDE). This approach ensures
determinism in the fully sampled low-frequency regions and accelerates
the sampling procedure of reverse diffusion. Experiments conducted on
the publicly available fastMRI dataset demonstrate that the proposed
HFS-SDE method outperforms traditional parallel imaging methods,
supervised deep learning, and existing diffusion models in terms of
reconstruction accuracy and stability. The fast convergence properties
are also confirmed through theoretical and experimental validation.
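The central design point above, diffusing only the high-frequency part
of k-space while the fully sampled low-frequency block stays
deterministic, can be pictured with a simple mask; the ACS size and
centering below are illustrative assumptions, not the paper's settings:

import torch

def high_frequency_mask(H, W, acs=24):
    # 1 over high-frequency k-space, 0 over the fully sampled
    # low-frequency (ACS) block at the centre of fftshifted k-space.
    m = torch.ones(H, W)
    m[H // 2 - acs // 2:H // 2 + acs // 2,
      W // 2 - acs // 2:W // 2 + acs // 2] = 0.0
    return m

# Sketch of use: k = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1));
# the forward SDE would perturb m * k and leave (1 - m) * k untouched.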
RI Cao, Chentao/HNB-3678-2023; Liang, Dong/A-3335-2011; Wang, Yue/; Cao, Chentao/; Cui, Zhuo-Xu/
OI Liang, Dong/0000-0001-6257-0875; Wang, Yue/0009-0004-3564-0961; Cao,
Chentao/0000-0002-3974-3413; Cui, Zhuo-Xu/0000-0001-9283-881X
ZA 0
TC 4
ZB 1
Z8 0
ZS 0
ZR 0
Z9 4
C1 Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408,
Peoples R China
C1 Chinese Acad Sci, Shenzhen Inst Adv Technol, Res Ctr Med AI, Shenzhen
518055, Peoples R China
C1 Shenzhen Univ, Shenzhen Univ Med Sch, Sch Biomed Engn, Shenzhen 518055,
Guangdong, Peoples R China
C1 Shenzhen Univ, Comp Vis Inst, Coll Comp Sci & Software Engn, Shenzhen
518055, Guangdong, Peoples R China
C1 Chinese Acad Sci, Shenzhen Inst Adv Technol, Lauterbur Res Ctr Biomed
Imaging, Shenzhen 518055, Peoples R China
C1 Natl Ctr Appl Math Shenzhen NCAMS, Shenzhen 518000, Peoples R China
C3 Natl Ctr Appl Math Shenzhen NCAMS
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800025
PM 38194398
ER
PT J
AU Liu, Min
Han, Yubin
Wang, Jiazheng
Wang, Can
Wang, Yaonan
Meijering, Erik
TI LSKANet: Long Strip Kernel Attention Network for Robotic Surgical Scene
Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1308
EP 1322
DI 10.1109/TMI.2023.3335406
DT Article
PD APR 2024
PY 2024
AB Surgical scene segmentation is a critical task in robot-assisted
surgery. However, the complexity of the surgical scene, which mainly
includes local feature similarity (e.g., between different anatomical
tissues), intraoperative complex artifacts, and indistinguishable
boundaries, poses significant challenges to accurate segmentation. To
tackle these problems, we propose the Long Strip Kernel Attention
network (LSKANet), including two well-designed modules named Dual-block
Large Kernel Attention module (DLKA) and Multiscale Affinity Feature
Fusion module (MAFF), which can implement precise segmentation of
surgical images. Specifically, by introducing strip convolutions with
different topologies (cascaded and parallel) in two blocks and a large
kernel design, DLKA can make full use of region- and strip-like surgical
features and extract both visual and structural information to reduce
the false segmentation caused by local feature similarity. In MAFF,
affinity matrices calculated from multiscale feature maps are applied as
feature fusion weights, which helps to address the interference of
artifacts by suppressing the activations of irrelevant regions. Besides,
the hybrid loss with Boundary Guided Head (BGH) is proposed to help the
network segment indistinguishable boundaries effectively. We evaluate
the proposed LSKANet on three datasets with different surgical scenes.
The experimental results show that our method achieves new
state-of-the-art results on all three datasets with improvements of
2.6%, 1.4%, and 3.4% mIoU, respectively. Furthermore, our method is
compatible with different backbones and can significantly increase their
segmentation accuracy. Code is available at
https://github.com/YubinHan73/LSKANet.
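A cascaded long-strip convolution of the kind described, a 1 x k
kernel followed by a k x 1 kernel, covers a large strip-shaped
receptive field with few parameters; a PyTorch sketch of the idea, not
the LSKANet module itself:

import torch.nn as nn

class CascadedStripConv(nn.Module):
    # Depthwise 1 x k then k x 1 convolutions approximate a k x k kernel
    # while matching elongated, strip-like surgical structures.
    def __init__(self, channels, k=11):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k),
                                    padding=(0, k // 2), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, (k, 1),
                                  padding=(k // 2, 0), groups=channels)

    def forward(self, x):
        return self.vertical(self.horizontal(x))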
RI Wang, Jiazheng/AAQ-5059-2020; Han, Yubin/; Wang, Can/
OI Han, Yubin/0009-0009-7051-9610; Wang, Can/0009-0005-8981-1380
ZA 0
ZB 0
ZR 0
TC 3
ZS 0
Z8 0
Z9 3
C1 Hunan Univ, Coll Elect & Informat Engn, Natl Engn Res Ctr Robot Visual
Percept & Control T, Changsha 410082, Hunan, Peoples R China
C1 Hunan Univ, Int Sci & Technol Innovat Cooperat Base Biomed Ima, Changsha
410082, Hunan, Peoples R China
C1 Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400006
PM 38015689
ER
PT J
AU Wang, Ke
Chen, Zicong
Zhu, Mingjia
Li, Zhetao
Weng, Jian
Gu, Tianlong
TI Score-based Counterfactual Generation for Interpretable Medical Image
Classification and Lesion Localization.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3375357
DT Journal Article
PD 2024-Mar-14
PY 2024
AB Deep neural networks (DNNs) have immense potential for precise clinical
decision-making in the field of biomedical imaging. However, accessing
high-quality data is crucial for ensuring the high-performance of DNNs.
Obtaining medical imaging data is often challenging in terms of both
quantity and quality. To address these issues, we propose a score-based
counterfactual generation (SCG) framework to create counterfactual
images from latent space, to compensate for scarcity and imbalance of
data. In addition, some uncertainties in external physical factors may
introduce unnatural features and further affect the estimation of the
true data distribution. Therefore, we integrated a learnable FuzzyBlock
into the classifier of the proposed framework to manage these
uncertainties. The proposed SCG framework can be applied to both
classification and lesion localization tasks. The experimental results
revealed a remarkable performance boost in classification tasks,
achieving an average performance enhancement of 3-5% compared to
previous state-of-the-art (SOTA) methods in interpretable lesion
localization.
OI Chen, Zicong/0000-0003-4438-8740
ZS 0
TC 0
Z8 0
ZA 0
ZR 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-03-14
UT MEDLINE:38466592
PM 38466592
ER
PT J
AU Jin, Liang
Gu, Shixuan
Wei, Donglai
Adhinarta, Jason Ken
Kuang, Kaiming
Zhang, Yongjie Jessica
Pfister, Hanspeter
Ni, Bingbing
Yang, Jiancheng
Li, Ming
TI RibSeg v2: A Large-Scale Benchmark for Rib Labeling and
Anatomical Centerline Extraction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 570
EP 581
DI 10.1109/TMI.2023.3313627
DT Article
PD JAN 2024
PY 2024
AB Automatic rib labeling and anatomical centerline extraction are common
prerequisites for various clinical applications. Prior studies either
use in-house datasets that are inaccessible to communities, or focus on
rib segmentation that neglects the clinical significance of rib
labeling. To address these issues, we extend our prior dataset (RibSeg)
on the binary rib segmentation task to a comprehensive benchmark, named
RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and
annotations manually inspected by experts for rib labeling and
anatomical centerline extraction. Based on RibSeg v2, we develop a
pipeline including deep learning-based methods for rib labeling, and a
skeletonization-based method for centerline extraction. To improve
computational efficiency, we propose a sparse point cloud representation
of CT scans and compare it with standard dense voxel grids. Moreover, we
design and analyze evaluation metrics to address the key challenges of
each task. Our dataset, code, and model are available online to
facilitate open research at https://github.com/M3DV/RibSeg.
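To make the sparse point cloud representation concrete, a minimal sketch
follows; the HU threshold and sample size are assumptions, not values
taken from the paper.

import numpy as np

def ct_to_point_cloud(volume_hu, threshold=200, n_points=30000, seed=0):
    # Keep only voxels above a bone-intensity threshold, then subsample
    # their coordinates into a fixed-size cloud for a point-based network.
    coords = np.argwhere(volume_hu > threshold).astype(np.float32)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_points, len(coords)),
                     replace=False)
    return coords[idx]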
RI Yang, Jiancheng/HJG-5773-2022; Gu, Shixuan/JTZ-7955-2023; Adhinarta, Jason/; Pfister, Hanspeter/; Gu, Shixuan/; Yang, Jiancheng/; Jin, Liang/; Zhang, Yongjie Jessica/F-8733-2012
OI Adhinarta, Jason/0000-0002-6247-7475; Pfister,
Hanspeter/0000-0002-3620-2582; Gu, Shixuan/0000-0001-6532-6341; Yang,
Jiancheng/0000-0003-4455-7145; Jin, Liang/0000-0002-7552-7849; Zhang,
Yongjie Jessica/0000-0001-7436-9757
TC 1
ZA 0
ZR 0
ZS 0
Z8 0
ZB 0
Z9 1
C1 Fudan Univ, Huadong Hosp, Radiol Dept, Shanghai 200040, Peoples R China
C1 Fudan Univ, Huashan Hosp, Shanghai 200040, Peoples R China
C1 China Acad Forens Sci, Shanghai Key Lab Forens Med, Shanghai 200040,
Peoples R China
C1 China Acad Forens Sci, Key Lab Forens Sci, Minist Justice, Shanghai
200040, Peoples R China
C1 Carnegie Mellon Univ, Biomed Engn Dept, Pittsburgh, PA 15213 USA
C1 Harvard Univ, John A Paulson Sch Engn & Appl Sci, Cambridge, MA 02138
USA
C1 Boston Coll, Chestnut Hill, MA 02467 USA
C1 Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
C1 Carnegie Mellon Univ, Mech Engn Dept, Pittsburgh, PA 15213 USA
C1 Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai 200240, Peoples R
China
C1 Huawei Hisilicon, Shanghai 202150, Peoples R China
C1 Ecole Polytech Fed Lausanne, Comp Vis Lab, CH-1015 Lausanne, Switzerland
C1 Inst Funct & Mol Med Imaging, Shanghai 200240, Peoples R China
C3 China Acad Forens Sci
C3 China Acad Forens Sci
C3 Inst Funct & Mol Med Imaging
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600045
PM 37695967
ER
PT J
AU Lei, Wenhui
Su, Qi
Jiang, Tianyu
Gu, Ran
Wang, Na
Liu, Xinglong
Wang, Guotai
Zhang, Xiaofan
Zhang, Shaoting
TI One-Shot Weakly-Supervised Segmentation in 3D Medical Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 175
EP 189
DI 10.1109/TMI.2023.3294975
DT Article
PD JAN 2024
PY 2024
AB Deep neural networks typically require a large number of accurate
annotations to achieve outstanding performance in medical image
segmentation. One-shot and weakly-supervised learning are promising
research directions that reduce labeling effort by learning a new class
from only one annotated image and using coarse labels instead,
respectively. In this work, we present an innovative framework for 3D
medical image segmentation with one-shot and weakly-supervised settings.
Firstly, a propagation-reconstruction network is proposed to propagate
scribbles from one annotated volume to unlabeled 3D images based on the
assumption that anatomical patterns in different human bodies are
similar. Then a multi-level similarity denoising module is designed to
refine the scribbles based on embeddings from anatomical- to
pixel-level. After expanding the scribbles to pseudo masks, we observe
the misclassified voxels mainly occur at the border region and propose
to extract self-support prototypes for the specific refinement. Based on
these weakly-supervised segmentation results, we further train a
segmentation model for the new class with the noisy label training
strategy. Experiments on three CT datasets and one MRI dataset show
that the proposed method obtains significant improvement over the
state-of-the-art methods
and performs robustly even under severe class imbalance and low
contrast. Code is publicly available at
https://github.com/LWHYC/OneShot_WeaklySeg.
RI Zhang, Xiaofan/ABG-4594-2021; lei, wenhui/LGZ-5354-2024; Zhang, Yu-Dong/HKM-7209-2023; Xinglong, LIU/IUN-7675-2023; Lei, Wenhui/; Gu, Ran/
OI Lei, Wenhui/0000-0002-2952-3441; Gu, Ran/0000-0002-2011-7623
ZB 2
Z8 0
ZR 0
TC 10
ZA 0
ZS 0
Z9 11
C1 Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai
200240, Peoples R China
C1 Shanghai AI Lab, Shanghai 201203, Peoples R China
C1 Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 611731,
Peoples R China
C1 SenseTime Res, Shanghai 200233, Peoples R China
C3 SenseTime Res
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600015
PM 37440388
ER
PT J
AU van Garderen, Karin A.
van der Voort, Sebastian R.
Wijnenga, Maarten M. J.
Incekara, Fatih
Alafandi, Ahmad
Kapsas, Georgios
Gahrmann, Renske
Schouten, Joost W.
Dubbink, Hendrikus J.
Vincent, Arnaud J. P. E.
van den Bent, Martin
French, Pim J.
Smits, Marion
Klein, Stefan
TI Evaluating the Predictive Value of Glioma Growth Models for Low-Grade
Glioma After Tumor Resection
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 253
EP 263
DI 10.1109/TMI.2023.3298637
DT Article
PD JAN 2024
PY 2024
AB Tumor growth models have the potential to model and predict the
spatiotemporal evolution of glioma in individual patients. Infiltration
of glioma cells is known to be faster along the white matter tracts, and
therefore structural magnetic resonance imaging (MRI) and diffusion
tensor imaging (DTI) can be used to inform the model. However, applying
and evaluating growth models in real patient data is challenging. In
this work, we propose to formulate the problem of tumor growth as a
ranking problem, as opposed to a segmentation problem, and use the
average precision (AP) as a performance metric. This enables an
evaluation of the spatial pattern that does not require a volume cut-off
value. Using the AP metric, we evaluate diffusion-proliferation models
informed by structural MRI and DTI, after tumor resection. We applied
the models to a unique longitudinal dataset of 14 patients with
low-grade glioma (LGG), who received no treatment after surgical
resection, to predict the recurrent tumor shape after tumor resection.
The diffusion models informed by structural MRI and DTI showed a small
but significant increase in predictive performance with respect to
homogeneous isotropic diffusion, and the DTI-informed model reached the
best predictive performance. We conclude there is a significant
improvement in the prediction of the recurrent tumor shape when using a
DTI-informed anisotropic diffusion model with respect to isotropic
diffusion, and that the AP is a suitable metric to evaluate these
models. All code and data used in this publication are made publicly
available.
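The ranking-based evaluation described above can be sketched directly with
scikit-learn; the array names are assumptions.

import numpy as np
from sklearn.metrics import average_precision_score

def growth_model_ap(predicted_density, recurrence_mask, brain_mask):
    # Rank voxels by predicted tumor cell density and score the ranking
    # against the observed recurrence, with no volume cut-off value.
    inside = brain_mask > 0
    y_true = (recurrence_mask[inside] > 0).astype(int).ravel()
    y_score = predicted_density[inside].ravel()
    return average_precision_score(y_true, y_score)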
RI van der Voort, Sebastian/AAT-1825-2021; Klein, Stefan/JAX-2520-2023; van den Bent, Martin/JPK-6126-2023; Smits, Marion/E-4795-2011; Klein, Stefan/; van der Voort, Sebastian/; van Garderen, Karin/; Dubbink, Hendrikus Jan/; van den Bent, Martin/; Smits, Marion/; French, Pim/; Wijnenga, Maarten/
OI Klein, Stefan/0000-0003-4449-6784; van der Voort,
Sebastian/0000-0002-6526-8126; van Garderen, Karin/0000-0001-9610-4397;
Dubbink, Hendrikus Jan/0000-0002-2160-5207; van den Bent,
Martin/0000-0001-5710-5127; Smits, Marion/0000-0001-5563-2871; French,
Pim/0000-0002-0668-9529; Wijnenga, Maarten/0000-0002-2690-5022
TC 2
ZA 0
Z8 0
ZS 0
ZR 0
ZB 0
Z9 2
C1 Erasmus MC, Dept Radiol & Nucl Med, NL-3015 GD Rotterdam, Netherlands
C1 Erasmus MC, Dept Neurosurg, NL-3015 GD Rotterdam, Netherlands
C1 Parnassia Psychiat Inst, NL-2552 DH The Hague, Netherlands
C1 Kings Coll Hosp London, Dubai, U Arab Emirates
C1 Erasmus MC, Dept Pathol, NL-3015 GD Rotterdam, Netherlands
C1 Erasmus MC, Dept Neurol, NL-3015 GD Rotterdam, Netherlands
C3 Kings Coll Hosp London
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600016
PM 37490381
ER
PT J
AU Lin, Yiyang
Wang, Yifeng
Fang, Zijie
Li, Zexin
Guan, Xianchao
Jiang, Danling
Zhang, Yongbing
TI A Multi-Perspective Self-Supervised Generative Adversarial Network for
FS to FFPE Stain Transfer.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3460795
DT Journal Article
PD 2024-Sep-16
PY 2024
AB In clinical practice, frozen section (FS) images can be used to
obtain immediate intraoperative pathological results because of their
fast preparation speed. However, compared with the
formalin-fixed and paraffin-embedded (FFPE) images, the FS images
greatly suffer from poor quality. Thus, it is of great significance to
transfer FS images to FFPE ones, which enables pathologists to observe
high-quality images during surgery. However, obtaining paired
FS and FFPE images is quite hard, so it is difficult to obtain accurate
results using supervised methods. Apart from this, the FS to FFPE stain
transfer faces many challenges. Firstly, the number and position of
nuclei scattered throughout the image are hard to maintain during the
transfer process. Secondly, transferring the blurry FS images to the
clear FFPE ones is quite challenging. Thirdly, compared with the center
regions of each patch, the edge regions are harder to transfer. To
overcome these problems, a multi-perspective self-supervised GAN,
incorporating three auxiliary tasks, is proposed to improve the
performance of FS to FFPE stain transfer. Concretely, a nucleus
consistency constraint is designed to enable the high-fidelity of
nuclei, an FFPE guided image deblurring is proposed for improving the
clarity, and a multi-field-of-view consistency constraint is designed to
better generate the edge regions. Objective indicators and pathologists'
evaluations in experiments on five datasets across different
countries have demonstrated the effectiveness of our method. In
addition, the validation in the downstream task of microsatellite
instability prediction has also proved the performance improvement by
transferring the FS images to FFPE ones. Our code link is
https://github.com/linyiyang98/Self-Supervised-FS2FFPE.git.
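A minimal sketch of the multi-field-of-view consistency idea: the transfer
of a cropped region should agree with the corresponding crop of the
transferred full image. The crop coordinates and loss form are placeholder
assumptions, not the authors' exact constraint.

import torch
import torch.nn.functional as F

def multi_fov_consistency(generator, fs_image, top=64, left=64, size=128):
    full = generator(fs_image)                       # transfer full image
    crop_in = fs_image[..., top:top + size, left:left + size]
    crop_out = generator(crop_in)                    # transfer the crop alone
    ref = full[..., top:top + size, left:left + size]
    return F.l1_loss(crop_out, ref)                  # edge regions must agree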
Z8 0
TC 0
ZR 0
ZB 0
ZA 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-09-18
UT MEDLINE:39283778
PM 39283778
ER
PT J
AU Chen, Zhongyu
Bian, Yun
Shen, Erwei
Fan, Ligang
Zhu, Weifang
Shi, Fei
Shao, Chengwei
Chen, Xinjian
Xiang, Dehui
TI Moment-Consistent Contrastive CycleGAN for Cross-Domain Pancreatic Image
Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3447071
DT Journal Article
PD 2024-Aug-21
PY 2024
AB CT and MR are currently the most common imaging techniques for
pancreatic cancer diagnosis. Accurate segmentation of the pancreas in CT
and MR images can provide significant help in the diagnosis and
treatment of pancreatic cancer. Traditional supervised segmentation
methods require a large number of labeled CT and MR training data, which
is usually time-consuming and laborious. Meanwhile, due to domain shift,
traditional segmentation networks are difficult to be deployed on
different imaging modality datasets. Cross-domain segmentation can
utilize labeled source domain data to assist unlabeled target domains in
solving the above problems. In this paper, a cross-domain pancreas
segmentation algorithm is proposed based on Moment-Consistent
Contrastive Cycle Generative Adversarial Networks (MC-CCycleGAN).
MC-CCycleGAN is a style transfer network, in which the encoder of its
generator is used to extract features from real images and style
transfer images, constrain feature extraction through a contrastive
loss, and fully extract structural features of input images during style
transfer while eliminating redundant style features. The multi-order
central moments of the pancreas are proposed to describe its anatomy in
high dimensions and a contrastive loss is also proposed to constrain the
moment consistency, so as to maintain consistency of the pancreatic
structure and shape before and after style transfer. A multi-teacher
knowledge distillation framework is proposed to transfer knowledge
from multiple teachers to a single student, so as to improve the
robustness and performance of the student network. The experimental
results have demonstrated the superiority of our framework over
state-of-the-art domain adaptation methods.
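A sketch of the moment-consistency term in the spirit of the abstract:
compare multi-order central moments of the (soft) pancreas masks before
and after style transfer. The orders, normalization, and loss form are
assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def central_moments(mask, max_order=3, eps=1e-6):
    # mask: (B, 1, H, W) soft segmentation in [0, 1]
    b, _, h, w = mask.shape
    ys = torch.linspace(0, 1, h, device=mask.device).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w, device=mask.device).view(1, 1, 1, w)
    m = mask.sum(dim=(2, 3)) + eps                   # total mass
    cy = (mask * ys).sum(dim=(2, 3)) / m             # centroid coordinates
    cx = (mask * xs).sum(dim=(2, 3)) / m
    moments = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p + q < 2:
                continue                             # skip trivial orders
            mu = (mask * (ys - cy.view(b, 1, 1, 1)) ** p
                       * (xs - cx.view(b, 1, 1, 1)) ** q).sum(dim=(2, 3)) / m
            moments.append(mu)
    return torch.stack(moments, dim=-1)

def moment_consistency_loss(mask_src, mask_transferred):
    return F.l1_loss(central_moments(mask_src),
                     central_moments(mask_transferred))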
OI Zhu, Weifang/0000-0001-9540-4101; Chen, Xinjian/0000-0002-0871-293X
Z8 0
TC 0
ZR 0
ZB 0
ZA 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-08-23
UT MEDLINE:39167524
PM 39167524
ER
PT J
AU Mou, Lei
Yan, Qifeng
Lin, Jinghui
Zhao, Yifan
Liu, Yonghuai
Ma, Shaodong
Zhang, Jiong
Lv, Wenhao
Zhou, Tao
Frangi, Alejandro F
Zhao, Yitian
TI COSTA: A Multi-center TOF-MRA Dataset and A Style Self-Consistency
Network for Cerebrovascular Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3424976
DT Journal Article
PD 2024-Jul-16
PY 2024
AB Time-of-flight magnetic resonance angiography (TOF-MRA) is the least
invasive and ionizing radiation-free approach for cerebrovascular
imaging, but variations in imaging artifacts across different clinical
centers and imaging vendors result in inter-site and inter-vendor
heterogeneity, making its accurate and robust cerebrovascular
segmentation challenging. Moreover, the limited availability and quality
of annotated data pose further challenges for segmentation methods to
generalize well to unseen datasets. In this paper, we construct the
largest and most diverse TOF-MRA dataset (COSTA) from 8 individual
imaging centers, with all the volumes manually annotated. Then we
propose a novel network for cerebrovascular segmentation, namely CESAR,
with the ability to tackle feature granularity and image style
heterogeneity issues. Specifically, a coarse-to-fine architecture is
implemented to refine cerebrovascular segmentation in an iterative
manner. An automatic feature selection module is proposed to selectively
fuse global long-range dependencies and local contextual information of
cerebrovascular structures. A style self-consistency loss is then
introduced to explicitly align diverse styles of TOF-MRA images to a
standardized one. Extensive experimental results on the COSTA dataset
demonstrate the effectiveness of our CESAR network against
state-of-the-art methods. We have made six subsets of COSTA, together
with the source code, available online to promote relevant research in
the community.
RI Frangi, Alejandro/C-6500-2008; Liu, Yonghuai/
OI Frangi, Alejandro/0000-0002-2675-528X; Liu, Yonghuai/0000-0002-3774-2134
ZB 0
ZR 0
ZS 0
Z8 0
ZA 0
TC 0
Z9 0
EI 1558-254X
DA 2024-07-20
UT MEDLINE:39012728
PM 39012728
ER
PT J
AU Sharifzadeh, Mostafa
Goudarzi, Sobhan
Tang, An
Benali, Habib
Rivaz, Hassan
TI Mitigating Aberration-Induced Noise: A Deep Learning-Based
Aberration-to-Aberration Approach.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3422027
DT Journal Article
PD 2024-Jul-03
PY 2024
AB One of the primary sources of suboptimal image quality in ultrasound
imaging is phase aberration. It is caused by spatial changes in sound
speed over a heterogeneous medium, which disturbs the transmitted waves
and prevents coherent summation of echo signals. Obtaining non-aberrated
ground truths in real-world scenarios can be extremely challenging, if
not impossible. This challenge hinders the performance of deep
learning-based techniques due to the domain shift between simulated and
experimental data. Here, for the first time, we propose a deep
learning-based method that does not require ground truth to correct the
phase aberration problem and, as such, can be directly trained on real
data. We train a network wherein both the input and target output are
randomly aberrated radio frequency (RF) data. Moreover, we demonstrate
that a conventional loss function such as mean square error is
inadequate for training such a network to achieve optimal performance.
Instead, we propose an adaptive mixed loss function that employs both
B-mode and RF data, resulting in more efficient convergence and enhanced
performance. Finally, we publicly release our dataset, comprising over
180,000 aberrated single plane-wave images (RF data), wherein phase
aberrations are modeled as near-field phase screens. Although not
utilized in the proposed method, each aberrated image is paired with its
corresponding aberration profile and the non-aberrated version, aiming
to mitigate the data scarcity problem in developing deep learning-based
techniques for phase aberration correction. Source code and trained
model are also available along with the dataset at
http://code.sonography.ai/main-aaa.
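A sketch (not the authors' exact formulation) of a mixed RF/B-mode loss:
the B-mode term compares log-compressed envelopes computed from the
analytic signal of the RF data; the weighting and dynamic range below are
placeholders.

import torch
import torch.nn.functional as F

def analytic_envelope(rf):
    # rf: (B, C, depth) real RF lines; Hilbert transform via the FFT
    n = rf.shape[-1]
    spec = torch.fft.fft(rf, dim=-1)
    h = torch.zeros(n, device=rf.device)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return torch.fft.ifft(spec * h, dim=-1).abs()

def mixed_loss(pred_rf, target_rf, alpha=0.5, dynamic_range=60.0):
    rf_term = F.mse_loss(pred_rf, target_rf)
    def bmode(rf):
        env = analytic_envelope(rf)
        env = env / (env.amax(dim=-1, keepdim=True) + 1e-8)
        return (20.0 * torch.log10(env + 1e-8)).clamp(min=-dynamic_range)
    bmode_term = F.l1_loss(bmode(pred_rf), bmode(target_rf))
    return alpha * rf_term + (1 - alpha) * bmode_term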
OI Rivaz, Hassan/0000-0001-5800-3034; Sharifzadeh,
Mostafa/0000-0001-7625-6297; Goudarzi, Sobhan/0000-0002-0306-8946
Z8 0
TC 0
ZS 0
ZB 0
ZA 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-07-06
UT MEDLINE:38959140
PM 38959140
ER
PT J
AU Huang, Junzhang
Zhu, Xiongfeng
Chen, Ziyang
Lin, Guoye
Huang, Meiyan
Feng, Qianjin
TI Pathological Priors Inspired Network for Vertebral Osteophytes
Recognition
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2522
EP 2536
DI 10.1109/TMI.2024.3367868
DT Article
PD JUL 2024
PY 2024
AB Automatic vertebral osteophyte recognition in Digital Radiography is of
great importance for the early prediction of degenerative disease but is
still a challenge because of the tiny size and high inter-class
similarity between normal and osteophyte vertebrae. Meanwhile, common
sampling strategies applied in Convolutional Neural Networks can cause
loss of detailed context. All of these can lead to an incorrect
positioning predicament. In this paper, based on important pathological
priors, we define a set of potential lesions of each vertebra and
propose a novel Pathological Priors Inspired Network (PPIN) to achieve
accurate osteophyte recognition. PPIN comprises a backbone feature
extractor integrating with a Wavelet Transform Sampling module for
high-frequency detailed context extraction, a detection branch for
locating all potential lesions and a classification branch for producing
final osteophyte recognition. The Anatomical Map-guided Filter between
two branches helps the network focus on the specific anatomical regions
via the generated heatmaps of potential lesions in the detection branch
to address the incorrect positioning problem. To reduce the inter-class
similarity, a Bilateral Augmentation Module based on the graph
relationship is proposed to imitate the clinical diagnosis process and
to extract discriminative contextual information between adjacent
vertebrae in the classification branch. Experiments on the two
osteophytes-specific datasets collected from the public VinDr-Spine
database show that the proposed PPIN achieves the best recognition
performance among multitask frameworks and shows strong generalization.
The results on a private dataset demonstrate the potential in clinical
application. The Class Activation Maps also show the powerful
localization capability of PPIN. The source codes are available in
https://github.com/Phalo/PPIN.
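A sketch of wavelet-transform downsampling as an alternative to strided
convolution or pooling, in the spirit of the Wavelet Transform Sampling
module named above; the Haar basis and channel stacking are assumptions,
as the module's exact design is not given here.

import numpy as np
import pywt

def wavelet_downsample(feature_map):
    # feature_map: (C, H, W); returns (4C, H/2, W/2), keeping the
    # high-frequency detail bands that plain pooling would discard
    out = []
    for ch in feature_map:
        ll, (lh, hl, hh) = pywt.dwt2(ch, "haar")
        out.extend([ll, lh, hl, hh])
    return np.stack(out, axis=0)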
RI Huang, Junzhang/KIH-5946-2024; Feng, Qianjin/Y-5438-2019; Lin, Guoye/; Huang, Junzhang/; Huang, Meiyan/
OI Lin, Guoye/0000-0002-6766-0201; Huang, Junzhang/0009-0000-1017-7297;
Huang, Meiyan/0000-0003-2138-9580
ZA 0
ZS 0
ZR 0
ZB 0
Z8 0
TC 0
Z9 0
C1 Southern Med Univ, Sch Biomed Engn, Guangdong Prov Key Lab Med Image
Proc, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou
510515, Peoples R China
C1 Sun Yat Sen Univ, Affiliated Hosp 5, Dept Spinal Surg, Zhuhai 519000,
Guangdong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100001
PM 38386579
ER
PT J
AU Agarwal, Saurabh
Arya, K V
Meena, Yogesh Kumar
TI CNN-O-ELMNet: Optimized Lightweight and Generalized Model for Lung
Disease Classification and Severity Assessment.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3416744
DT Journal Article
PD 2024-Jun-19
PY 2024
AB The high burden of lung diseases on healthcare necessitates effective
detection methods. Current Computer-aided design (CAD) systems are
limited by their focus on specific diseases and computationally
demanding deep learning models. To overcome these challenges, we
introduce CNN-O-ELMNet, a lightweight classification model designed to
efficiently detect various lung diseases, surpassing the limitations of
disease-specific CAD systems and the complexity of deep learning models.
This model combines a convolutional neural network for deep feature
extraction with an optimized extreme learning machine, utilizing the
imperialistic competitive algorithm for enhanced predictions. We then
evaluated the effectiveness of CNN-O-ELMNet using benchmark datasets for
lung diseases: distinguishing pneumothorax vs. non-pneumothorax,
tuberculosis vs. normal, and lung cancer vs. healthy cases. Our findings
demonstrate that CNN-O-ELMNet significantly outperformed (p < 0.05)
state-of-the-art methods in binary classifications for tuberculosis and
cancer, achieving accuracies of 97.85% and 97.70%, respectively, while
maintaining low computational complexity with only 2481 trainable
parameters. We also extended the model to categorize lung disease
severity based on Brixia scores. Achieving 96.20% accuracy in
multi-class assessment of mild, moderate, and severe cases makes the
model suitable for deployment in lightweight healthcare devices.
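A sketch of an extreme learning machine head over fixed CNN features: the
hidden weights stay random and only the output weights are solved in
closed form. The imperialistic competitive optimization step from the
abstract is omitted, and the hidden size and regularizer are assumptions.

import numpy as np

class ELMHead:
    def __init__(self, in_dim, hidden=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, hidden))  # fixed random weights
        self.b = rng.standard_normal(hidden)
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y_onehot, reg=1e-3):
        # ridge-regularized least squares for the output weights
        H = self._hidden(X)
        self.beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]),
                                    H.T @ y_onehot)

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)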
OI Meena, Yogesh Kumar/0000-0003-3429-8426; AGARWAL,
SAURABH/0009-0003-1379-0371
TC 0
ZA 0
Z8 0
ZR 0
ZS 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-06-22
UT MEDLINE:38896522
PM 38896522
ER
PT J
AU Teng, Yingzhi
Wu, Kai
Liu, Jing
Li, Yifan
Teng, Xiangyi
TI Constructing High-order Functional Connectivity Networks with Temporal
Information from fMRI Data.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3412399
DT Journal Article
PD 2024-Jun-11
PY 2024
AB Conducting functional connectivity analysis on functional magnetic
resonance imaging (fMRI) data presents a significant and intricate
challenge. Contemporary studies typically analyze fMRI data by
constructing high-order functional connectivity networks (FCNs) due to
their strong interpretability. However, these approaches often overlook
temporal information, resulting in suboptimal accuracy. Temporal
information plays a vital role in reflecting changes in blood
oxygenation level-dependent signals. To address this shortcoming, we
have devised a framework for extracting temporal dependencies from fMRI
data and inferring high-order functional connectivity among regions of
interest (ROIs). Our approach postulates that the current state can be
determined by the FCN and the state at the previous time point, effectively
capturing temporal dependencies. Furthermore, we enhance FCN by
incorporating high-order features through hypergraph-based manifold
regularization. Our algorithm involves causal modeling of the dynamic
brain system, and the obtained directed FC reveals differences in the
flow of information under different patterns. We have validated the
significance of integrating temporal information into FCN using four
real-world fMRI datasets. On average, our framework achieves 12% higher
accuracy than non-temporal hypergraph-based and low-order FCNs, all
while maintaining a short processing time. Notably, our framework
successfully identifies the most discriminative ROIs, aligning with
previous research, thereby facilitating cognitive and behavioral
studies.
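The stated temporal assumption, that the current state is determined by
the FCN and the previous state, can be sketched as a linear state model
fit by regularized least squares; the hypergraph-based regularization is
omitted and the ridge weight is an assumption.

import numpy as np

def fit_temporal_fcn(bold, reg=1e-2):
    # bold: (T, R) ROI time series; solve X_t ~= X_{t-1} @ W for W
    X_prev, X_next = bold[:-1], bold[1:]
    r = bold.shape[1]
    W = np.linalg.solve(X_prev.T @ X_prev + reg * np.eye(r),
                        X_prev.T @ X_next)
    return W   # directed, asymmetric functional connectivity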
RI Wu, Kai/S-3120-2017
OI Wu, Kai/0000-0002-1852-6364
TC 0
ZS 0
Z8 0
ZA 0
ZB 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-06-13
UT MEDLINE:38861435
PM 38861435
ER
PT J
AU Li, Zihan
Zheng, Yuan
Shan, Dandan
Yang, Shuzhou
Li, Qingde
Wang, Beizhan
Zhang, Yuanting
Hong, Qingqi
Shen, Dinggang
TI ScribFormer: Transformer Makes CNN Work Better for Scribble-Based
Medical Image Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2254
EP 2265
DI 10.1109/TMI.2024.3363190
DT Article
PD JUN 2024
PY 2024
AB Most recent scribble-supervised segmentation methods commonly adopt a
CNN framework with an encoder-decoder architecture. Despite its multiple
benefits, this framework can generally capture only short-range feature
dependencies, since convolutional layers have local receptive fields,
which makes it difficult to learn global shape information from the
limited information provided by scribble annotations. To address this
issue, this paper proposes a new CNN-Transformer hybrid solution for
scribble-supervised medical image segmentation called ScribFormer. The
proposed ScribFormer model has a triple-branch structure, i.e., the
hybrid of a CNN branch, a Transformer branch, and an attention-guided
class activation map (ACAM) branch. Specifically, the CNN branch
collaborates with the Transformer branch to fuse the local features
learned from CNN with the global representations obtained from
Transformer, which can effectively overcome limitations of existing
scribble-supervised segmentation methods. Furthermore, the ACAM branch
assists in unifying the shallow convolution features and the deep
convolution features to further improve the model's performance. Extensive
experiments on two public datasets and one private dataset show that our
ScribFormer has superior performance over the state-of-the-art
scribble-supervised segmentation methods, and achieves even better
results than the fully-supervised segmentation methods. The code is
released at https://github.com/HUANGLIZI/ScribFormer.
RI Li, Zihan/HNC-4983-2023; Zhang, Yuan-Ting/F-1759-2013; YANG, SHUZHOU/; Li, Zihan/; ZHANG, Yuanting/; Li, Qingde/
OI YANG, SHUZHOU/0009-0001-3632-9065; Li, Zihan/0009-0004-3839-0611; ZHANG,
Yuanting/0000-0003-4150-5470; Li, Qingde/0000-0001-5998-7565
ZS 0
ZR 0
ZA 0
Z8 0
ZB 0
TC 7
Z9 7
C1 Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
C1 Univ Washington, Dept Bioengn, Seattle, WA 98195 USA
C1 Xiamen Univ, Inst Artificial Intelligence, Xiamen 361005, Peoples R
China
C1 Peking Univ, Sch Elect & Comp Engn, Shenzhen 518055, Peoples R China
C1 Univ Hull, Sch Comp Sci, Kingston Upon Hull HU6 7RX, England
C1 Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
C1 Hong Kong Inst Med Engn, Hong Kong, Peoples R China
C1 Xiamen Univ, Sch Informat, Dept Digital Media Technol, Xiamen 361005,
Peoples R China
C1 Xiamen Univ, Sch Informat, Xiamen Key Lab Intelligent Storage & Comp,
Xiamen 361005, Peoples R China
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 ShanghaiTech Univ, State Key Lab Adv Med Mat & Devices, Shanghai 201210,
Peoples R China
C1 Shanghai United Imaging Intelligence Co Ltd, Shanghai 200232, Peoples R
China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 200231, Peoples R China
C3 Hong Kong Inst Med Engn
C3 Shanghai United Imaging Intelligence Co Ltd
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100026
PM 38324425
ER
PT J
AU Dong, Xiuyu
Yang, Kaifan
Liu, Jinyu
Tang, Fan
Liao, Wenjun
Zhang, Yu
Liang, Shujun
TI Cross-Domain Mutual-Assistance Learning Framework for Fully Automated
Diagnosis of Primary Tumor in Nasopharyngeal Carcinoma.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3400406
DT Journal Article
PD 2024-May-13
PY 2024
AB Accurate T-staging of nasopharyngeal carcinoma (NPC) holds paramount
importance in guiding treatment decisions and prognosticating outcomes
for distinct risk groups. Regrettably, the landscape of deep
learning-based techniques for T-staging in NPC remains sparse, and
existing methodologies often exhibit suboptimal performance due to their
neglect of crucial domain-specific knowledge pertinent to primary tumor
diagnosis. To address these issues, we propose a new cross-domain
mutual-assistance learning framework for fully automated diagnosis of
primary tumor using H&N MR images. Specifically, we tackle primary tumor
diagnosis task with the convolutional neural network consisting of a 3D
cross-domain knowledge perception network (CKP net) for excavated
cross-domain-invariant features emphasizing tumor intensity variations
and internal tumor heterogeneity, and a multi-domain mutual-information
sharing fusion network (M2SF net), comprising a dual-pathway
domain-specific representation module and a mutual information fusion
module, for intelligently gauging and amalgamating multi-domain,
multi-scale T-stage diagnosis-oriented features. The proposed 3D
cross-domain mutual-assistance learning framework not only embraces
task-specific multi-domain diagnostic knowledge but also automates the
entire process of primary tumor diagnosis. We evaluate our model on an
internal and an external MR images dataset in a three-fold
cross-validation paradigm. Exhaustive experimental results demonstrate
that our method outperforms the state-of-the-art algorithms, and obtains
promising performance for tumor segmentation and T-staging. These
findings underscore its potential for clinical application, offering
valuable assistance to clinicians in treatment decision-making and
prognostication for various risk groups.
ZA 0
ZR 0
Z8 0
TC 0
ZS 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-05-16
UT MEDLINE:38739507
PM 38739507
ER
PT J
AU Li, Zimeng
Xiao, Sa
Wang, Cheng
Li, Haidong
Zhao, Xiuchao
Duan, Caohui
Zhou, Qian
Rao, Qiuchen
Fang, Yuan
Xie, Junshuai
Shi, Lei
Guo, Fumin
Ye, Chaohui
Zhou, Xin
TI Encoding Enhanced Complex CNN for Accurate and Highly Accelerated MRI
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1828
EP 1840
DI 10.1109/TMI.2024.3351211
DT Article
PD MAY 2024
PY 2024
AB Magnetic resonance imaging (MRI) using hyperpolarized noble gases
provides a way to visualize the structure and function of human lung,
but the long imaging time limits its broad research and clinical
applications. Deep learning has demonstrated great potential for
accelerating MRI by reconstructing images from undersampled data.
However, most existing deep convolutional neural networks (CNN) directly
apply square convolution to k-space data without considering the
inherent properties of k-space sampling, limiting k-space learning
efficiency and image reconstruction quality. In this work, we propose an
encoding enhanced (EN2) complex CNN for highly undersampled pulmonary
MRI reconstruction. EN2 complex CNN employs convolution along either the
frequency or phase-encoding direction, resembling the mechanisms of
k-space sampling, to maximize the utilization of the encoding
correlation and integrity within a row or column of k-space. We also
employ complex convolution to learn rich representations from the
complex k-space data. In addition, we develop a feature-strengthened
modularized unit to further boost the reconstruction performance.
Experiments demonstrate that our approach can accurately reconstruct
hyperpolarized Xe-129 and H-1 lung MRI from 6-fold undersampled k-space
data and provide lung function measurements with minimal biases compared
with fully sampled images. These results demonstrate the effectiveness
of the proposed algorithmic components and indicate that the proposed
approach could be used for accelerated pulmonary MRI in research and
clinical lung disease patient care.
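A sketch of a complex-valued convolution restricted to a single k-space
encoding direction, as the abstract describes; the kernel size, channel
counts, and module name are placeholders.

import torch
import torch.nn as nn

class EncodingComplexConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=5, dim="freq"):
        super().__init__()
        kernel = (1, k) if dim == "freq" else (k, 1)
        pad = (0, k // 2) if dim == "freq" else (k // 2, 0)
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel, padding=pad)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel, padding=pad)

    def forward(self, x_r, x_i):
        # complex convolution: (a + ib) * (w_r + i w_i)
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_r(x_i) + self.conv_i(x_r)
        return y_r, y_i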
RI Zhou, Xin/D-9987-2016; Duan, Caohui/; Li, zimeng/
OI Duan, Caohui/0000-0002-3470-5602; Li, zimeng/0000-0003-2798-3134
ZS 0
ZB 1
Z8 0
TC 1
ZA 0
ZR 0
Z9 1
C1 Chinese Acad Sci, Innovat Acad Precis Measurement Sci & Technol, State
Key Lab Magnet Resonance & Atom & Mol Phys, Wuhan Inst Phys & Math,Natl
Ctr Magnet Resonance W, Wuhan 430071, Peoples R China
C1 Huazhong Univ Sci & Technol, Sch Phys, Wuhan 430074, Peoples R China
C1 Univ Chinese Acad Sci, Beijing 100049, Peoples R China
C1 Huazhong Univ Sci & Technol, Dept Biomed Engn, Wuhan Natl Lab Optoelect,
Wuhan 430074, Peoples R China
C1 Hainan Univ, Sch Biomed Engn, Key Lab Biomed Engn Hainan Prov, Haikou
570228, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800030
PM 38194397
ER
PT J
AU Tay, Zhiwei
Kim, Han-Joon
Ho, John S.
Olivo, Malini
TI A Magnetic Particle Imaging Approach for Minimally Invasive Imaging and
Sensing With Implantable Bioelectronic Circuits
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1740
EP 1752
DI 10.1109/TMI.2023.3348149
DT Article
PD MAY 2024
PY 2024
AB Minimally-invasive and biocompatible implantable bioelectronic circuits
are used for long-term monitoring of physiological processes in the
body. However, there is a lack of methods that can cheaply and
conveniently image the device within the body while simultaneously
extracting sensor information. Magnetic Particle Imaging (MPI), with
zero background signal, high contrast, high sensitivity, and
quantitative images, is ideal for this challenge because the magnetic
signal is not
absorbed with increasing tissue depth and incurs no radiation dose. We
show how to easily modify common implantable devices to be imaged by MPI
by encapsulating and magnetically-coupling magnetic nanoparticles
(SPIOs) to the device circuit. These modified implantable devices not
only provide spatial information via MPI, but also couple to our
handheld MPI reader to transmit sensor information by modulating
harmonic signals from magnetic nanoparticles via switching or
frequency-shifting with resistive or capacitive sensors. This paper
provides proof-of-concept of an optimized MPI imaging technique for
implantable devices to extract spatial information as well as other
information transmitted by the implanted circuit (such as biosensing)
via encoding in the magnetic particle spectrum. The 4D images present 3D
position and a changing color tone in response to a variable biometric.
Biophysical sensing via bioelectronic circuits that take advantage of
the unique imaging properties of MPI may enable a wide range of
minimally invasive applications in biomedicine and diagnosis.
RI Kim, Han-Joon/Q-8674-2017; Tay, Zhi Wei/IWV-1560-2023
OI Kim, Han-Joon/0000-0002-7681-0486; Tay, Zhi Wei/0000-0002-2717-0515
ZR 0
TC 0
ZS 0
ZA 0
Z8 0
ZB 0
Z9 0
C1 ASTAR, Inst Bioengn & Bioimaging IBB, Singapore 138669, Singapore
C1 Natl Inst Adv Ind Sci & Technol, Tsukuba, Ibaraki 3058564, Japan
C1 Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
C1 Kumoh Natl Inst Technol, Dept Med IT Convergence, Gumi 39253, South
Korea
C1 Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117599, Singapore
C1 Natl Univ Singapore, Inst Hlth Technol, Singapore 117599, Singapore
C1 ASTAR, ASTAR Skin Res Labs ASRL, Singapore 138669, Singapore
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800014
PM 38157469
ER
PT J
AU Fu, Minghan
Zhang, Na
Huang, Zhenxing
Zhou, Chao
Zhang, Xu
Yuan, Jianmin
He, Qiang
Yang, Yongfeng
Zheng, Hairong
Liang, Dong
Wu, Fang-Xiang
Fan, Wei
Hu, Zhanli
TI OIF-Net: An Optical Flow Registration-Based PET/MR Cross-Modal
Interactive Fusion Network for Low-Count Brain PET Image Denoising
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1554
EP 1567
DI 10.1109/TMI.2023.3342809
DT Article
PD APR 2024
PY 2024
AB The short frames of low-count positron emission tomography (PET) images
generally cause high levels of statistical noise. Thus, improving the
quality of low-count images by using image postprocessing algorithms to
achieve better clinical diagnoses has attracted widespread attention in
the medical imaging community. Most existing deep learning-based
low-count PET image enhancement methods have achieved satisfying
results, however, few of them focus on denoising low-count PET images
with the magnetic resonance (MR) image modality as guidance. The prior
context features contained in MR images can provide abundant and
complementary information for single low-count PET image denoising,
especially in ultralow-count (2.5%) cases. To this end, we propose a
novel two-stream dual PET/MR cross-modal interactive fusion network with
an optical flow pre-alignment module, namely, OIF-Net. Specifically, the
learnable optical flow registration module enables the spatial
manipulation of MR imaging inputs within the network without any extra
training supervision. Registered MR images fundamentally solve the
problem of feature misalignment in the multimodal fusion stage, which
greatly benefits the subsequent denoising process. In addition, we
design a spatial-channel feature enhancement module (SC-FEM) that
considers the interactive impacts of multiple modalities and provides
additional information flexibility in both the spatial and channel
dimensions. Furthermore, instead of simply concatenating two extracted
features from these two modalities as an intermediate fusion method, the
proposed cross-modal feature fusion module (CM-FFM) adopts
cross-attention at multiple feature levels and greatly improves the two
modalities' feature fusion procedure. Extensive experimental assessments
conducted on real clinical datasets, as well as an independent clinical
testing dataset, demonstrate that the proposed OIF-Net outperforms the
state-of-the-art methods.
RI Yang, Yongfeng/I-7680-2016; huang, zhenxing/; He, Qiang/; Liang, Dong/A-3335-2011; Fu, Minghan/
OI huang, zhenxing/0000-0001-5012-1992; He, Qiang/0000-0001-8485-4326;
Liang, Dong/0000-0001-6257-0875; Fu, Minghan/0000-0002-2224-7923
ZS 0
ZR 0
Z8 0
ZA 0
ZB 0
TC 1
Z9 1
C1 Chinese Acad Sci, Shenzhen Inst Adv Technol, Paul C Lauterbur Res Ctr
Biomed Imaging, Shenzhen 518055, Peoples R China
C1 Chinese Acad Sci, Key Lab Biomed Imaging Sci & Syst, Shenzhen 518055,
Peoples R China
C1 Sun Yat sen Univ, Dept Nucl Med, Canc Ctr, Guangzhou 510060, Peoples R
China
C1 Sun Yat Sen Univ, Canc Ctr, Dept Nucl Med, Guangzhou, Peoples R China
C1 United Imaging Healthcare Grp, Cent Res Inst, Shanghai 201807, Peoples R
China
C1 Univ Saskatchewan, Dept Mech Engn, Dept Comp Sci, Saskatoon, SK S7N 5A9,
Canada
C1 Univ Saskatchewan, Div Biomed Engn, Saskatoon, SK S7N 5A9, Canada
C3 United Imaging Healthcare Grp
SN 0278-0062
EI 1558-254X
DA 2024-07-06
UT WOS:001196733400016
PM 38096101
ER
PT J
AU Yin, Ziying
Li, Guo-Yang
Zhang, Zhaoyi
Zheng, Yang
Cao, Yanping
TI SWENet: A Physics-Informed Deep Neural Network (PINN) for Shear Wave
Elastography
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1434
EP 1448
DI 10.1109/TMI.2023.3338178
DT Article
PD APR 2024
PY 2024
AB Shear wave elastography (SWE) enables the measurement of elastic
properties of soft materials in a non-invasive manner and finds broad
applications in various disciplines. The state-of-the-art SWE methods
rely on the measurement of local shear wave speeds to infer material
parameters and suffer from wave diffraction when applied to soft
materials with strong heterogeneity. In the present study, we overcome
this challenge by proposing a physics-informed neural network
(PINN)-based SWE (SWENet) method. The spatial variation of elastic
properties of inhomogeneous materials has been introduced in the
governing equations, which are encoded in SWENet as loss functions.
Snapshots of wave motions have been used to train neural networks, and
during this course, the elastic properties within a region of interest
illuminated by shear waves are inferred simultaneously. We performed
finite element simulations, tissue-mimicking phantom experiments, and ex
vivo experiments to validate the method. Our results show that the shear
moduli of soft composites consisting of matrix and inclusions of several
millimeters in cross-section dimensions with either regular or irregular
geometries can be identified with excellent accuracy. The advantages of
the SWENet over conventional SWE methods consist of using more features
of the wave motions and enabling seamless integration of multi-source
data in the inverse analysis. Given the advantages of SWENet, it may
find broad applications where full wave fields are involved in inferring
heterogeneous mechanical properties, such as identifying small solid
tumors with ultrasound SWE, and differentiating gray and white matters
of the brain with magnetic resonance elastography.
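A sketch of the physics-informed loss idea for SWE: penalize the residual
of a scalar elastic wave equation rho * u_tt = d/dx(mu u_x) + d/dy(mu u_y),
with mu(x, y) predicted by a network. This simplified 2D form is an
assumption; the paper's governing equations may differ.

import torch

def pde_residual(coords, u_net, mu_net, rho=1000.0):
    # coords: (N, 3) columns (x, y, t), created with requires_grad=True
    u = u_net(coords)
    grads = torch.autograd.grad(u.sum(), coords, create_graph=True)[0]
    u_x, u_y, u_t = grads[:, 0], grads[:, 1], grads[:, 2]
    u_tt = torch.autograd.grad(u_t.sum(), coords, create_graph=True)[0][:, 2]
    mu = mu_net(coords[:, :2]).squeeze()       # spatially varying modulus
    fx = torch.autograd.grad((mu * u_x).sum(), coords,
                             create_graph=True)[0][:, 0]
    fy = torch.autograd.grad((mu * u_y).sum(), coords,
                             create_graph=True)[0][:, 1]
    return rho * u_tt - fx - fy                # driven toward zero in training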
RI cao, yanping/E-3381-2010; Li, Guo-Yang/Q-5391-2016; Yin, Ziying/; Zheng, Yang/W-4628-2018; Zhang, Zhaoyi/
OI Li, Guo-Yang/0000-0002-5939-9293; Yin, Ziying/0000-0003-3376-3103;
Zheng, Yang/0000-0003-3321-9677; Zhang, Zhaoyi/0000-0001-9191-3451
ZA 0
ZB 0
ZS 0
Z8 0
TC 1
ZR 0
Z9 1
C1 Tsinghua Univ, Inst Biomech & Med Engn, Dept Engn Mech, AML, Beijing
100084, Peoples R China
C1 Massachusetts Gen Hosp, Harvard Med Sch, Boston, MA 02139 USA
C1 Massachusetts Gen Hosp, Wellman Ctr Photomed, Boston, MA 02139 USA
C1 Peking Univ, Coll Engn, Dept Mech & Engn Sci, Beijing 100871, Peoples R
China
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400028
PM 38032772
ER
PT J
AU Ji, Wen
Chung, Albert C. S.
TI Unsupervised Domain Adaptation for Medical Image Segmentation Using
Transformer With Meta Attention
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 820
EP 831
DI 10.1109/TMI.2023.3322581
DT Article
PD FEB 2024
PY 2024
AB Image segmentation is essential to medical image analysis as it provides
the labeled regions of interest for the subsequent diagnosis and
treatment. However, fully-supervised segmentation methods require
high-quality annotations produced by experts, which is laborious and
expensive. In addition, when performing segmentation on another
unlabeled image modality, the segmentation performance will be adversely
affected due to the domain shift. Unsupervised domain adaptation (UDA)
is an effective way to tackle these problems, but the performance of
existing methods still leaves room for improvement. Also, despite the
effectiveness of recent Transformer-based methods in medical image
segmentation, the adaptability of Transformers is rarely investigated.
In this paper, we present a novel UDA framework using a Transformer for
building a cross-modality segmentation method with the advantages of
learning long-range dependencies and transferring attentive information.
To fully utilize the attention learned by the Transformer in UDA, we
propose Meta Attention (MA) and use it to perform a fully
attention-based alignment scheme, which can learn the hierarchical
consistencies of attention and transfer more discriminative information
between two modalities. We have conducted extensive experiments on
cross-modality segmentation using three datasets, including a whole
heart segmentation dataset (MMWHS), an abdominal organ segmentation
dataset, and a brain tumor segmentation dataset. The promising results
show that our method can significantly improve performance compared with
the state-of-the-art UDA methods.
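The abstract describes a fully attention-based alignment between
modalities; the precise Meta Attention formulation is in the paper
itself. Below is a hedged sketch of a generic hierarchical
attention-alignment loss of that general kind; the function name, the
head-averaging, and the MSE distance are assumptions, not the published
MA definition.

import torch.nn.functional as F

def attention_alignment_loss(src_attn_maps, tgt_attn_maps):
    # src_attn_maps / tgt_attn_maps: one tensor per Transformer layer,
    # each (batch, heads, tokens, tokens), softmax-normalized rows.
    loss = 0.0
    for a_s, a_t in zip(src_attn_maps, tgt_attn_maps):
        # Head-averaging keeps the comparison invariant to head
        # permutation (an assumption, not necessarily the paper's choice).
        loss = loss + F.mse_loss(a_s.mean(dim=1), a_t.mean(dim=1))
    return loss / len(src_attn_maps)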
RI Chung, Albert C. S./GZB-0224-2022; Ji, Wen/
OI Ji, Wen/0000-0002-0219-7586
ZA 0
Z8 0
ZB 0
TC 1
ZS 0
ZR 0
Z9 1
C1 Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Clear Water Bay,
Hong Kong, Peoples R China
C1 Univ Exeter, Dept Comp Sci, Exeter EX4 4SB, England
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400003
PM 37801381
ER
PT J
AU Pak, Daniel H.
Liu, Minliang
Kim, Theodore
Liang, Liang
Caballero, Andres
Onofrey, John
Ahn, Shawn S.
Xu, Yilin
McKay, Raymond
Sun, Wei
Gleason, Rudolph
Duncan, James S.
TI Patient-Specific Heart Geometry Modeling for Solid Biomechanics Using
Deep Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 203
EP 215
DI 10.1109/TMI.2023.3294128
DT Article
PD JAN 2024
PY 2024
AB Automated volumetric meshing of patient-specific heart geometry can help
expedite various biomechanics studies, such as post-intervention stress
estimation. Prior meshing techniques often neglect important modeling
characteristics for successful downstream analyses, especially for thin
structures like the valve leaflets. In this work, we present DeepCarve
(Deep Cardiac Volumetric Mesh): a novel deformation-based deep learning
method that automatically generates patient-specific volumetric meshes
with high spatial accuracy and element quality. The main novelty in our
method is the use of minimally sufficient surface mesh labels for
precise spatial accuracy and the simultaneous optimization of isotropic
and anisotropic deformation energies for volumetric mesh quality. Mesh
generation takes only 0.13 seconds/scan during inference, and each mesh
can be directly used for finite element analyses without any manual
post-processing. Calcification meshes can also be subsequently
incorporated for increased simulation accuracy. Numerous stent
deployment simulations validate the viability of our approach for
large-batch analyses.
RI Onofrey, John/T-9841-2019; Liu, Minliang/ABE-1950-2020; Caballero, Andrés/AAQ-7567-2020; Xu, Yilin/AAB-5090-2022; Liu, Minliang/; Pak, Daniel H./; Ahn, Shawn/; Xu, Yilin/; Caballero, Andres/; Onofrey, John/
OI Liu, Minliang/0000-0001-6240-5116; Pak, Daniel H./0000-0003-3425-428X;
Ahn, Shawn/0000-0002-5961-3376; Xu, Yilin/0009-0002-5696-7475;
Caballero, Andres/0000-0001-8640-9672; Onofrey, John/0000-0002-9432-0448
Z8 0
ZB 0
ZA 0
ZR 0
ZS 0
TC 0
Z9 0
C1 Yale Univ, Dept Biomed Engn, New Haven, CT 06520 USA
C1 Yale Univ, Dept Comp Sci, New Haven, CT 06520 USA
C1 Yale Univ, Dept Radiol & Biomed Imaging, New Haven, CT 06520 USA
C1 Georgia Inst Technol, Dept Biomed Engn, Atlanta, GA 30332 USA
C1 Univ Miami, Dept Comp Sci, Coral Gables, FL 33146 USA
C1 Hartford Hosp, Intervent Cardiol, Hartford, CT 06102 USA
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600004
PM 37432807
ER
PT J
AU Xie, Qingsong
Li, Yuexiang
He, Nanjun
Ning, Munan
Ma, Kai
Wang, Guoxing
Lian, Yong
Zheng, Yefeng
TI Unsupervised Domain Adaptation for Medical Image Segmentation by
Disentanglement Learning and Self-Training
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 4
EP 14
DI 10.1109/TMI.2022.3192303
DT Article
PD JAN 2024
PY 2024
AB Unsupervised domain adaptation (UDA), which aims to enhance the
segmentation performance of deep models on unlabeled data, has recently
drawn much attention. In this paper, we propose a novel UDA method
(namely DLaST) for medical image segmentation via disentanglement
learning and self-training. Disentanglement learning factorizes an image
into domain-invariant anatomy and domain-specific modality components.
To make the best of disentanglement learning, we propose a novel shape
constraint to boost the adaptation performance. The self-training
strategy further adaptively improves the segmentation performance of the
model for the target domain through adversarial learning and pseudo
label, which implicitly facilitates feature alignment in the anatomy
space. Experimental results demonstrate that the proposed method
outperforms the state-of-the-art UDA methods for medical image
segmentation on three public datasets, i.e., a cardiac dataset, an
abdominal dataset and a brain dataset. The code will be released soon.
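To make the disentanglement step concrete, here is a minimal sketch of
factorizing an image into a domain-invariant anatomy map and a
domain-specific modality (style) code, then translating by swapping
codes; every architecture and name below is an illustrative assumption,
not the DLaST networks, and the shape constraint and self-training
stages are omitted.

import torch
import torch.nn as nn

anatomy_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 16, 3, padding=1))
modality_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1))   # global style code
decoder = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

def translate(x_src, x_tgt):
    # Re-render the source anatomy with the target modality's appearance.
    a = anatomy_enc(x_src)                        # domain-invariant content
    s = modality_enc(x_tgt)                       # domain-specific style
    s = s.expand(-1, -1, a.shape[2], a.shape[3])  # broadcast style spatially
    return decoder(torch.cat([a, s], dim=1))

x_mr = torch.randn(2, 1, 64, 64)
x_ct = torch.randn(2, 1, 64, 64)
fake_ct = translate(x_mr, x_ct)  # MR content, CT-like appearance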
RI Ma, kai/KSL-8338-2024; Lian, Yong/B-9859-2008; Li, Yuexiang/AAZ-2676-2020; Zheng, Yefeng/ABG-7053-2020; Xie, Qingsong/; Ma, Kai/; He, Nanjun/; Zheng, Yefeng/; Lian, Yong/
OI Xie, Qingsong/0000-0001-6974-2019; Ma, Kai/0000-0003-2805-3692; He,
Nanjun/0000-0003-3105-6499; Zheng, Yefeng/0000-0003-2195-2847; Lian,
Yong/0000-0002-5289-5219
TC 23
ZR 0
ZS 0
Z8 2
ZA 0
ZB 4
Z9 25
C1 Shanghai Jiao Tong Univ, Dept Micronano Elect, Shanghai 200240, Peoples
R China
C1 Guangxi Med Univ, Med AI Res MARS Grp, Nanning 530021, Peoples R China
C1 Tencent Jarvis Lab, Shenzhen 518000, Peoples R China
C3 Tencent Jarvis Lab
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600044
PM 35853072
ER
PT J
AU Zhang, Jiaojiao
Zhang, Shuo
Shen, Xiaoqian
Lukasiewicz, Thomas
Xu, Zhenghua
TI Multi-ConDoS: Multimodal Contrastive Domain Sharing Generative
Adversarial Networks for Self-Supervised Medical Image Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 76
EP 95
DI 10.1109/TMI.2023.3290356
DT Article
PD JAN 2024
PY 2024
AB Existing self-supervised medical image segmentation usually encounters
the domain shift problem (i.e., the input distribution of pre-training
is different from that of fine-tuning) and/or the multimodality problem
(i.e., it is based on single-modal data only and cannot utilize the
fruitful multimodal information of medical images). To solve these
problems, in this work, we propose multimodal contrastive domain sharing
(Multi-ConDoS) generative adversarial networks to achieve effective
multimodal contrastive self-supervised medical image segmentation.
Compared to the existing self-supervised approaches, Multi-ConDoS has
the following three advantages: (i) it utilizes multimodal medical
images to learn more comprehensive object features via multimodal
contrastive learning; (ii) domain translation is achieved by integrating
the cyclic learning strategy of CycleGAN and the cross-domain
translation loss of Pix2Pix; (iii) novel domain sharing layers are
introduced to learn not only domain-specific but also domain-sharing
information from the multimodal medical images. Extensive experiments on
two publicly multimodal medical image segmentation datasets show that,
with only 5% (resp., 10%) of labeled data, Multi-ConDoS not only greatly
outperforms the state-of-the-art self-supervised and semi-supervised
medical image segmentation baselines with the same ratio of labeled
data, but also achieves similar (sometimes even better) performance to
fully supervised segmentation methods with 50% (resp., 100%) of labeled
data, which proves that our approach can achieve superior segmentation
performance with a very low labeling workload. Furthermore, ablation
studies prove that the above three improvements are all effective and
essential for Multi-ConDoS to achieve this superior performance.
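As an illustration of the multimodal contrastive learning the abstract
refers to, the following is a standard symmetrized InfoNCE loss over
paired modality embeddings; the temperature and the use of in-batch
negatives are common defaults assumed here, not necessarily the paper's
exact objective.

import torch
import torch.nn.functional as F

def multimodal_info_nce(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of the same cases in two modalities;
    # row i of z1 and row i of z2 form a positive pair, all others negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetrize: modality 1 -> 2 and modality 2 -> 1.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))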
RI Zhang, Jiaojiao/AAN-1038-2020; Shen, Xiaoqian/
OI Shen, Xiaoqian/0000-0001-6284-520X
TC 10
ZA 0
ZB 4
ZS 0
ZR 0
Z8 0
Z9 10
C1 Hebei Univ Technol, Sch Hlth Sci & Biomed Engn, State Key Lab Reliabil &
Intelligence Elect Equipm, Tianjin 300130, Peoples R China
C1 Hebei Univ Technol, Sch Hlth Sci & Biomed Engn, Tianjin Key Lab
Bioelectromagnet Technol & Intelli, Tianjin 300130, Peoples R China
C1 King Abdullah Univ Sci & Technol, Dept Comp Sci, Jedda 23955, Saudi
Arabia
C1 TU Wien, Inst Log & Computat, A-1040 Vienna, Austria
C1 Univ Oxford, Dept Comp Sci, Oxford OX1 3AZ, England
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600014
PM 37379176
ER
PT J
AU Zhang, Qiyang
Hu, Yingying
Zhao, Yumo
Cheng, Jing
Fan, Wei
Hu, Debin
Shi, Fuxiao
Cao, Shuangliang
Zhou, Yun
Yang, Yongfeng
Liu, Xin
Zheng, Hairong
Liang, Dong
Hu, Zhanli
TI Deep Generalized Learning Model for PET Image Reconstruction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 122
EP 134
DI 10.1109/TMI.2023.3293836
DT Article
PD JAN 2024
PY 2024
AB Low-count positron emission tomography (PET) imaging is challenging
because of the ill-posedness of this inverse problem. Previous studies
have demonstrated that deep learning (DL) holds promise for achieving
improved low-count PET image quality. However, almost all data-driven DL
methods suffer from fine structure degradation and blurring effects
after denoising. Incorporating DL into the traditional iterative
optimization model can effectively improve its image quality and recover
fine structures, but little research has considered the full relaxation
of the model, resulting in the performance of this hybrid model not
being sufficiently exploited. In this paper, we propose a learning
framework that deeply integrates DL and an alternating direction method
of multipliers (ADMM)-based iterative optimization model. The
innovative feature of this method is that we break the inherent forms of
the fidelity operators and use neural networks to process them. The
regularization term is deeply generalized. The proposed method is
evaluated on simulated data and real data. Both the qualitative and
quantitative results show that our proposed neural network method can
outperform partial operator expansion-based neural network methods,
neural network denoising methods and traditional methods.
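For orientation, the sketch below unrolls a generic ADMM loop in which
the regularizer's proximal step is a small CNN; the paper goes further
by also processing the fidelity operators with networks, which is not
reproduced here. A and At stand in for the PET system model and its
adjoint, and all names, the quadratic data term, and the step sizes are
assumptions.

import torch
import torch.nn as nn

class UnrolledADMM(nn.Module):
    # Sketch of x = argmin f(Ax, y) + R(x), unrolled for a fixed number
    # of ADMM iterations with a learned proximal operator for R.
    def __init__(self, A, At, iters=5):
        super().__init__()
        self.A, self.At, self.iters = A, At, iters
        self.prox = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 1, 3, padding=1))
        self.step = nn.Parameter(torch.full((iters,), 0.1))

    def forward(self, y, x0):
        x, z, u = x0, x0.clone(), torch.zeros_like(x0)
        for k in range(self.iters):
            # x-update: gradient step on a quadratic data-fidelity term
            # plus the augmented-Lagrangian coupling to z (penalty = 1).
            g = self.At(self.A(x) - y) + (x - z + u)
            x = x - self.step[k] * g
            z = self.prox(x + u)   # z-update: learned proximal mapping
            u = u + x - z          # scaled dual update
        return z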
RI Zhang, Yanbo/IQU-6845-2023; yang, yongfeng/JXY-8082-2024; Liang, Dong/A-3335-2011; qiyang, Zhang/; Cheng, Jing/
OI qiyang, Zhang/0000-0001-6185-3080; Cheng, Jing/0000-0001-9098-8048
TC 3
Z8 0
ZB 0
ZS 0
ZA 0
ZR 0
Z9 3
C1 Chinese Acad Sci, Lauterbur Res Ctr Biomed Imaging, Shenzhen Inst Adv
Technol, Shenzhen 518055, Peoples R China
C1 Natl Innovat Ctr High Performance Med Devices, Shenzhen 518055, Peoples
R China
C1 Sun Yat Sen Univ Canc Ctr, Dept Nucl Med, Guangzhou 510060, Peoples R
China
C1 Chinese Acad Sci, Lauterbur Res Ctr Biomed Imaging, Shenzhen Inst Adv
Technol, Shenzhen 518055, Peoples R China
C1 Cent Res Inst, United Imaging Healthcare Grp, Shanghai 201807, Peoples R
China
C3 Natl Innovat Ctr High Performance Med Devices
C3 Cent Res Inst
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600007
PM 37428658
ER
PT J
AU Zhang, Shengjie
Shen, Xin
Chen, Xiang
Yu, Ziqi
Ren, Bohan
Yang, Haibo
Zhang, Xiao-Yong
Zhou, Yuan
TI CQformer: Learning Dynamics Across Slices in Medical Image Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3477555
DT Journal Article
PD 2024-Oct-10
PY 2024
AB Prevalent studies on deep learning-based 3D medical image segmentation
capture the continuous variation across 2D slices mainly via
convolution, Transformer, inter-slice interaction, and time series
models. In this work, via modeling this variation by an ordinary
differential equation (ODE), we propose a cross instance query-guided
Transformer architecture (CQformer) that leverages features from
preceding slices to improve the segmentation performance of subsequent
slices. Its key components include a cross-attention mechanism in an ODE
formulation, which bridges the features of contiguous 2D slices of the
3D volumetric data. In addition, a regression head is employed to
shorten the gap between the bottleneck and the prediction layer.
Extensive experiments on 7 datasets with various modalities (CT, MRI)
and tasks (organ, tissue, and lesion) demonstrate that CQformer
outperforms previous state-of-the-art segmentation algorithms on 6
datasets by 0.44%-2.45%, and achieves the second highest performance of
88.30% on the BTCV dataset. The code will be publicly available after
acceptance.
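One way to read "modeling this variation by an ODE" is an explicit-Euler
update of slice features driven by cross-attention to the preceding
slice, sketched below; the module name, step size, and dimensions are
illustrative assumptions, not the published CQformer architecture.

import torch
import torch.nn as nn

class SliceODEStep(nn.Module):
    # One Euler step f_i <- f_i + h * CrossAttn(f_i, f_{i-1}), treating
    # the slice index as the ODE's 'time' variable.
    def __init__(self, dim=128, heads=4, h=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.h = h

    def forward(self, f_curr, f_prev):
        delta, _ = self.attn(query=f_curr, key=f_prev, value=f_prev)
        return f_curr + self.h * delta

step = SliceODEStep()
feats = [torch.randn(1, 256, 128) for _ in range(4)]  # 4 slices of tokens
for i in range(1, len(feats)):
    feats[i] = step(feats[i], feats[i - 1])  # propagate context across slices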
ZS 0
ZB 0
ZR 0
Z8 0
ZA 0
TC 0
Z9 0
EI 1558-254X
DA 2024-10-12
UT MEDLINE:39388328
PM 39388328
ER
PT J
AU Chen, Zhi
Liu, Yongguo
Zhang, Yun
Zhu, Jiajing
Li, Qiaoqin
Wu, Xindong
TI Enhanced Multimodal Low-rank Embedding based Feature Selection Model for
Multimodal Alzheimer's Disease Diagnosis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3464861
DT Journal Article
PD 2024-Sep-20
PY 2024
AB Identification of Alzheimer's disease (AD) with multimodal neuroimaging
data has been receiving increasing attention. However, the presence of
numerous redundant features and corrupted neuroimages within multimodal
datasets poses significant challenges for existing methods. In this
paper, we propose a feature selection method named Enhanced Multimodal
Low-rank Embedding (EMLE) for multimodal AD diagnosis. Unlike previous
methods utilizing convex relaxations of the ℓ2,0-norm, EMLE exploits an
ℓ2,γ-norm regularized projection matrix to obtain an embedding
representation and select informative features jointly for each
modality. The ℓ2,γ-norm, which employs an upper-bounded nonconvex
Minimax Concave Penalty (MCP) function to characterize sparsity, offers
a better approximation of the ℓ2,0-norm than other convex relaxations.
Next, a similarity graph is learned based on the self-expressiveness
property to increase robustness to corrupted data. Since the
approximation coefficient vectors of samples from the same class should
be highly correlated, a norm derived from the MCP function, i.e., the
matrix γ-norm, is applied to constrain the rank of the graph.
Furthermore, recognizing that diverse modalities should share an
underlying structure related to AD, we establish a consensus graph for
all modalities to unveil intrinsic structures across multiple
modalities. Finally, we fuse the embedding representations of all
modalities into the label space to incorporate supervisory information.
The results of extensive experiments on the Alzheimer's Disease
Neuroimaging Initiative datasets verify the discriminability of the
features selected by EMLE.
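For reference, the standard MCP function and a row-wise ℓ2,γ-style
penalty of the kind described above can be written as follows; EMLE's
exact parameterization and optimization scheme may differ, so treat this
as a sketch of the penalty only.

import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    # Minimax Concave Penalty: nonconvex, upper-bounded by gamma*lam^2/2,
    # so large coefficients are not over-penalized.
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def l2_gamma_penalty(W, lam=1.0, gamma=3.0):
    # MCP applied to each row's l2 norm of the projection matrix W, so
    # entire feature rows are driven to zero jointly (row sparsity).
    return mcp(np.linalg.norm(W, axis=1), lam, gamma).sum()

W = np.random.randn(100, 10)   # 100 features x 10 embedding dimensions
print(l2_gamma_penalty(W))     # value used as a regularizer during training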
OI Wu, Xindong/0000-0003-2396-1704; zhang, yun/0000-0001-8716-4179
TC 0
ZS 0
ZB 0
ZA 0
Z8 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-09-22
UT MEDLINE:39302791
PM 39302791
ER
PT J
AU Li, Wen
Cao, Fuzhi
An, Nan
Wang, Wenli
Wang, Chunhui
Xu, Weinan
Gao, Yang
Ning, Xiaolin
TI Source Extent Estimation in OPM-MEG: A Two-Stage Champagne Approach.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3462415
DT Journal Article
PD 2024-Sep-17
PY 2024
AB The accurate estimation of source extent using magnetoencephalography
(MEG) is important for the study of preoperative functional localization
in epilepsy. Conventional source imaging techniques tend to produce
diffuse or focused source estimates that fail to capture the source
extent accurately. To address this issue, we propose a novel method
called the two-stage Champagne approach (TS-Champagne). TS-Champagne
divides source extent estimation into two stages. In the first stage,
the Champagne algorithm with noise learning (Champagne-NL) is employed
to obtain an initial source estimate. In the second stage, spatial basis
functions are constructed from the initial source estimate. These
spatial basis functions consist of potential activation source centers
and their neighbors, and serve as spatial priors, which are incorporated
into Champagne-NL to obtain a final source estimate. We evaluated the
performance of TS-Champagne through numerical simulations. TS-Champagne
achieved more robust performance under various conditions (i.e., varying
source extent, number of sources, signal-to-noise level, and correlation
coefficients between sources) than Champagne-NL and several benchmark
methods. Furthermore, auditory and median nerve stimulation experiments
were conducted using a 31-channel optically pumped magnetometer
(OPM)-MEG system. The validation results indicated that the
reconstructed source activity was spatially and temporally consistent
with the neurophysiological results of previous OPM-MEG studies, further
demonstrating the feasibility of TS-Champagne for practical
applications.
OI An, Nan/0000-0001-9919-0745
TC 0
ZA 0
ZS 0
ZR 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-09-19
UT MEDLINE:39288067
PM 39288067
ER
PT J
AU Zhou, Huajun
Zhou, Fengtao
Chen, Hao
TI Cohort-Individual Cooperative Learning for Multimodal Cancer Survival
Analysis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3455931
DT Journal Article
PD 2024-Sep-06
PY 2024
AB Recently, we have witnessed impressive achievements in cancer survival
analysis by integrating multimodal data, e.g., pathology images and
genomic profiles. However, the heterogeneity and high dimensionality of
these modalities pose significant challenges for extracting
discriminative representations while maintaining good generalization. In
this paper, we propose a Cohort-individual Cooperative Learning (CCL)
framework to advance cancer survival analysis by combining knowledge
decomposition and cohort guidance. Specifically, first, we propose a
Multimodal Knowledge Decomposition (MKD) module to explicitly decompose
multimodal knowledge into four distinct components: redundancy, synergy
and uniqueness of the two modalities. Such a comprehensive decomposition
can enlighten the models to perceive easily overlooked yet important
information, facilitating an effective multimodal fusion. Second, we
propose a Cohort Guidance Modeling (CGM) to mitigate the risk of
overfitting task-irrelevant information. It can promote a more
comprehensive and robust understanding of the underlying multimodal
data, while avoiding the pitfalls of overfitting and enhancing the
generalization ability of the model. By integrating the knowledge
decomposition and cohort guidance methods, we develop a robust
multimodal survival analysis model with enhanced discrimination and
generalization abilities. Extensive experimental results on five cancer
datasets demonstrate the effectiveness of our model in integrating
multimodal data for survival analysis. The code will be publicly
available soon.
RI Chen, Hao/V-4299-2019
OI Chen, Hao/0000-0002-8400-3780
Z8 0
TC 0
ZR 0
ZB 0
ZS 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-09-08
UT MEDLINE:39240739
PM 39240739
ER
PT J
AU Lerendegui, Marcelo
Riemer, Kai
Papageorgiou, Georgios
Wang, Bingxue
Arthur, Lachlan
Chavignon, Arthur
Zhang, Tao
Couture, Olivier
Huang, Pingtong
Ashikuzzaman, Md
Dencks, Stefanie
Dunsby, Chris
Helfield, Brandon
Jensen, Jorgen Arendt
Lisson, Thomas
Lowerison, Matthew R.
Rivaz, Hassan
Samir, Anthony E.
Schmitz, Georg
Schoen, Scott
van Sloun, Ruud
Song, Pengfei
Stevens, Tristan
Yan, Jipeng
Sboros, Vassilis
Tang, Meng-Xing
TI ULTRA-SR Challenge: Assessment of Ultrasound Localization and TRacking
Algorithms for Super-Resolution Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2970
EP 2987
DI 10.1109/TMI.2024.3388048
DT Article
PD AUG 2024
PY 2024
AB With the widespread interest and uptake of super-resolution ultrasound
(SRUS) through localization and tracking of microbubbles, also known as
ultrasound localization microscopy (ULM), many localization and tracking
algorithms have been developed. ULM can image many centimeters into
tissue in-vivo and track microvascular flow non-invasively with
sub-diffraction resolution. In a significant community effort, we
organized a challenge, Ultrasound Localization and TRacking Algorithms
for Super-Resolution (ULTRA-SR). The aims of this paper are threefold:
to describe the challenge organization, data generation, and winning
algorithms; to present the metrics and methods for evaluating challenge
entrants; and to report results and findings of the evaluation.
Realistic ultrasound datasets containing microvascular flow for
different clinical ultrasound frequencies were simulated, using vascular
flow physics, acoustic field simulation and nonlinear bubble dynamics
simulation. Based on these datasets, 38 submissions from 24 research
groups were evaluated against ground truth using an evaluation framework
with six metrics, three for localization and three for tracking. In-vivo
mouse brain and human lymph node data were also provided, and
performance assessed by an expert panel. Winning algorithms are
described and discussed. The publicly available data with ground truth
and the defined metrics for both localization and tracking present a
valuable resource for researchers to benchmark algorithms and software,
identify optimized methods/software for their data, and provide insight
into the current limits of the field. In conclusion, the Ultra-SR
challenge has provided benchmarking data and tools, as well as direct
comparison and insights for a number of state-of-the-art localization
and tracking algorithms.
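A typical localization metric of the kind used in such evaluations pairs
predicted and ground-truth localizations by optimal assignment and
scores pairs within a distance tolerance; the sketch below is
illustrative only and not the challenge's exact metric definitions (the
tolerance and units are assumptions).

import numpy as np
from scipy.optimize import linear_sum_assignment

def localization_scores(pred, gt, tol=0.5):
    # pred, gt: (N, 2) arrays of localization coordinates.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(d)      # minimum-cost matching
    matched = d[rows, cols] <= tol             # true positives within tol
    tp = int(matched.sum())
    precision = tp / max(len(pred), 1)
    recall = tp / max(len(gt), 1)
    rmse = float(np.sqrt((d[rows, cols][matched] ** 2).mean())) if tp else np.nan
    return precision, recall, rmse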
RI Ashikuzzaman, Md/GSI-9042-2022; Yan, Jipeng/HHM-3382-2022; Song, Pengfei/G-9326-2015; Jensen, Jørgen Arendt/GXH-0647-2022; Arthur, Lachlan/; Sboros, Vassilis/; Chavignon, Arthur/; Riemer, Kai/; Jensen, Jorgen Arendt/A-8734-2017; Lerendegui, Marcelo/; Wang, Bingxue/
OI Arthur, Lachlan/0009-0004-1906-567X; Sboros,
Vassilis/0000-0002-9133-7252; Chavignon, Arthur/0000-0001-7883-7482;
Riemer, Kai/0000-0002-5631-9649; Jensen, Jorgen
Arendt/0000-0002-7896-3136; Lerendegui, Marcelo/0000-0001-6722-4605;
Wang, Bingxue/0000-0003-4649-4598
ZS 0
ZB 0
TC 1
ZR 0
ZA 0
Z8 0
Z9 1
C1 Imperial Coll London, Dept Bioengn, London SW7 2AZ, England
C1 Heriot Watt Univ, Inst Biol Chem Biophys & Bioengn, Edinburgh EH14 4AS,
Scotland
C1 Sorbonne Univ, Lab Imagerie Biomed, CNRS, INSERM, F-75006 Paris, France
C1 Zhejiang Univ, Affiliated Hosp 2, Dept Ultrasound, Hangzhou 310027,
Zhejiang, Peoples R China
C1 Concordia Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
C1 Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
C1 Ruhr Univ Bochum, Fac Elect Engn & Informat Technol, D-44801 Bochum,
Germany
C1 Imperial Coll London, Dept Phys, London SW7 2AZ, England
C1 Concordia Univ, Dept Phys, Montreal, PQ H3G 2W1, Canada
C1 Concordia Univ, Dept Biol, Montreal, PQ H3G 2W1, Canada
C1 Tech Univ Denmark, Dept Hlth Technol, DK-2800 Copenhagen, Denmark
C1 Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61820 USA
C1 Univ Illinois, Beckman Inst, Urbana, IL 61820 USA
C1 Harvard Med Sch, Massachusetts Gen Hosp, Boston, MA 02114 USA
C1 Eindhoven Univ Technol, Dept Elect Engn, NL-5612 AZ Eindhoven,
Netherlands
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200025
PM 38607705
ER
PT J
AU Ye, Yiwen
Zhang, Jianpeng
Chen, Ziyang
Xia, Yong
TI CADS: A Self-supervised Learner via Cross-modal Alignment and Deep
Self-distillation for CT Volume Segmentation.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3431916
DT Journal Article
PD 2024-Jul-22
PY 2024
AB Self-supervised learning (SSL) has long had great success in advancing
the field of annotation-efficient learning. However, when applied to CT
volume segmentation, most SSL methods suffer from two limitations,
including rarely using the information acquired by different imaging
modalities and providing supervision only to the bottleneck encoder
layer. To address both limitations, we design a pretext task to align
the information in each 3D CT volume and the corresponding 2D generated
X-ray image and extend self-distillation to deep self-distillation.
Thus, we propose a self-supervised learner based on Cross-modal
Alignment and Deep Self-distillation (CADS) to improve the encoder's
ability to characterize CT volumes. The cross-modal alignment is a more
challenging pretext task that forces the encoder to learn better image
representation ability. Deep self-distillation provides supervision to
not only the bottleneck layer but also shallow layers, thus boosting the
abilities of both. Comparative experiments show that, during
pre-training, our CADS has lower computational complexity and GPU memory
cost than competing SSL methods. Based on the pre-trained encoder, we
construct PVT-UNet for 3D CT volume segmentation. Our results on seven
downstream tasks indicate that PVT-UNet outperforms state-of-the-art SSL
methods like MOCOv3 and DiRA, as well as prevalent medical image
segmentation methods like nnUNet and CoTr. Code and pre-trained weights
will be available at https://github.com/yeerwen/CADS.
OI Ye, Yiwen/0000-0003-2189-6865
ZB 0
ZS 0
ZR 0
TC 0
Z8 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-07-24
UT MEDLINE:39037875
PM 39037875
ER
PT J
AU Smith, Nathaniel J.
Newton, David T.
Gunderman, David
Hutchins, Gary D.
TI A Comparison of Arterial Input Function Interpolation Methods for Patlak
Plot Analysis of 68Ga-PSMA-11 PET Prostate Cancer Studies
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2411
EP 2419
DI 10.1109/TMI.2024.3357799
DT Article
PD JUL 2024
PY 2024
AB Positron emission tomography (PET) imaging enables quantitative
assessment of tissue physiology. Dynamic pharmacokinetic analysis of PET
images requires accurate estimation of the radiotracer plasma input
function to derive meaningful parameter estimates, and small
discrepancies in parameter estimation can mimic subtle physiologic
tissue variation. This study evaluates the impact of input function
interpolation method on the accuracy of Patlak kinetic parameter
estimation through simulations modeling the pharmacokinetic properties
of [Ga-68]-PSMA-11. This study evaluated both trained and untrained
methods. Although the mean kinetic parameter accuracy was similar across
all interpolation models, the trained node weighting interpolation model
estimated accurate kinetic parameters with reduced overall variability
relative to standard linear interpolation. Trained node weighting
interpolation reduced kinetic parameter estimation variance by a
magnitude approximating the underlying physiologic differences between
normal and diseased prostatic tissue. Overall, this analysis suggests
that trained node weighting improves the reliability of Patlak kinetic
parameter estimation for [Ga-68]-PSMA-11 PET.
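For context, Patlak graphical analysis fits C_T(t)/C_p(t) =
Ki * (integral_0^t C_p dtau)/C_p(t) + V0 linearly for times beyond an
equilibration point t*, so the interpolated plasma input function C_p
enters through both the integral and the division, which is why the
interpolation method matters. A minimal numpy sketch follows; the t*
value and the trapezoidal integration are illustrative choices.

import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    # Cumulative trapezoidal integral of the plasma input function.
    cum = np.concatenate(([0.0], np.cumsum(
        np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    keep = t >= t_star                    # use only the linear regime
    x = cum[keep] / c_plasma[keep]        # 'Patlak time'
    y = c_tissue[keep] / c_plasma[keep]
    ki, v0 = np.polyfit(x, y, 1)          # slope = Ki, intercept = V0
    return ki, v0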
RI Gunderman, David/AAB-4629-2022; Smith, Nathaniel/; Gunderman, David/
OI Smith, Nathaniel/0000-0002-8144-8362; Gunderman,
David/0000-0003-4154-0911
TC 0
Z8 0
ZR 0
ZB 0
ZA 0
ZS 0
Z9 0
C1 Indiana Univ Sch Med, Dept Radiol & Imaging Sci, Indianapolis, IN 46202
USA
C1 Purdue Univ, Weldon Sch Biomed Engn, W Lafayette, IN 47907 USA
C1 Purdue Univ, Dept Stat, W Lafayette, IN 47907 USA
C1 Indiana Univ Sch Med, Lafayette, IN 47907 USA
C3 Indiana Univ Sch Med
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100006
PM 38306263
ER
PT J
AU Ni, Guangming
Wu, Renxiong
Zheng, Fei
Li, Meixuan
Huang, Shaoyan
Ge, Xin
Liu, Linbo
Liu, Yong
TI Toward Ground-Truth Optical Coherence Tomography via Three-Dimensional
Unsupervised Deep Learning Processing and Data
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2395
EP 2407
DI 10.1109/TMI.2024.3363416
DT Article
PD JUN 2024
PY 2024
AB Optical coherence tomography (OCT) can perform non-invasive
high-resolution three-dimensional (3D) imaging and has been widely used
in biomedical fields, but it is inevitably affected by coherence
speckle noise, which degrades OCT imaging performance and restricts its
applications. Here we present a novel speckle-free OCT imaging strategy,
named toward-ground-truth OCT (tGT-OCT), that utilizes unsupervised 3D
deep-learning processing and leverages OCT 3D imaging features to
achieve speckle-free OCT imaging. Specifically, our proposed tGT-OCT
utilizes an unsupervised 3D-convolution deep-learning network trained
using random 3D volumetric data to distinguish and separate speckle from
real structures in 3D imaging volumetric space; moreover, tGT-OCT
effectively further reduces speckle noise and reveals structures that
would otherwise be obscured by speckle noise while preserving spatial
resolution. Results derived from different samples demonstrated the
high-quality speckle-free 3D imaging performance of tGT-OCT and its
advancement beyond the previous state-of-the-art. The code is available
online: https://github.com/Voluntino/tGT-OCT.
RI Ge, Xin/GWW-0062-2022; Liu, Yong/; Liu, Linbo/O-2572-2018
OI Liu, Yong/0000-0002-2510-9470; Liu, Linbo/0000-0001-6691-9253
TC 1
ZR 0
ZB 0
Z8 0
ZA 0
ZS 0
Z9 1
C1 Univ Elect Sci & Technol China, Sch Optoelect Sci & Engn, Chengdu
611731, Peoples R China
C1 Sun Yat Sen Univ, Sch Sci, Shenzhen Campus, Shenzhen 510275, Peoples R
China
C1 Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798,
Singapore
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100030
PM 38324426
ER
PT J
AU Haeusele, Jakob
Schmid, Clemens
Viermetz, Manuel
Gustschin, Nikolai
Lasser, Tobias
Koehler, Thomas
Pfeiffer, Franz
TI Robust Sample Information Retrieval in Dark-Field Computed Tomography
with a Vibrating Talbot-Lau Interferometer.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3400593
DT Journal Article
PD 2024-May-13
PY 2024
AB X-ray computed tomography (CT) is a crucial tool for non-invasive
medical diagnosis that uses differences in materials' attenuation
coefficients to generate contrast and provide 3D information.
Grating-based dark-field-contrast X-ray imaging is an innovative
technique that utilizes small-angle scattering to generate additional
co-registered images with additional microstructural information. While
it is already possible to perform human chest dark-field radiography, it
is assumed that its diagnostic value increases when performed in a
tomographic setup. However, the susceptibility of Talbot-Lau
interferometers to mechanical vibrations, coupled with the need to
minimize data acquisition times, has so far hindered its application in
clinical routine and the combination of X-ray dark-field imaging with
large field-of-view (FOV) tomography. In this work, we propose a processing
pipeline to address this issue in a human-sized clinical dark-field CT
prototype. We present the corrective measures that are applied in the
employed processing and reconstruction algorithms to mitigate the
effects of vibrations and deformations of the interferometer gratings.
This is achieved by identifying spatially and temporally variable
vibrations in air reference scans. By translating the found correlations
to the sample scan, we can identify and mitigate relevant fluctuation
modes for scans with arbitrary sample sizes. This approach effectively
eliminates the requirement for sample-free detector area, while still
distinctly separating fluctuation and sample information. As a result,
samples of arbitrary dimensions can be reconstructed without being
affected by vibration artifacts. To demonstrate the viability of the
technique for human-scale objects, we present reconstructions of an
anthropomorphic thorax phantom.
OI Koehler, Thomas/0000-0002-1986-0057; Pfeiffer,
Franz/0000-0001-6665-4363; Haeusele, Jakob/0000-0002-6032-8775
Z8 0
ZR 0
ZB 0
TC 0
ZA 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-05-16
UT MEDLINE:38739509
PM 38739509
ER
PT J
AU Wang, Yan
Zhen, Liangli
Tan, Tien-En
Fu, Huazhu
Feng, Yangqin
Wang, Zizhou
Xu, Xinxing
Goh, Rick Siow Mong
Ng, Yipin
Calhoun, Claire
Tan, Gavin Siew Wei
Sun, Jennifer K.
Liu, Yong
Ting, Daniel Shu Wei
TI Geometric Correspondence-Based Multimodal Learning for Ophthalmic Image
Analysis
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1945
EP 1957
DI 10.1109/TMI.2024.3352602
DT Article
PD MAY 2024
PY 2024
AB Color fundus photography (CFP) and Optical coherence tomography (OCT)
images are two of the most widely used modalities in the clinical
diagnosis and management of retinal diseases. Despite the widespread use
of multimodal imaging in clinical practice, few methods for automated
diagnosis of eye diseases utilize correlated and complementary
information from multiple modalities effectively. This paper explores
how to leverage the information from CFP and OCT images to improve the
automated diagnosis of retinal diseases. We propose a novel multimodal
learning method, named geometric correspondence-based multimodal
learning network (GeCoM-Net), to achieve the fusion of CFP and OCT
images. Specifically, inspired by clinical observations, we consider the
geometric correspondence between the OCT slice and the CFP region to
learn the correlated features of the two modalities for robust fusion.
Furthermore, we design a new feature selection strategy to extract
discriminative OCT representations by automatically selecting the
important feature maps from OCT slices. Unlike the existing multimodal
learning methods, GeCoM-Net is the first method that formulates the
geometric relationships between the OCT slice and the corresponding
region of the CFP image explicitly for CFP and OCT fusion. Experiments
have been conducted on a large-scale private dataset and a publicly
available dataset to evaluate the effectiveness of GeCoM-Net for
diagnosing diabetic macular edema (DME), impaired visual acuity (VA) and
glaucoma. The empirical results show that our method outperforms the
current state-of-the-art multimodal learning methods, improving the
AUROC score by 0.4%, 1.9%, and 2.9% for DME, VA, and glaucoma detection,
respectively.
RI Zhen, Liangli/HCG-8485-2022; Tan, Tien-En/; Wang, Yan/GZL-5659-2022; Fu, Huazhu/A-1411-2014; Zhen, Liangli/
OI Tan, Tien-En/0000-0002-9869-5159; Wang, Yan/0000-0003-3907-9388; Fu,
Huazhu/0000-0002-9702-5524; Zhen, Liangli/0000-0003-0481-3298
ZR 0
TC 1
ZB 0
ZS 0
Z8 0
ZA 0
Z9 1
C1 Agcy Sci Technol & Res, Inst High Performance Comp, Singapore, Singapore
C1 Singapore Eye Res Inst, Singapore 169856, Singapore
C1 Singapore Natl Eye Ctr, Singapore 168751, Singapore
C1 Jaeb Ctr Hlth Res, Tampa, FL 33647 USA
C1 Joslin Diabet Ctr, Beetham Eye Inst, Harvard Dept Ophthalmol, Boston, MA
02215 USA
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800027
PM 38206778
ER
PT J
AU Huang, Kun
Ma, Xiao
Zhang, Zetian
Zhang, Yuhan
Yuan, Songtao
Fu, Huazhu
Chen, Qiang
TI Diverse Data Generation for Retinal Layer Segmentation with Potential
Structure Modelling.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3384484
DT Journal Article
PD 2024-Apr-08
PY 2024
AB Accurate retinal layer segmentation on optical coherence tomography
(OCT) images is hampered by the challenges of collecting OCT images with
diverse pathological characterization and balanced distribution. Current
generative models can produce highly realistic images and corresponding
labels without quantitative limitations by fitting distributions of real
collected data. Nevertheless, the diversity of their generated data is
still limited due to the inherent imbalance of training data. To address
these issues, we propose an image-label pair generation framework that
generates diverse and balanced potential data from imbalanced real
samples. Specifically, the framework first generates diverse layer
masks, and then generates plausible OCT images corresponding to these
layer masks using two customized diffusion probabilistic models
respectively. To learn from imbalanced data and facilitate balanced
generation, we introduce pathological-related conditions to guide the
generation processes. To enhance the diversity of the generated
image-label pairs, we propose a potential structure modeling technique
that transfers the knowledge of diverse sub-structures from lowly- or
non-pathological samples to highly pathological samples. We conducted
extensive experiments on two public datasets for retinal layer
segmentation. Firstly, our method generates OCT images with higher image
quality and diversity compared to other generative methods. Furthermore,
based on the extensive training with the generated OCT images,
downstream retinal layer segmentation tasks demonstrate improved
results. The code is publicly available at:
https://github.com/nicetomeetu21/GenPSM.
RI Fu, Huazhu/A-1411-2014; Chen, Qiang/; Yuan, Songtao/; Ma, Xiao/
OI Fu, Huazhu/0000-0002-9702-5524; Chen, Qiang/0000-0002-6685-2447; Yuan,
Songtao/0000-0001-9212-0664; Ma, Xiao/0000-0002-1842-5029
ZS 0
Z8 0
ZB 0
ZA 0
TC 1
ZR 0
Z9 1
EI 1558-254X
DA 2024-04-10
UT MEDLINE:38587957
PM 38587957
ER
PT J
AU Chen, Jiachen
Li, Mengyang
Han, Hu
Zhao, Zhiming
Chen, Xilin
TI SurgNet: Self-Supervised Pretraining With Semantic Consistency for
Vessel and Instrument Segmentation in Surgical Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1513
EP 1525
DI 10.1109/TMI.2023.3341948
DT Article
PD APR 2024
PY 2024
AB Blood vessel and surgical instrument segmentation is a fundamental
technique for robot-assisted surgical navigation. Despite the
significant progress in natural image segmentation, surgical image-based
vessel and instrument segmentation are rarely studied. In this work, we
propose a novel self-supervised pretraining method (SurgNet) that can
effectively learn representative vessel and instrument features from
unlabeled surgical images. As a result, it allows for precise and
efficient segmentation of vessels and instruments with only a small
amount of labeled data. Specifically, we first construct a region
adjacency graph (RAG) based on local semantic consistency in unlabeled
surgical images and use it as a self-supervision signal for pseudo-mask
segmentation. We then use the pseudo-mask to perform guided masked image
modeling (GMIM) to learn representations that integrate structural
information of intraoperative objectives more effectively. Our
pretrained model, paired with various segmentation methods, can be
applied to perform vessel and instrument segmentation accurately using
limited labeled data for fine-tuning. We build an Intraoperative Vessel
and Instrument Segmentation (IVIS) dataset, comprising approximately 3
million unlabeled images and over 4,000 labeled images with manual
vessel and instrument annotations to evaluate the effectiveness of our
self-supervised pretraining method. We also evaluated the
generalizability of our method to similar tasks using two public
datasets. The results demonstrate that our approach outperforms the
current state-of-the-art (SOTA) self-supervised representation learning
methods in various surgical image segmentation tasks.
RI Li, Mengyang/AAU-5608-2021; Chen, Xilin/A-1409-2012; Chen, Xilin/I-4153-2014; Li, Mengyang/
OI Chen, Xilin/0000-0003-3024-4404; Li, Mengyang/0000-0001-8973-0445
TC 0
Z8 0
ZR 0
ZS 0
ZA 0
ZB 0
Z9 0
C1 Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc,
Beijing 100190, Peoples R China
C1 Univ Chinese Acad Sci, Beijing 100049, Peoples R China
C1 Chinese Peoples Liberat Army Gen Hosp, Med Ctr 1, Beijing 100853,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-06
UT WOS:001196733400019
PM 38090838
ER
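To make the region-adjacency-graph (RAG) step of the SurgNet record above concrete, the following minimal Python sketch builds a RAG from an integer region-label map. The 4-connectivity rule, the toy label map, and the function name build_rag are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only: builds a region adjacency graph (RAG) from an
    # integer region-label map, as in SurgNet's pseudo-mask step (assumed detail).
    import numpy as np

    def build_rag(labels):
        """Return undirected edges (a, b), a < b, between 4-adjacent regions."""
        edges = set()
        # Compare each pixel with its right and bottom neighbours.
        for shifted, original in (
            (labels[:, 1:], labels[:, :-1]),   # horizontal neighbours
            (labels[1:, :], labels[:-1, :]),   # vertical neighbours
        ):
            mask = shifted != original
            for a, b in np.stack([original[mask], shifted[mask]], axis=1):
                edges.add((min(int(a), int(b)), max(int(a), int(b))))
        return edges

    # Toy 4-region label map standing in for regions found in a surgical image.
    toy = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    print(build_rag(toy))  # {(0, 1), (0, 2), (1, 3), (2, 3)}

In SurgNet the resulting adjacency would feed the local-semantic-consistency self-supervision; here it only enumerates which regions touch.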
PT J
AU Lin, Chen
Zhu, Zhenfeng
Zhao, Yawei
Zhang, Ying
He, Kunlun
Zhao, Yao
TI SGT++: Improved Scene Graph-Guided Transformer for Surgical
Report Generation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1337
EP 1346
DI 10.1109/TMI.2023.3335909
DT Article
PD APR 2024
PY 2024
AB Automatically recording surgical procedures and generating surgical
reports are crucial for alleviating surgeons' workload and enabling them
to concentrate more on the operations. Despite some achievements, there
still exist several issues in previous works: 1) failure to model
the interactive relationship between surgical instruments and tissue;
and 2) neglect of fine-grained differences within different surgical
images in the same surgery. To address these two issues, we propose an
improved scene graph-guided Transformer, named SGT++, to generate more
accurate surgical reports, in which the complex
interactions between surgical instruments and tissue are learnt from
both explicit and implicit perspectives. Specifically, to facilitate the
understanding of the surgical scene graph under a graph learning
framework, a simple yet effective approach is proposed for homogenizing
the input heterogeneous scene graph. For the homogeneous scene graph
that contains explicit structured and fine-grained semantic
relationships, we design an attention-induced graph transformer for node
aggregation via an explicit relation-aware encoder. In addition, to
characterize the implicit relationships about the instrument, tissue,
and the interaction between them, the implicit relational attention is
proposed to take full advantage of the prior knowledge from the
interactional prototype memory. With the learnt explicit and implicit
relation-aware representations, they are then coalesced to obtain the
fused relation-aware representations contributing to generating reports.
Comprehensive experiments on two surgical datasets show that the
proposed SGT++ model achieves state-of-the-art results.
RI Lin, Chen/; He, Kunlun/IXN-1108-2023; Zhao, Yao/; Zhang, Ying/
OI Lin, Chen/0000-0003-4739-2999; He, Kunlun/0000-0002-3335-5700; Zhao,
Yao/0000-0002-8581-9554; Zhang, Ying/0000-0003-4906-5828
ZR 0
ZB 0
Z8 0
ZA 0
TC 0
ZS 0
Z9 0
C1 Beijing Jiaotong Univ, Beijing Key Lab Adv Informat Sci & Network
Technol, Beijing 100044, Peoples R China
C1 Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R
China
C1 Chinese Peoples Liberat Army Gen Hosp, Med Big Data Res Ctr, Beijing
100853, Peoples R China
C1 Nankai Univ, Coll Comp Sci, Tianjin 300350, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-06
UT WOS:001196733400010
PM 38015688
ER
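As a hedged illustration of the attention-induced graph aggregation described in the SGT++ record above, this numpy sketch runs one step of single-head attention restricted to scene-graph edges. The node count, feature width, adjacency, and random weights are placeholders, not the paper's architecture.

    # Hedged sketch: one attention-based aggregation step over scene-graph nodes.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 8                      # nodes (instruments/tissue), feature width
    x = rng.normal(size=(n, d))      # node features
    adj = np.eye(n, dtype=bool)      # adjacency incl. self-loops
    adj[0, 1] = adj[1, 0] = True     # e.g. instrument 0 interacts with tissue 1
    adj[1, 2] = adj[2, 1] = True

    wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)
    scores[~adj] = -np.inf           # attend only along scene-graph edges
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    out = attn @ v                   # relation-aware node representations
    print(out.shape)                 # (5, 8)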
PT J
AU Grohl, Janek
Else, Thomas R.
Hacker, Lina
Bunce, Ellie V.
Sweeney, Paul W.
Bohndiek, Sarah E.
TI Moving Beyond Simulation: Data-Driven Quantitative Photoacoustic Imaging
Using Tissue-Mimicking Phantoms
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1214
EP 1224
DI 10.1109/TMI.2023.3331198
DT Article
PD MAR 2024
PY 2024
AB Accurate measurement of optical absorption coefficients from
photoacoustic imaging (PAI) data would enable direct mapping of
molecular concentrations, providing vital clinical insight. The
ill-posed nature of the problem of absorption coefficient recovery has
prohibited PAI from achieving this goal in living systems due to the
domain gap between simulation and experiment. To bridge this gap, we
introduce a collection of experimentally well-characterised imaging
phantoms and their digital twins. This first-of-a-kind phantom data set
enables supervised training of a U-Net on experimental data for
pixel-wise estimation of absorption coefficients. We show that training
on simulated data results in artefacts and biases in the estimates,
reinforcing the existence of a domain gap between simulation and
experiment. Training on experimentally acquired data, however, yielded
more accurate and robust estimates of optical absorption coefficients.
We compare the results to fluence correction with a Monte Carlo model
from reference optical properties of the materials, which yields a
quantification error of approximately 20%. Application of the trained
U-Nets to a blood flow phantom demonstrated spectral biases when
training on simulated data, while application to a mouse model
highlighted the ability of both learning-based approaches to recover the
depth-dependent loss of signal intensity. We demonstrate that training
on experimental phantoms can restore the correlation of signal
amplitudes measured in depth. While the absolute quantification error
remains high and further improvements are needed, our results highlight
the promise of deep learning to advance quantitative PAI.
RI Bohndiek, Sarah/S-9416-2019; Sweeney, Paul/K-6090-2019; Else, Thomas/HMP-2090-2023; Else, Thomas/; Bunce, Ellie Victoria/; Bohndiek, Sarah/
OI Else, Thomas/0000-0002-2652-4190; Bunce, Ellie
Victoria/0009-0005-3179-9307; Bohndiek, Sarah/0000-0003-0371-8635
ZS 0
ZR 0
ZA 0
TC 1
Z8 0
ZB 0
Z9 1
C1 Canc Res UK Cambridge Inst, Cambridge CB2 0RE, England
C1 Univ Cambridge, Dept Phys, Cambridge CB3 0HE, England
C1 Univ Cambridge, Dept Phys, Cambridge OX3 7DQ, England
C1 Univ Cambridge, Canc Res UK Cambridge Inst, Cambridge OX3 7DQ, England
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600015
PM 37938947
ER
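The roughly 20% quantification error quoted in the record above can be made concrete with a short sketch that computes the relative error between estimated and reference absorption coefficients; the synthetic arrays merely stand in for phantom measurements and assume a 20% multiplicative error model.

    # Illustrative only: relative quantification error between estimated and
    # reference optical absorption coefficients (synthetic stand-in data).
    import numpy as np

    rng = np.random.default_rng(1)
    mu_ref = rng.uniform(0.05, 1.0, size=10_000)         # reference mu_a (cm^-1)
    mu_est = mu_ref * rng.normal(1.0, 0.2, size=10_000)  # ~20% estimator spread

    rel_err = np.abs(mu_est - mu_ref) / mu_ref
    print(f"median relative error: {np.median(rel_err):.1%}")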
PT J
AU Sewani, Alykhan
Roa, Carlos-Felipe
Zhou, James J.
Alawneh, Yara
Quadri, Amaar
Gilliland-Rocque, Rene
Cherin, Emmanuel
Dueck, Andrew
Demore, Christine
Wright, Graham
Tavallaei, M. Ali
TI The CathEye: A Forward-Looking Ultrasound Catheter for Image-Guided
Cardiovascular Procedures
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 449
EP 458
DI 10.1109/TMI.2023.3309781
DT Article
PD JAN 2024
PY 2024
AB Catheter-based procedures are typically guided by X-ray, which suffers
from low soft tissue contrast and only provides 2D projection images of
a 3D volume. Intravascular ultrasound (IVUS) can serve as a
complementary imaging technique. Forward viewing catheters are useful
for visualizing obstructions along the path of the catheter. The CathEye
system mechanically steers a single-element transducer to generate a
forward-looking surface reconstruction from an irregularly spaced 2-D
scan pattern. The steerable catheter leverages an expandable frame with
cables to manipulate the distal end independently of vessel tortuosity.
The tip position is estimated by measuring the cable displacements and
used to create surface reconstructions of the imaging workspace with the
single-element transducer. CathEye's imaging capabilities were tested
with an agar phantom and an ex vivo chronic total occlusion (CTO) sample
while the catheter was confined to various tortuous paths. The CathEye
maintained similar scan patterns regardless of path tortuosity and was
able to recreate major features of the imaging targets, such as holes
and extrusions. The feasibility of forward-looking IVUS with the CathEye
is demonstrated in this study. The CathEye mechanism can be applied to
other imaging modalities with field-of-view (FOV) limitations and
represents the basis for an interventional device fully integrated with
image guidance.
RI Tavallaei, M. Ali/JJF-3917-2023; Alawneh, Yara/; Sewani, Alykhan/; Cherin, Emmanuel/; Roa, Carlos Felipe/; Gilliland-Rocque, Rene/
OI Alawneh, Yara/0000-0002-2146-9575; Sewani, Alykhan/0000-0001-8037-5184;
Cherin, Emmanuel/0000-0002-6787-025X; Roa, Carlos
Felipe/0000-0001-5781-9592; Gilliland-Rocque, Rene/0000-0002-1777-7217
ZB 0
Z8 0
TC 1
ZS 0
ZR 0
ZA 0
Z9 1
C1 Toronto Metropolitan Univ, Formerly Ryerson Univ, Dept Elect Comp &
Biomed Engn, Toronto, ON M5B 2K3, Canada
C1 Sunnybrook Res Inst, Toronto, ON M4N 3M5, Canada
C1 Univ Toronto, Dept Phys Sci, Toronto, ON M5S 1A1, Canada
C1 Univ Waterloo, Dept Mech & Mechatron Engn, Waterloo, ON N2L 3G1, Canada
C1 Univ Toronto, Dept Med Biophys, Toronto, ON M5S 1A1, Canada
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600025
PM 37643100
ER
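A minimal sketch of the kind of surface regridding the CathEye record above describes: scipy's griddata interpolates irregularly spaced scan samples onto a regular forward-looking grid. The scan pattern, the synthetic depth surface, and the grid size are assumptions, not the device's processing chain.

    # Hedged sketch: regrid irregularly spaced single-element scan samples onto
    # a regular grid, loosely mirroring a forward-looking surface reconstruction.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(2)
    xy = rng.uniform(-1, 1, size=(500, 2))               # estimated tip positions
    depth = 5.0 + 0.5 * np.sin(3 * xy[:, 0]) * xy[:, 1]  # echo depth (mm, toy)

    gx, gy = np.mgrid[-1:1:64j, -1:1:64j]                # regular output grid
    surface = griddata(xy, depth, (gx, gy), method="linear")
    print(np.nanmin(surface), np.nanmax(surface))        # NaN outside the hull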
PT J
AU Fu, Junhu
Chen, Ke
Dou, Qi
Gao, Yun
He, Yiping
Zhou, Pinghong
Lin, Shengli
Wang, Yuanyuan
Guo, Yi
TI IPNet: An Interpretable Network with Progressive Loss for Whole-stage
Colorectal Disease Diagnosis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3459910
DT Journal Article
PD 2024-Sep-19
PY 2024
AB Colorectal cancer plays a dominant role in cancer-related deaths,
primarily due to the absence of obvious early-stage symptoms.
Whole-stage colorectal disease diagnosis is crucial for assessing lesion
evolution and determining treatment plans. However, locality difference
and disease progression lead to intra-class disparities and inter-class
similarities for colorectal lesion representation. In addition,
interpretable algorithms explaining the lesion progression are still
lacking, making the prediction process a "black box". In this paper, we
propose IPNet, a dual-branch interpretable network with progressive loss
for whole-stage colorectal disease diagnosis. The dual-branch
architecture captures unbiased features representing diverse localities
to suppress intra-class variation. The progressive loss function
considers inter-class relationship, using prior knowledge of disease
evolution to guide classification. Furthermore, a novel Grain-CAM is
designed to interpret IPNet by visualizing pixel-wise attention maps
from shallow to deep layers, providing regions semantically related to
IPNet's progressive classification. We conducted whole-stage diagnosis
on two image modalities, i.e., colorectal lesion classification on
129,893 endoscopic optical images and rectal tumor T-staging on 11,072
endoscopic ultrasound images. IPNet is shown to surpass other
state-of-the-art algorithms, achieving accuracies of 93.15% and 89.62%,
respectively. In particular, it establishes effective decision boundaries for
challenges like polyp vs. adenoma and T2 vs. T3. The results demonstrate
an explainable approach to colorectal lesion classification at the
whole-stage level, and rectal tumor T-staging by endoscopic ultrasound is
explored for the first time. IPNet is expected to see further application,
assisting physicians in whole-stage disease diagnosis and
enhancing diagnostic interpretability.
ZA 0
Z8 0
ZR 0
ZB 0
TC 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-09-21
UT MEDLINE:39298304
PM 39298304
ER
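To illustrate how a progressive loss can encode a disease-evolution prior, as the IPNet record above describes, here is a hedged numpy sketch that adds a stage-distance penalty to cross-entropy. The stage ordering and the penalty form are assumptions, not IPNet's actual loss function.

    # Hedged sketch of a "progressive" loss: misclassifications are penalised
    # by their distance along an assumed disease-evolution ordering.
    import numpy as np

    stages = ["normal", "polyp", "adenoma", "cancer"]    # assumed ordering

    def progressive_loss(logits, target):
        z = logits - logits.max()
        p = np.exp(z) / np.exp(z).sum()
        dist = np.abs(np.arange(len(stages)) - target)   # stage-distance prior
        # Cross-entropy plus an expected-stage-distance penalty.
        return float(-np.log(p[target]) + (p * dist).sum())

    logits = np.array([0.2, 2.0, 0.5, -1.0])             # confident "polyp"
    print(progressive_loss(logits, target=1))            # small loss
    print(progressive_loss(logits, target=3))            # large: stages apart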
PT J
AU Zhang, Yirui
Zou, Yanni
Liu, Peter X
TI Point Cloud Registration in Laparoscopic Liver Surgery Using Keypoint
Correspondence Registration Network.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3457228
DT Journal Article
PD 2024-Sep-10
PY 2024
AB Laparoscopic liver surgery is a newly developed minimally invasive
technique and represents an inevitable trend in the future development
of surgical methods. By using augmented reality (AR) technology to
overlay preoperative CT models with intraoperative laparoscopic videos,
surgeons can accurately locate blood vessels and tumors, significantly
enhancing the safety and precision of surgeries. Point cloud
registration technology is key to achieving this effect. However, there
are two major challenges in registering the CT model with the point
cloud surface reconstructed from intraoperative laparoscopy. First, the
surface features of the organ are not prominent. Second, due to the
limited field of view of the laparoscope, the reconstructed surface
typically represents only a very small portion of the entire organ. To
address these issues, this paper proposes the keypoint correspondence
registration network (KCR-Net). This network first uses the neighborhood
feature fusion module (NFFM) to aggregate and interact features from
different regions and structures within a pair of point clouds to obtain
comprehensive feature representations. Then, through correspondence
generation, it directly generates keypoints and their corresponding
weights, with keypoints located in the common structures of the point
clouds to be registered, and corresponding weights learned automatically
by the network. This approach enables accurate point cloud registration
even under conditions of extremely low overlap. Experiments conducted on
the ModelNet40, 3Dircadb, and DePoLL datasets demonstrate that our method
achieves
excellent registration accuracy and is capable of meeting the
requirements of real-world scenarios.
OI Zhang, Yirui/0009-0005-4839-0049
ZB 0
ZR 0
TC 0
ZS 0
Z8 0
ZA 0
Z9 0
EI 1558-254X
DA 2024-09-13
UT MEDLINE:39255087
PM 39255087
ER
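Once a network such as KCR-Net (record above) outputs corresponding keypoints and per-point weights, a rigid transform can be recovered in closed form with the standard weighted Kabsch/Procrustes solve. The sketch below shows that step on synthetic points; everything apart from the SVD recipe is an illustrative assumption.

    # Weighted Kabsch/Procrustes: given corresponding keypoints and weights,
    # recover rotation r and translation t. Points stand in for network output.
    import numpy as np

    def weighted_rigid_fit(src, dst, w):
        w = w / w.sum()
        mu_s, mu_d = w @ src, w @ dst                    # weighted centroids
        h = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)   # cross-covariance
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))           # reflection guard
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        return r, mu_d - r @ mu_s

    rng = np.random.default_rng(3)
    src = rng.normal(size=(20, 3))
    theta = 0.4
    r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = src @ r_true.T + np.array([1.0, -2.0, 0.5])
    r, t = weighted_rigid_fit(src, dst, np.ones(20))
    print(np.allclose(r, r_true, atol=1e-6), np.round(t, 3))  # True [ 1. -2.  0.5]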
PT J
AU Kaji, Shizuo
Tanabe, Naoya
Maetani, Tomoki
Shiraishi, Yusuke
Sakamoto, Ryo
Oguma, Tsuyoshi
Suzuki, Katsuhiro
Terada, Kunihiko
Fukui, Motonari
Muro, Shigeo
Sato, Susumu
Hirai, Toyohiro
TI Quantification of Airway Structures by Persistent Homology
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2758
EP 2768
DI 10.1109/TMI.2024.3376683
DT Article
PD AUG 2024
PY 2024
AB We propose two types of novel morphological metrics for quantifying the
geometry of tubular structures on computed tomography (CT) images. We
apply our metrics to identify irregularities in the airway of patients
with chronic obstructive pulmonary disease (COPD) and demonstrate that
they provide complementary information to the conventional metrics used
to assess COPD, such as the tissue density distribution in lung
parenchyma and the wall area ratio of the segmented airway. The
three-dimensional shape of the airway and its abstraction as a rooted
tree with the root at the trachea carina are automatically extracted
from a lung CT volume, and the two metrics are computed based on a
mathematical tool called persistent homology; treeH0 quantifies the
distribution of branch lengths to assess the complexity of the tree-like
structure and radialH0 quantifies the irregularities in the luminal
radius along the airway. We show our metrics are associated with
clinical outcomes.
RI Sato, Susumu/R-8976-2019; Kaji, Shizuo/AAC-2341-2020; Kaji, Shizuo/; Fukui, Motonari/
OI Kaji, Shizuo/0000-0002-7856-6536; Fukui, Motonari/0000-0001-7584-4839
ZR 0
Z8 0
ZA 0
ZB 0
TC 0
ZS 0
Z9 0
C1 Kyushu Univ, Inst Math Ind, Fukuoka 8190395, Japan
C1 Kyoto Univ, Grad Sch Med, Dept Resp Med, Sakyo Ku, Kyoto 6068507, Japan
C1 Kyoto Univ, Grad Sch Med, Dept Diagnost Imaging & Nucl Med, Sakyo Ku,
Kyoto 6068507, Japan
C1 Kyoto Prevent Med Ctr, Nakagyo Ku, Kyoto 6048491, Japan
C1 Terada Clin Resp Med & Gen Practice, Himeji, Hyogo 6700849, Japan
C1 Kitano Hosp, Tazuke Kofukai Med Res Inst, Resp Dis Ctr, Kitaku, Osaka
5308480, Japan
C1 Nara Med Univ, Dept Resp Med, Kashihara, Nara 6348521, Japan
C3 Kyoto Prevent Med Ctr
C3 Terada Clin Resp Med & Gen Practice
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200009
PM 38478453
ER
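As a toy companion to the treeH0 metric above, which summarizes the distribution of branch lengths, this sketch extracts branch lengths from a small rooted tree given parent pointers and node coordinates. The real metric is computed with persistent homology, so this reduction and the example tree are assumptions for illustration only.

    # Hedged sketch: summarise branch lengths of a rooted airway-like tree.
    import numpy as np

    # parent[i] is the parent of node i; node 0 is the trachea-carina root.
    parent = [-1, 0, 1, 1, 3, 3]
    pos = np.array([[0, 0, 0], [0, 0, 1.0], [-1, 0, 2.0],
                    [1, 0, 2.0], [0.5, 0, 3.0], [1.5, 0, 3.0]])

    children = {i: [] for i in range(len(parent))}
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)

    # A branch runs from the root or a bifurcation to the next bifurcation/leaf.
    lengths, stack = [], [(0, 0.0)]
    while stack:
        node, acc = stack.pop()
        for c in children[node]:
            step = acc + float(np.linalg.norm(pos[c] - pos[node]))
            if len(children[c]) == 1:        # pass-through point: keep going
                stack.append((c, step))
            else:                            # bifurcation or leaf ends a branch
                lengths.append(step)
                stack.append((c, 0.0))
    print(sorted(np.round(lengths, 3)), "mean:", round(float(np.mean(lengths)), 3))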
PT J
AU Chen, Haomin
Dreizin, David
Gomez, Catalina
Zapaishchykova, Anna
Unberath, Mathias
TI Interpretable Severity Scoring of Pelvic Trauma Through Automated
Fracture Detection and Bayesian Inference.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3428836
DT Journal Article
PD 2024-Jul-22
PY 2024
AB Pelvic ring disruptions result from blunt injury mechanisms and are
potentially lethal mainly due to associated injuries and massive pelvic
hemorrhage. The severity of pelvic fractures in trauma victims is
frequently assessed by grading the fracture according to the Tile AO/OTA
classification in whole-body Computed Tomography (CT) scans. Due to the
high volume of whole-body CT scans generated in trauma centers, the
large information content of a single whole-body CT scan, and the low
speed of manual CT reading, an automatic approach to Tile classification
would provide substantial value, e.g., to prioritize the reading
sequence of the trauma radiologists or enable them to focus on other
major injuries in multi-trauma patients. In such a high-stakes scenario,
an automated method for Tile grading should ideally be transparent such
that the symbolic information provided by the method follows the same
logic a radiologist or orthopedic surgeon would use to determine the
fracture grade. This paper introduces an automated yet interpretable
pelvic trauma decision support system to assist radiologists in fracture
detection and Tile grading. To achieve interpretability despite
processing high-dimensional whole-body CT images, we design a
neurosymbolic algorithm that operates similarly to human interpretation
of CT scans. The algorithm first detects relevant pelvic fractures on
CTs with high specificity using Faster-RCNN. To generate robust fracture
detections and associated detection (un)certainties, we perform
test-time augmentation of the CT scans to apply fracture detection
several times in a self-ensembling approach. The fracture detections are
interpreted using a structural causal model based on clinical best
practices to infer an initial Tile grade. We apply a Bayesian causal
model to recover likely co-occurring fractures that may have been
rejected initially due to the highly specific operating point of the
detector, resulting in an updated list of detected fractures and
corresponding final Tile grade. Our method is transparent in that it
provides fracture location and types, as well as information on
important counterfactuals that would invalidate the system's
recommendation. Our approach achieves an AUC of 0.89/0.74 for
translational and rotational instability, which is comparable to
radiologist performance. Despite being designed for human-machine
teaming, our approach does not compromise on performance compared to
previous black-box methods.
OI Chen, Haomin/0000-0003-0785-1855; Zapaishchykova,
Anna/0000-0001-6860-9160
ZB 0
ZA 0
TC 0
ZR 0
ZS 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-07-24
UT MEDLINE:39037876
PM 39037876
ER
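The test-time-augmentation self-ensembling step described above can be sketched as follows; detect is a stub standing in for the Faster-RCNN detector, and the augmentations (additive noise, horizontal flip) are assumed choices, not the paper's exact pipeline.

    # Hedged sketch of TTA self-ensembling for a detection confidence.
    import numpy as np

    rng = np.random.default_rng(4)

    def detect(image):
        """Stub 'fracture detector' returning one confidence per image."""
        return float(1 / (1 + np.exp(-(image.mean() - 0.5) * 10)))

    def tta_confidence(image, n_aug=8):
        scores = []
        for _ in range(n_aug):
            aug = image + rng.normal(0, 0.02, image.shape)  # assumed augmentation
            if rng.random() < 0.5:
                aug = aug[:, ::-1]                          # horizontal flip
            scores.append(detect(aug))
        return float(np.mean(scores)), float(np.std(scores))  # score, spread

    image = rng.uniform(0.4, 0.7, size=(64, 64))
    mean, std = tta_confidence(image)
    print(f"ensembled confidence {mean:.3f} +/- {std:.3f}")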
PT J
AU Zhou, Lifang
Jiang, Yu
Li, Weisheng
Hu, Jun
Zheng, Shenhai
TI Shape-Scale Co-Awareness Network for 3D Brain Tumor Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2495
EP 2508
DI 10.1109/TMI.2024.3368531
DT Article
PD JUL 2024
PY 2024
AB The accurate segmentation of brain tumor is significant in clinical
practice. Convolutional Neural Network (CNN)-based methods have made
great progress in brain tumor segmentation due to powerful local
modeling ability. However, brain tumors are frequently pattern-agnostic,
i.e., variable in shape, size, and location, which cannot be effectively
matched by traditional CNN-based methods with local and regular
receptive fields. To address the above issues, we propose a shape-scale
co-awareness network (S2CA-Net) for brain tumor segmentation, which can
efficiently learn shape-aware and scale-aware features simultaneously to
enhance pattern-agnostic representations. Primarily, three key
components are proposed to accomplish the co-awareness of shape and
scale. The Local-Global Scale Mixer (LGSM) decouples the extraction of
local and global context by adopting the CNN-Former parallel structure,
which contributes to obtaining finer hierarchical features. The
Multi-level Context Aggregator (MCA) enriches the scale diversity of
input patches by modeling global features across multiple receptive
fields. The Multi-Scale Attentive Deformable Convolution (MS-ADC) learns
the target deformation based on the multiscale inputs, which motivates
the network to enforce feature constraints both in terms of scale and
shape for optimal feature matching. Overall, LGSM and MCA focus on
enhancing the scale-awareness of the network to cope with the size and
location variations, while MS-ADC focuses on capturing deformation
information for optimal shape matching. Finally, their effective
integration prompts the network to perceive variations in shape and
scale simultaneously, which can robustly tackle the variations in
patterns of brain tumors. The experimental results on BraTS 2019, BraTS
2020, MSD BTS Task and BraTS2023-MEN show that S2CA-Net has superior
overall performance in accuracy and efficiency compared to other
state-of-the-art methods. Code: https://github.com/jiangyu945/S2CA-Net.
RI Zheng, Shenhai/AAV-2062-2020; Zhou, Lifang/AAA-6979-2021
Z8 0
TC 0
ZB 0
ZR 0
ZA 0
ZS 0
Z9 0
C1 Chongqing Univ Posts & Telecommun, Key Lab Image Cognit, Chongqing
400065, Peoples R China
C1 Chongqing Univ Posts & Telecommun, Coll Software, Chongqing 400065,
Peoples R China
C1 Guizhou Univ, Key Lab Adv Mfg Technol, Minist Educ, Guiyang 550025,
Guizhou, Peoples R China
C1 Third Mil Med Univ, Southwest Hosp, Dept Neurol, Chongqing 400065,
Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100005
PM 38386578
ER
PT J
AU Wang, Puyang
Guo, Dazhou
Zheng, Dandan
Zhang, Minghui
Yu, Haogang
Sun, Xin
Ge, Jia
Gu, Yun
Lu, Le
Ye, Xianghua
Jin, Dakai
TI Accurate Airway Tree Segmentation in CT Scans via Anatomy-aware
Multi-class Segmentation and Topology-guided Iterative Learning.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3419707
DT Journal Article
PD 2024-Jun-26
PY 2024
AB Intrathoracic airway segmentation in computed tomography is a
prerequisite for various respiratory disease analyses such as chronic
obstructive pulmonary disease, asthma and lung cancer. Due to the low
imaging contrast and the noise exacerbated at peripheral branches, the
topological complexity, and the intra-class imbalance of the airway tree,
it remains challenging for deep learning-based methods to segment the
complete airway tree (i.e., to extract its deeper branches). Unlike other
organs with simpler shapes or topology, the airway's complex tree
structure makes generating the "ground truth" label extremely laborious
(up to seven hours of manual, or three hours of semi-automatic, annotation
per case). Most of the existing airway datasets are incompletely
labeled/annotated, thus limiting the completeness of computer-segmented
airway. In this paper, we propose a new anatomy-aware multi-class airway
segmentation method enhanced by topology-guided iterative self-learning.
Based on the natural airway anatomy, we formulate a simple yet highly
effective anatomy-aware multi-class segmentation task to intuitively
handle the severe intra-class imbalance of the airway. To solve the
incomplete labeling issue, we propose a tailored iterative self-learning
scheme to segment toward the complete airway tree. For generating
pseudo-labels to achieve higher sensitivity (while retaining similar
specificity), we introduce a novel breakage attention map and design a
topology-guided pseudo-label refinement method by iteratively connecting
breaking branches commonly existed from initial pseudo-labels. Extensive
experiments have been conducted on four datasets including two public
challenges. The proposed method achieves the top performance in both
EXACT'09 challenge using average score and ATM'22 challenge on weighted
average score. In a public BAS dataset and a private lung cancer
dataset, our method significantly improves previous leading approaches
by extracting at least (absolute) 6.1% more detected tree length and
5.2% more tree branches, while maintaining comparable precision.
OI Lu, Le/0000-0002-6799-9416; Gu, Yun/0000-0002-4199-0675; Wang,
Puyang/0000-0003-2657-6051
Z8 0
ZS 0
ZB 0
ZA 0
ZR 0
TC 1
Z9 1
EI 1558-254X
DA 2024-06-29
UT MEDLINE:38923479
PM 38923479
ER
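A hedged sketch of the detected-tree-length style of metric reported above: the fraction of a reference centerline covered by a predicted segmentation mask. The toy mask, centerline, and voxel spacing are assumptions, not the challenge evaluation code.

    # Illustrative metric: fraction of reference centerline length detected.
    import numpy as np

    def tree_length_detected(centerline_pts, seg_mask, spacing=1.0):
        """centerline_pts: (n, 3) integer voxel indices along reference branches."""
        inside = seg_mask[tuple(centerline_pts.T)]
        return float(inside.mean()), float(inside.sum() * spacing)  # ratio, mm

    seg = np.zeros((16, 16, 16), dtype=bool)
    seg[:, 8, 8] = True                               # predicted airway column
    line = np.array([[z, 8, 8] for z in range(16)])   # reference centerline
    ratio, mm = tree_length_detected(line, seg, spacing=0.5)
    print(f"detected {ratio:.0%} of tree length ({mm} mm)")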
PT J
AU Liu, Jiaxuan
Zhang, Hui
Tian, Jiang-Huai
Su, Yingjian
Chen, Yurong
Wang, Yaonan
TI R2D2-GAN: Robust Dual Discriminator Generative Adversarial Network for
Microscopy Hyperspectral Image Super-Resolution.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3412033
DT Journal Article
PD 2024-Jun-11
PY 2024
AB High-resolution microscopy hyperspectral (HS) images can provide highly
detailed spatial and spectral information, enabling the identification
and analysis of biological tissues at a microscale level. Recently,
significant efforts have been devoted to enhancing the resolution of HS
images by leveraging high spatial resolution multispectral (MS) images.
However, the inherent hardware constraints lead to a significant
distribution gap between HS and MS images, posing challenges for image
super-resolution within biomedical domains. This discrepancy may arise
from various factors, including variations in camera imaging principles
(e.g., snapshot and push-broom imaging), shooting positions, and the
presence of noise interference. To address these challenges, we
introduced a unique unsupervised super-resolution framework named
R2D2-GAN. This framework utilizes a generative adversarial network (GAN)
to efficiently merge the two data modalities and improve the resolution
of microscopy HS images. Traditionally, supervised approaches have
relied on intuitive and sensitive loss functions, such as mean squared
error (MSE). Our method, trained in a real-world unsupervised setting,
benefits from exploiting consistent information across the two
modalities. It employs a game-theoretic strategy and dynamic adversarial
loss, rather than relying solely on fixed training strategies for
reconstruction loss. Furthermore, we have augmented our proposed model
with a central consistency regularization (CCR) module, aiming to
further enhance the robustness of the R2D2-GAN. Our experimental results
show that the proposed method is accurate and robust for image
super-resolution. We tested our proposed method on
both a real and a synthetic dataset, obtaining promising results in
comparison to other state-of-the-art methods. Our code and datasets are
accessible through Multimedia Content.
OI Liu, Jiaxuan/0000-0002-8011-8943; Chen, Yurong/0000-0002-6171-4555
Z8 0
TC 0
ZB 0
ZA 0
ZR 0
ZS 0
Z9 0
EI 1558-254X
DA 2024-06-13
UT MEDLINE:38861434
PM 38861434
ER
PT J
AU Zhang, Zhenxuan
Yu, Chengjin
Zhang, Heye
Gao, Zhifan
TI Embedding Tasks Into the Latent Space: Cross-Space Consistency for
Multi-Dimensional Analysis in Echocardiography
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2215
EP 2228
DI 10.1109/TMI.2024.3362964
DT Article
PD JUN 2024
PY 2024
AB Multi-dimensional analysis in echocardiography has attracted attention
due to its potential for clinical indices quantification and
computer-aided diagnosis. It can utilize various sources of information
to estimate multiple cardiac indices. However, it still faces the
challenge of inter-task conflict, owing to regional confusion, global
abnormalities, and time-accumulated errors. Task mapping methods
have the potential to address inter-task conflict. However, they may
overlook the inherent differences between tasks, especially for
multi-level tasks (e.g., pixel-level, image-level, and sequence-level
tasks). This may lead to inappropriate local and spurious task
constraints. We propose cross-space consistency (CSC) to overcome the
challenge. The CSC embeds multi-level tasks into the same level to reduce
inherent task differences. This allows multi-level task features to be
consistent in a unified latent space. The latent space extracts
task-common features and constrains the distance in these features. This
constrains the task weight region that satisfies multiple task
conditions. Extensive experiments compare the CSC with fifteen
state-of-the-art echocardiographic analysis methods on five datasets
(10,908 patients). The result shows that the CSC can provide left
ventricular (LV) segmentation (DSC = 0.932), keypoint detection (MAE =
3.06 mm), and keyframe identification (accuracy = 0.943). These results
demonstrate that our method can provide a multi-dimensional analysis of
cardiac function and is robust in large-scale datasets.
RI Gao, Zhifan/O-9082-2019; Zhang, Zhenxuan/AAN-5198-2021; zhenxuan, zhang/; Gao, Zhifan/
OI zhenxuan, zhang/0009-0002-2904-2848; Gao, Zhifan/0000-0002-1576-4439
ZA 0
TC 4
Z8 0
ZS 0
ZB 0
ZR 0
Z9 4
C1 Sun Yat Sen Univ, Sch Biomed Engn, Shenzhen 518107, Peoples R China
C1 Anhui Univ, Sch Big Data & Stat, Hefei 230039, Anhui, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100016
PM 38329865
ER
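To make the cross-space consistency idea above concrete, this sketch projects pixel-, image-, and sequence-level features into one latent space and penalizes their pairwise distances. The projection matrices, feature sizes, and squared-distance penalty are illustrative assumptions rather than the CSC implementation.

    # Hedged sketch: embed multi-level task features into one latent space and
    # penalise pairwise distances between the embeddings.
    import numpy as np

    rng = np.random.default_rng(5)
    f_pixel, f_image, f_seq = (rng.normal(size=d) for d in (256, 128, 64))
    proj = {256: rng.normal(size=(32, 256)) / 16,   # embed everything into R^32
            128: rng.normal(size=(32, 128)) / 16,
            64:  rng.normal(size=(32, 64)) / 8}

    z = [proj[f.size] @ f for f in (f_pixel, f_image, f_seq)]
    consistency = sum(float(np.sum((a - b) ** 2))
                      for i, a in enumerate(z) for b in z[i + 1:])
    print(f"cross-space consistency penalty: {consistency:.2f}")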
PT J
AU Li, Xing
Jing, Kaili
Yang, Yan
Wang, Yongbo
Ma, Jianhua
Zheng, Hairong
Xu, Zongben
TI Noise-Generating and Imaging Mechanism Inspired Implicit Regularization
Learning Network for Low-Dose CT Reconstruction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1677
EP 1689
DI 10.1109/TMI.2023.3347258
DT Article
PD MAY 2024
PY 2024
AB Low-dose computed tomography (LDCT) helps to reduce radiation risks in
CT scanning while maintaining image quality, which involves a consistent
pursuit of lower incident rays and higher reconstruction performance.
Although deep learning approaches have achieved encouraging success in
LDCT reconstruction, most of them treat the task as a general inverse
problem in either the image domain or the dual (sinogram and image)
domains. Such frameworks have not considered the original noise
generation of the projection data and suffer from limited performance
improvement for the LDCT task. In this paper, we propose a novel
reconstruction model based on noise-generating and imaging mechanism in
full-domain, which fully considers the statistical properties of
intrinsic noises in LDCT and prior information in sinogram and image
domains. To solve the model, we propose an optimization algorithm based
on the proximal gradient technique. Specifically, we derive the
approximate solutions of the integer programming problem on the
projection data theoretically. Instead of hand-crafting the sinogram and
image regularizers, we propose to unroll the optimization algorithm to
be a deep network. The network implicitly learns the proximal operators
of sinogram and image regularizers with two deep neural networks,
providing a more interpretable and effective reconstruction procedure.
Numerical results demonstrate that our proposed method achieves
improvements of >2.9 dB in peak signal-to-noise ratio, >1.4% in the
structural similarity metric, and a reduction of >9 HU in root mean square
error over current state-of-the-art LDCT methods.
OI Xing, Li/0009-0001-7543-9152
ZB 0
ZR 0
ZS 0
TC 0
Z8 0
ZA 0
Z9 0
C1 Xi An Jiao Tong Univ, Inst Informat & Syst Sci, Sch Math & Stat, Xian
710049, Peoples R China
C1 Univ Ottawa, Dept Math & Stat, Ottawa, ON K1N 6N5, Canada
C1 Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
C1 Pazhou Lab Huangpu, Guangzhou 510555, Peoples R China
C1 Chinese Acad Sci, Shenzhen Inst Adv Technol, Paul C Lauterbur Res Ctr
Biomed Imaging, Shenzhen 518055, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800023
PM 38145543
ER
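The unrolled proximal-gradient scheme described above can be sketched in a few lines: each iteration takes a gradient step on the data-fidelity term and applies a proximal operator. Soft-thresholding stands in for the paper's learned proximal networks, and the toy operator and sparse signal are assumptions.

    # Hedged sketch of an unrolled proximal-gradient iteration:
    #   x_{k+1} = prox(x_k - alpha * A^T (A x_k - y))
    import numpy as np

    rng = np.random.default_rng(6)
    n, m = 64, 48
    a = rng.normal(size=(m, n)) / np.sqrt(m)       # toy forward operator "A"
    x_true = np.zeros(n)
    x_true[rng.choice(n, 6, replace=False)] = 1.0
    y = a @ x_true + 0.01 * rng.normal(size=m)     # noisy "sinogram"

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    alpha = 1.0 / np.linalg.norm(a, 2) ** 2        # step size from ||A||_2^2
    x = np.zeros(n)
    for _ in range(200):                           # unrolled iterations
        x = soft(x - alpha * a.T @ (a @ x - y), t=0.01 * alpha)
    print("recovery error:", round(float(np.linalg.norm(x - x_true)), 3))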
PT J
AU Onishi, Yuya
Hashimoto, Fumio
Ote, Kibo
Ota, Ryosuke
TI Whole Reconstruction-Free System Design for Direct Positron Emission
Imaging From Image Generation to Attenuation Correction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1654
EP 1663
DI 10.1109/TMI.2023.3344095
DT Article
PD MAY 2024
PY 2024
AB Direct positron emission imaging (dPEI), which does not require a
mathematical reconstruction step, is a next-generation molecular imaging
modality. To maximize the practical applicability of the dPEI system to
clinical practice, we introduce a novel reconstruction-free
image-formation method called direct μCompton imaging, which directly
localizes the interaction position of Compton scattering of the
annihilation photons in three-dimensional space, utilizing the same
compact geometry as dPEI with ultrafast time-of-flight
radiation detectors. This unique imaging method not only provides the
anatomical information about an object but can also be applied to
attenuation correction of dPEI images. Evaluations through Monte Carlo
simulation showed that functional and anatomical hybrid images can be
acquired using this multimodal imaging system. By fusing the images, it
is possible to simultaneously access various object data, which ensures
the synergistic effect of the two imaging methodologies. In addition,
attenuation correction improves the quantification of dPEI images. The
realization of the whole reconstruction-free imaging system from image
generation to quantitative correction provides a new perspective in
molecular imaging.
RI Hashimoto, Fumio/AAJ-3620-2020; Onishi, Yuya/AAY-4901-2021; Ote, Kibo/HOH-5837-2023; Ota, Ryosuke/AAG-9003-2021; , Kibo/; Ota, Ryosuke/; Hashimoto, Fumio/; Onishi, Yuya/
OI , Kibo/0000-0003-1826-5739; Ota, Ryosuke/0000-0001-6345-1982; Hashimoto,
Fumio/0000-0003-2352-0538; Onishi, Yuya/0000-0001-9715-4636
ZB 0
Z8 0
ZA 0
TC 0
ZR 0
ZS 0
Z9 0
C1 Hamamatsu Photon KK, Cent Res Lab, Hamamatsu 4348601, Japan
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800029
PM 38109238
ER
PT J
AU Ta, Kevinminh
Ahn, Shawn S.
Thorn, Stephanie L.
Stendahl, John C.
Zhang, Xiaoran
Langdon, Jonathan
Staib, Lawrence H.
Sinusas, Albert J.
Duncan, James S.
TI Multi-Task Learning for Motion Analysis and Segmentation in 3D
Echocardiography
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 2010
EP 2020
DI 10.1109/TMI.2024.3355383
DT Article
PD MAY 2024
PY 2024
AB Characterizing left ventricular deformation and strain using 3D+time
echocardiography provides useful insights into cardiac function and can
be used to detect and localize myocardial injury. To achieve this, it is
imperative to obtain accurate motion estimates of the left ventricle. In
many strain analysis pipelines, this step is often accompanied by a
separate segmentation step; however, recent works have shown both tasks
to be highly related and can be complementary when optimized jointly. In
this work, we present a multi-task learning network that can
simultaneously segment the left ventricle and track its motion between
multiple time frames. Two task-specific networks are trained using a
composite loss function. Cross-stitch units combine the activations of
these networks by learning shared representations between the tasks at
different levels. We also propose a novel shape-consistency unit that
encourages motion-propagated segmentations to match directly predicted
segmentations. Using a combined synthetic and in-vivo 3D
echocardiography dataset, we demonstrate that our proposed model can
achieve excellent estimates of left ventricular motion displacement and
myocardial segmentation. Additionally, we observe strong correlation of
our image-based strain measurements with crystal-based strain
measurements as well as good correspondence with SPECT perfusion
mappings. Finally, we demonstrate the clinical utility of the
segmentation masks in estimating ejection fraction and sphericity
indices that correspond well with benchmark measurements.
RI Zhang, Xiaoran/AAJ-6366-2021; Langdon, Jonathan/JGL-9505-2023; Zhang, Xiaoran/AAM-2076-2021; Stendahl, John/; Staib, Lawrence/; Duncan, James/
OI Zhang, Xiaoran/0000-0001-8918-7374; Stendahl, John/0000-0002-1568-9280;
Staib, Lawrence/0000-0002-9516-5136; Duncan, James/0000-0002-5167-9856
ZR 0
TC 1
ZB 0
Z8 0
ZA 0
ZS 0
Z9 1
C1 Yale Univ, Dept Biomed Engn, New Haven, CT 06511 USA
C1 Yale Univ, Dept Internal Med, Sect Cardiovasc Med, New Haven, CT 06511
USA
C1 Yale Univ, Dept Radiol & Biomed Imaging, New Haven, CT 06511 USA
C1 Yale Univ, Dept Biomed Engn, Dept Elect Engn, New Haven, CT 06511 USA
C1 Yale Univ, Dept Internal Med, Dept Radiol & Biomed Imaging, Sect
Cardiovasc Med, New Haven, CT 06511 USA
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800005
PM 38231820
ER
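A hedged sketch of the shape-consistency unit described above: warp a segmentation with an estimated displacement field and score its agreement (Dice) with the directly predicted segmentation. The nearest-neighbour warp, constant flow field, and toy masks are illustrative assumptions.

    # Illustrative shape-consistency check: warped segmentation vs. prediction.
    import numpy as np

    def warp_nearest(mask, flow):
        """Backward-warp a 2-D binary mask by integer-rounded displacements."""
        h, w = mask.shape
        yy, xx = np.mgrid[0:h, 0:w]
        ys = np.clip(np.rint(yy - flow[..., 0]).astype(int), 0, h - 1)
        xs = np.clip(np.rint(xx - flow[..., 1]).astype(int), 0, w - 1)
        return mask[ys, xs]

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    seg_t0 = np.zeros((32, 32), dtype=bool)
    seg_t0[10:20, 10:20] = True
    flow = np.zeros((32, 32, 2))
    flow[..., 1] = 2.0                                # 2-pixel shift in x
    propagated = warp_nearest(seg_t0, flow)
    seg_t1_pred = np.zeros_like(seg_t0)
    seg_t1_pred[10:20, 12:22] = True
    print("shape-consistency Dice:", round(dice(propagated, seg_t1_pred), 3))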
PT J
AU Liang, Yinhao
Tang, Wenjie
Wang, Ting
Ng, Wing W. Y.
Chen, Siyi
Jiang, Kuiming
Wei, Xinhua
Jiang, Xinqing
Guo, Yuan
TI HRadNet: A Hierarchical Radiomics-Based Network for Multicenter Breast
Cancer Molecular Subtypes Prediction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1225
EP 1236
DI 10.1109/TMI.2023.3331301
DT Article
PD MAR 2024
PY 2024
AB Breast cancer is a heterogeneous disease whose molecular subtypes are
closely related to treatment and prognosis.
Therefore, the goal of this work is to differentiate between luminal and
non-luminal subtypes of breast cancer. The hierarchical radiomics
network (HRadNet) is proposed for breast cancer molecular subtypes
prediction based on dynamic contrast-enhanced magnetic resonance
imaging. HRadNet fuses multilayer features with the metadata of images
to take advantage of conventional radiomics methods and general
convolutional neural networks. A two-stage training mechanism is adopted
to improve the generalization capability of the network for multicenter
breast cancer data. The ablation study shows the effectiveness of each
component of HRadNet. Furthermore, the influence of features from
different layers and metadata fusion are also analyzed. It reveals that
selecting certain layers of features for a specified domain can yield
further performance improvements. Experimental results on three data
sets from different devices demonstrate the effectiveness of the
proposed network. HRadNet also performs well when transferred to other
domains without fine-tuning.
RI chen, siyi/KXR-2904-2024; Wang, Ting/KPB-6170-2024; Liang, Yinhao/
OI Wang, Ting/0000-0001-5967-5940; Liang, Yinhao/0000-0001-7004-4840
ZA 0
ZS 0
TC 2
Z8 1
ZB 0
ZR 0
Z9 3
C1 South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples
R China
C1 Guangzhou First Peoples Hosp, Dept Radiol, Guangzhou 510180, Peoples R
China
C1 South China Univ Technol, Sch Med, Guangzhou 510180, Peoples R China
C1 Guangdong Women & Children Hosp, Dept Radiol, Guangzhou 510010,
Guangdong, Peoples R China
C3 Guangdong Women & Children Hosp
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600008
PM 37938946
ER
PT J
AU Ahmadi, N.
Tsang, M. Y.
Gu, A. N.
Tsang, T. S. M.
Abolmaesumi, P.
TI Transformer-Based Spatio-Temporal Analysis for Classification of Aortic
Stenosis Severity From Echocardiography Cine Series
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 366
EP 376
DI 10.1109/TMI.2023.3305384
DT Article
PD JAN 2024
PY 2024
AB Aortic stenosis (AS) is characterized by restricted motion and
calcification of the aortic valve and is the deadliest valvular cardiac
disease. Assessment of AS severity is typically done by expert
cardiologists using Doppler measurements of valvular flow from
echocardiography. However, this limits the assessment of AS to hospitals
staffed with experts who can provide comprehensive echocardiography
services.
As accurate Doppler acquisition requires significant clinical training,
in this paper, we present a deep learning framework to determine the
feasibility of AS detection and severity classification based only on
two-dimensional echocardiographic data. We demonstrate that our proposed
spatio-temporal architecture effectively and efficiently combines both
anatomical features and motion of the aortic valve for AS severity
classification. Our model can process cardiac echo cine series of
varying length and can identify, without explicit supervision, the
frames that are most informative towards the AS diagnosis. We present an
empirical study on how the model learns phases of the heart cycle
without any supervision and frame-level annotations. Our architecture
outperforms state-of-the-art results on a private and a public dataset,
achieving 95.2% and 91.5% in AS detection, and 78.1% and 83.8% in AS
severity classification on the private and public datasets,
respectively. Notably, due to the lack of a large public video dataset
for AS, we made slight adjustments to our architecture for the public
dataset. Furthermore, our method addresses common problems in training
deep networks with clinical ultrasound data, such as a low
signal-to-noise ratio and frequently uninformative frames. Our source
code is available at: https://github.com/neda77aa/FTC.git
RI Tsang, Teresa/AAO-3518-2021; AHMADI AMIRI, S. NEDA/; Abolmaesumi, Purang/; Tsang, Teresa S.M./; Gu, Ang Nan/
OI AHMADI AMIRI, S. NEDA/0000-0002-4091-6916; Abolmaesumi,
Purang/0000-0002-7259-8609; Tsang, Teresa S.M./0000-0003-4865-7119; Gu,
Ang Nan/0000-0001-8926-2397
ZS 0
ZB 0
Z8 0
TC 4
ZA 0
ZR 0
Z9 4
C1 Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4,
Canada
C1 Univ British Columbia, Fac Med, Vancouver, BC V6T 1Z4, Canada
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600034
PM 37581960
ER
PT J
AU Li, Yiyue
Qian, Guangwu
Jiang, Xiaoshuang
Jiang, Zekun
Wen, Wen
Zhang, Shaoting
Li, Kang
Lao, Qicheng
TI Hierarchical-Instance Contrastive Learning for Minority Detection on
Imbalanced Medical Datasets
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 416
EP 426
DI 10.1109/TMI.2023.3310716
DT Article
PD JAN 2024
PY 2024
AB Deep learning methods are often hampered by issues such as data
imbalance and data hunger. In medical imaging, malignant or rare
diseases frequently form minority classes in a dataset, featuring
diversified distributions. Beyond that, insufficient labels and unseen
cases also present conundrums for training on the minority classes. To
confront the stated problems, we propose a novel Hierarchical-instance
Contrastive Learning (HCLe) method for minority detection by only
involving data from the majority class in the training stage. To tackle
inconsistent intra-class distribution in majority classes, our method
introduces two branches, where the first branch employs an auto-encoder
network augmented with three constraint functions to effectively extract
image-level features, and the second branch designs a novel contrastive
learning network by taking into account the consistency of features
among hierarchical samples from majority classes. The proposed method is
further refined with a diverse mini-batch strategy, enabling the
identification of minority classes under multiple conditions. Extensive
experiments have been conducted to evaluate the proposed method on three
datasets of different diseases and modalities. The experimental results
show that the proposed method outperforms the state-of-the-art methods.
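Note: one plausible reading of the contrastive branch described above is
an InfoNCE-style pull between an image embedding and the embedding of a
matched hierarchical (majority-class) sample. The sketch below is a
generic illustration under assumed shapes, not the paper's code.

    import torch
    import torch.nn.functional as F

    def hierarchy_consistency_loss(anchor, positives, temperature=0.1):
        """InfoNCE over a batch: diagonal pairs are the matched positives."""
        a = F.normalize(anchor, dim=1)      # (B, D)
        p = F.normalize(positives, dim=1)   # (B, D)
        logits = a @ p.t() / temperature    # (B, B) similarity matrix
        labels = torch.arange(a.shape[0])   # positive index of each anchor
        return F.cross_entropy(logits, labels)

    anchor = torch.randn(16, 128, requires_grad=True)
    positives = torch.randn(16, 128)
    loss = hierarchy_consistency_loss(anchor, positives)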
RI Jiang, Zekun/ABE-2831-2020; Zhang, Yu-Dong/HKM-7209-2023; Li, Kang/; Jiang, Zekun/; Li, Yiyue/
OI Li, Kang/0000-0002-8136-9816; Jiang, Zekun/0000-0002-3178-7761; Li,
Yiyue/0000-0001-5435-1699
Z8 0
ZR 0
ZA 0
TC 0
ZB 0
ZS 0
Z9 0
C1 Sichuan Univ, West China Hosp, Dept Pathol, Chengdu 610041, Sichuan,
Peoples R China
C1 Sichuan Univ, West China Hosp, West China Biomed Big Data Ctr, Chengdu
610041, Sichuan, Peoples R China
C1 Sichuan Univ, Sichuan Univ Pittsburgh Inst, Dept Comp Sci, Chengdu
610041, Peoples R China
C1 Sichuan Univ, West China Hosp, Dept Ophthalmol, Chengdu 610041, Peoples
R China
C1 Sichuan Univ, West China Hosp, Dept Ultrasound, Chengdu 610041, Peoples
R China
C1 Shanghai Artificial Intelligence Lab, Shanghai 200030, Peoples R China
C1 Beijing Univ Posts & Telecommun BUPT, Sch Artificial Intelligence,
Beijing 100876, Peoples R China
C3 Shanghai Artificial Intelligence Lab
C3 Beijing Univ Posts & Telecommun BUPT
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600022
PM 37651492
ER
PT J
AU Mei, Lanzhuju
Fang, Yu
Zhao, Yue
Zhou, Xiang Sean
Zhu, Min
Cui, Zhiming
Shen, Dinggang
TI DTR-Net: Dual-Space 3D Tooth Model Reconstruction From Panoramic X-Ray
Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 517
EP 528
DI 10.1109/TMI.2023.3313795
DT Article
PD JAN 2024
PY 2024
AB In digital dentistry, cone-beam computed tomography (CBCT) can provide
complete 3D tooth models, yet suffers from a long concern of requiring
excessive radiation dose and higher expense. Therefore, 3D tooth model
reconstruction from 2D panoramic X-ray image is more cost-effective, and
has attracted great interest in clinical applications. In this paper, we
propose a novel dual-space framework, namely DTR-Net, to reconstruct 3D
tooth model from 2D panoramic X-ray images in both image and geometric
spaces. Specifically, in the image space, we apply a 2D-to-3D generative
model to recover intensities of CBCT image, guided by a task-oriented
tooth segmentation network in a collaborative training manner.
Meanwhile, in the geometric space, we benefit from an implicit function
network in the continuous space, learning using points to capture
complicated tooth shapes with geometric properties. Experimental results
demonstrate that our proposed DTR-Net achieves state-of-the-art
performance both quantitatively and qualitatively in 3D tooth model
reconstruction, indicating its potential application in dental practice.
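Note: the geometric-space branch above uses an implicit function network
queried at continuous 3D points. A minimal occupancy-style sketch under
assumed sizes follows; OccupancyMLP and the shape-code dimension are
invented, not the DTR-Net definition.

    import torch
    import torch.nn as nn

    class OccupancyMLP(nn.Module):
        """Map a 3D query point plus a per-tooth shape code to occupancy."""
        def __init__(self, code_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, pts: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
            # pts: (N, 3) continuous coordinates; code: (N, code_dim)
            return torch.sigmoid(self.net(torch.cat([pts, code], dim=-1)))

    pts = torch.rand(1024, 3) * 2 - 1            # query points in [-1, 1]^3
    code = torch.randn(1, 64).expand(1024, 64)   # one shared shape code
    occ = OccupancyMLP()(pts, code)              # (1024, 1) occupancy in (0, 1)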
RI Zhou, Xiang/AAA-3714-2020; Fang, Yu/
OI Fang, Yu/0000-0001-5121-3581
Z8 0
ZR 0
ZA 0
ZB 0
ZS 0
TC 0
Z9 0
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples
R China
C1 Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing
400065, Peoples R China
C1 Shanghai United Imaging Intelligence Co Ltd, Shanghai 200230, Peoples R
China
C1 Shanghai Jiao Tong Univ, Shanghai Peoples Hosp 9, Shanghai 200011,
Peoples R China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 201210, Peoples R China
C3 Shanghai United Imaging Intelligence Co Ltd
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600011
PM 37751352
ER
PT J
AU Noichl, Wolfgang
De Marco, Fabio
Willer, Konstantin
Urban, Theresa
Frank, Manuela
Schick, Rafael
Gleich, Bernhard
Hehn, Lorenz
Gustschin, Alex
Meyer, Pascal
Koehler, Thomas
Maack, Ingo
Engel, Klaus-Jurgen
Lundt, Bernd
Renger, Bernhard
Fingerle, Alexander
Pfeiffer, Daniela
Rummeny, Ernst
Herzen, Julia
Pfeiffer, Franz
TI Correction for Mechanical Inaccuracies in a Scanning Talbot-Lau
Interferometer
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 28
EP 38
DI 10.1109/TMI.2023.3288358
DT Article
PD JAN 2024
PY 2024
AB Grating-based X-ray phase-contrast and in particular dark-field
radiography are promising new imaging modalities for medical
applications. Currently, the potential advantage of dark-field imaging
in early-stage diagnosis of pulmonary diseases in humans is being
investigated. These studies make use of a comparatively large scanning
interferometer at short acquisition times, which comes at the expense of
a significantly reduced mechanical stability as compared to tabletop
laboratory setups. Vibrations create random fluctuations of the grating
alignment, causing artifacts in the resulting images. Here, we describe
a novel maximum likelihood method for estimating this motion, thereby
preventing these artifacts. It is tailored to scanning setups and does
not require any sample-free areas. Unlike any previously described
method, it accounts for motion in between as well as during exposures.
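Note: to make the estimation idea concrete, the toy NumPy/SciPy sketch
below fits a standard phase-stepping model I = a(1 + v cos(phi_k + eps_k
+ d)) in which a per-exposure jitter eps_k is shared across detector
pixels, via (approximate) maximum likelihood. It is a simplified
stand-in: the paper's method additionally models motion during exposures
and needs no sample-free areas.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    P, K = 32, 8                                   # detector pixels, phase steps
    phi = np.linspace(0, 2 * np.pi, K, endpoint=False)
    eps_true = rng.normal(0.0, 0.15, K)            # shared per-exposure grating jitter
    a = rng.uniform(80, 120, (P, 1))               # per-pixel mean counts
    v = rng.uniform(0.15, 0.35, (P, 1))            # per-pixel visibility
    d = rng.uniform(0.0, 2 * np.pi, (P, 1))        # per-pixel interference phase
    I = rng.poisson(a * (1 + v * np.cos(phi + eps_true + d))).astype(float)

    F1 = (I * np.exp(-1j * phi)).sum(axis=1) * 2 / K   # ~ a * v * exp(i * d)
    d0, v0 = np.angle(F1), np.abs(F1) / I.mean(axis=1)

    def residual(p):
        a_, v_, d_ = p[:P, None], p[P:2 * P, None], p[2 * P:3 * P, None]
        eps = np.concatenate([[0.0], p[3 * P:]])   # pin eps_0 = 0 (gauge freedom)
        model = a_ * (1 + v_ * np.cos(phi + eps + d_))
        return ((I - model) / np.sqrt(np.maximum(model, 1.0))).ravel()

    p0 = np.concatenate([I.mean(axis=1), v0, d0, np.zeros(K - 1)])
    fit = least_squares(residual, p0)
    print(np.round(fit.x[3 * P:], 3))              # jitter estimates rel. to exposure 0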
RI Gleich, Bernhard/D-4803-2009; De Marco, Fabio/JDM-6021-2023; Noichl, Wolfgang/; De Marco, Fabio/; Gustschin, Alex/; MEYER, Pascal/A-7788-2011; Urban, Theresa/; Schick, Rafael/; Frank, Manuela/
OI Noichl, Wolfgang/0000-0002-3588-8164; De Marco,
Fabio/0000-0002-3561-7305; Gustschin, Alex/0000-0002-1559-5495; MEYER,
Pascal/0000-0001-5454-7739; Urban, Theresa/0000-0002-0352-8180; Schick,
Rafael/0000-0001-7650-7009; Frank, Manuela/0000-0001-5642-0694
Z8 0
ZS 0
ZB 0
ZR 0
ZA 0
TC 1
Z9 1
C1 Tech Univ Munich, Chair Biomed Phys, TUM Sch Nat Sci, Dept Phys, D-85748
Garching, Germany
C1 Tech Univ Munich, Munich Inst Biomed Engn, D-85748 Garching, Germany
C1 Tech Univ Munich, Sch Med, Dept Diagnost & Intervent Radiol, D-81675
Munich, Germany
C1 Karlsruhe Inst Technol, Inst Microstruct Technol, D-76344 Eggenstein
Leopoldshafen, Germany
C1 Philips Res Hamburg, D-22335 Hamburg, Germany
C1 Philips Med Syst DMC GmbH, Hamburg, Germany
C1 Philips Res Eindhoven, NL-5656 AE Eindhoven, Netherlands
C1 Tech Univ Munich, Klinikum Rechts Isar, D-81675 Munich, Germany
C1 Tech Univ Munich, Sch Med, Dept Diagnost & Intervent Radiol, Klinikum
Rechts Isar, Munich, Germany
C1 Tech Univ Munich, TUM Inst Adv Study, D-85748 Garching, Germany
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600046
PM 37342956
ER
PT J
AU Wang, Hong
Xie, Qi
Zeng, Dong
Ma, Jianhua
Meng, Deyu
Zheng, Yefeng
TI OSCNet: Orientation-Shared Convolutional Network for CT Metal Artifact
Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 489
EP 502
DI 10.1109/TMI.2023.3310987
DT Article
PD JAN 2024
PY 2024
AB X-ray computed tomography (CT) has been broadly adopted in clinical
applications for disease diagnosis and image-guided interventions.
However, metals within patients always cause unfavorable artifacts in
the recovered CT images. Albeit attaining promising reconstruction
results for this metal artifact reduction (MAR) task, most of the
existing deep-learning-based approaches have some limitations. The
critical issue is that most of these methods have not fully exploited
the important prior knowledge underlying this specific MAR task.
Therefore, in this paper, we carefully investigate the inherent
characteristics of metal artifacts which present rotationally
symmetrical streaking patterns. Then we specifically propose an
orientation-shared convolution representation mechanism to adapt such
physical prior structures and utilize Fourier-series-expansion-based
filter parametrization for modelling artifacts, which can finely
separate metal artifacts from body tissues. By adopting the classical
proximal gradient algorithm to solve the model and then utilizing the
deep unfolding technique, we easily build the corresponding
orientation-shared convolutional network, termed as OSCNet. Furthermore,
considering that different sizes and types of metals would lead to
different artifact patterns (e.g., intensity of the artifacts), to
better improve the flexibility of artifact learning and fully exploit
the reconstructed results at iterative stages for information
propagation, we design a simple-yet-effective sub-network for the
dynamic convolution representation of artifacts. By easily integrating
the sub-network into the proposed OSCNet framework, we further construct
a more flexible network structure, called OSCNet+, which improves the
generalization performance. Through extensive experiments conducted on
synthetic and clinical datasets, we comprehensively substantiate the
effectiveness of our proposed methods. Code will be released at
https://github.com/hongwang01/OSCNet.
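Note: a minimal PyTorch sketch of the deep-unfolding pattern the model
builds on, i.e., unrolled proximal-gradient steps with a learned
refinement per stage. The plain CNN below merely stands in for the
orientation-shared, Fourier-parametrized filters; all names and sizes
are invented.

    import torch
    import torch.nn as nn

    class UnfoldedProxGrad(nn.Module):
        """T unrolled proximal-gradient stages for recovering x from y."""
        def __init__(self, steps: int = 5):
            super().__init__()
            self.steps = steps
            self.eta = nn.Parameter(torch.full((steps,), 0.5))  # learned step sizes
            self.prox = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1),
                ) for _ in range(steps)
            ])

        def forward(self, y: torch.Tensor) -> torch.Tensor:
            x = y.clone()                        # init with the corrupted image
            for t in range(self.steps):
                z = x - self.eta[t] * (x - y)    # gradient step on 0.5 * ||x - y||^2
                x = z + self.prox[t](z)          # learned proximal/refinement step
            return x

    y = torch.randn(2, 1, 64, 64)
    x_hat = UnfoldedProxGrad()(y)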
RI Xie, Sherlock/AFR-0741-2022; Meng, Deyu/KOC-3154-2024; Zeng, Dongqiang/C-3828-2019; Zheng, Yefeng/ABG-7053-2020; Wang, Hong/; Zeng, Dong/; Zheng, Yefeng/
OI Wang, Hong/0000-0002-6520-7681; Zeng, Dong/0000-0001-6015-5010; Zheng,
Yefeng/0000-0003-2195-2847
Z8 0
ZA 0
ZR 0
ZB 0
ZS 0
TC 2
Z9 2
C1 Tencent Jarvis Lab, Shenzhen 518057, Peoples R China
C1 Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Peoples R China
C1 Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
C1 Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Shaanxi, Peoples R
China
C1 Xi An Jiao Tong Univ, Key Lab Intelligent Networks & Network Secur,
Minist Educ, Xian 510555, Shaanxi, Peoples R China
C1 Pazhou Lab Huangpu, Guangzhou 510555, Peoples R China
C3 Tencent Jarvis Lab
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600031
PM 37656650
ER
PT J
AU Zhou, Lianyu
Yu, Lequan
Wang, Liansheng
TI RECIST-Induced Reliable Learning: Geometry-Driven Label Propagation for
Universal Lesion Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 149
EP 161
DI 10.1109/TMI.2023.3294824
DT Article
PD JAN 2024
PY 2024
AB Automatic universal lesion segmentation (ULS) from Computed Tomography
(CT) images can ease the burden of radiologists and provide a more
accurate assessment than the current Response Evaluation Criteria In
Solid Tumors (RECIST) guideline measurement. However, this task is
underdeveloped due to the absence of large-scale pixel-wise labeled
data. This paper presents a weakly-supervised learning framework to
utilize the large-scale existing lesion databases in hospital Picture
Archiving and Communication Systems (PACS) for ULS. Unlike previous
methods that construct pseudo surrogate masks for fully supervised
training through shallow interactive segmentation techniques, we propose
to unearth the implicit information from RECIST annotations and thus
design a unified RECIST-induced reliable learning (RiRL) framework.
Particularly, we introduce a novel label generation procedure and an
on-the-fly soft label propagation strategy to avoid noisy training and
poor generalization problems. The former, named RECIST-induced geometric
labeling, uses clinical characteristics of RECIST to preliminarily and
reliably propagate the label. With the labeling process, a trimap
divides the lesion slices into three regions, including certain
foreground, background, and unclear regions, which consequently enables
a strong and reliable supervision signal over a wide region. A topological
knowledge-driven graph is then built to conduct on-the-fly label
propagation that further refines the segmentation boundary toward the
optimum. Experimental results on a public benchmark
dataset demonstrate that the proposed method surpasses the SOTA
RECIST-based ULS methods by a large margin. Our approach surpasses SOTA
approaches by over 2.0%, 1.5%, 1.4%, and 1.6% Dice with ResNet101,
ResNet50, HRNet, and ResNest50 backbones, respectively.
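Note: a hypothetical NumPy sketch of RECIST-induced geometric labeling
as described: from a lesion's long/short diameters, a shrunken ellipse
marks certain foreground, an expanded one bounds certain background, and
the band in between stays unclear for later label propagation. The
margins fg_scale and bg_scale are invented, not the paper's rule.

    import numpy as np

    def recist_trimap(shape, center, r_long, r_short, theta,
                      fg_scale=0.8, bg_scale=1.4):
        """Trimap: 1 = certain lesion, 0 = certain background, 0.5 = unclear."""
        H, W = shape
        yy, xx = np.mgrid[:H, :W].astype(float)
        dy, dx = yy - center[0], xx - center[1]
        # rotate into the long-/short-axis frame of the RECIST annotation
        u = dx * np.cos(theta) + dy * np.sin(theta)
        w = -dx * np.sin(theta) + dy * np.cos(theta)
        r2 = (u / r_long) ** 2 + (w / r_short) ** 2   # normalized elliptical radius^2
        trimap = np.full(shape, 0.5)
        trimap[r2 <= fg_scale ** 2] = 1.0             # shrunken ellipse: surely lesion
        trimap[r2 >= bg_scale ** 2] = 0.0             # expanded ellipse: surely background
        return trimap

    tm = recist_trimap((128, 128), center=(64, 64), r_long=30, r_short=18,
                       theta=np.deg2rad(25))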
RI zhou, lianyu/GSE-0004-2022; zhou, lianyu/; Yu, Lequan/U-5377-2019
OI zhou, lianyu/0000-0002-3412-5106; Yu, Lequan/0000-0002-9315-6527
ZB 0
TC 0
ZS 0
Z8 0
ZR 0
ZA 0
Z9 0
C1 Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
C1 Univ Hong Kong, Dept Stat & Actuarial Sci, Hong Kong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600003
PM 37436855
ER
PT J
AU He, Linchao
Du, Wenchao
Liao, Peixi
Fan, Fenglei
Chen, Hu
Yang, Hongyu
Zhang, Yi
TI Solving Zero-Shot Sparse-View CT Reconstruction With Variational Score
Solver.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3475516
DT Journal Article
PD 2024-Oct-07
PY 2024
AB Computed tomography (CT) stands as a ubiquitous medical diagnostic tool.
Nonetheless, the radiation-related concerns associated with CT scans
have raised public apprehensions. Mitigating radiation dosage in CT
imaging poses an inherent challenge as it inevitably compromises the
fidelity of CT reconstructions, impacting diagnostic accuracy. While
previous deep learning techniques have exhibited promise in enhancing CT
reconstruction quality, they remain hindered by the reliance on paired
data, which is arduous to procure. In this study, we present a novel
approach named Variational Score Solver (VSS) for solving sparse-view
reconstruction without paired data. Our approach entails the acquisition
of a probability distribution from densely sampled CT reconstructions,
employing a latent diffusion model. High-quality reconstruction outcomes
are achieved through an iterative process, wherein the diffusion model
serves as the prior term, subsequently integrated with the data
consistency term. Notably, rather than directly employing the prior
diffusion model, we distill prior knowledge by finding the fixed point
of the diffusion model. This framework empowers us to exercise precise
control over the process. Moreover, we depart from modeling the
reconstruction outcomes as deterministic values, opting instead for a
distribution-based approach. This enables us to achieve more accurate
reconstructions utilizing a trainable model. Our approach introduces a
fresh perspective to the realm of zero-shot CT reconstruction,
circumventing the constraints of supervised learning. Our extensive
qualitative and quantitative experiments unequivocally demonstrate that
VSS surpasses other contemporary unsupervised methods and achieves
results comparable to the most advanced supervised methods in sparse-view
reconstruction tasks. Code is available at
https://github.com/fpsandnoob/vss.
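Note: a toy sketch of the iteration outlined above, alternating a
diffusion-prior step with a data-consistency gradient step. The prior is
an identity placeholder so the code runs end-to-end (in VSS it is the
learned model), and the subsampling operator A with its zero-filling
adjoint At is a 1D stand-in for sparse-view projection.

    import torch

    def prior_step(x: torch.Tensor, sigma: float) -> torch.Tensor:
        """Placeholder for the pretrained diffusion prior (identity here)."""
        return x

    def sparse_view_solve(y, A, At, steps=20, lam=0.5):
        x = At(y)                                     # crude zero-filled init
        for t in range(steps):
            x = prior_step(x, sigma=1.0 - t / steps)  # prior (denoising) step
            x = x - lam * At(A(x) - y)                # data-consistency gradient step
        return x

    def A(x):                                         # toy sparse-view operator:
        return x[..., ::4]                            # keep every 4th measurement

    def At(y):                                        # its adjoint: zero-filling
        x = torch.zeros(*y.shape[:-1], y.shape[-1] * 4)
        x[..., ::4] = y
        return x

    x_true = torch.randn(1, 256)
    x_hat = sparse_view_solve(A(x_true), A, At)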
OI He, Linchao/0000-0002-0562-3026
ZA 0
Z8 0
ZS 0
TC 0
ZB 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-10-09
UT MEDLINE:39374276
PM 39374276
ER
PT J
AU Chen, Yixin
Gao, Yajuan
Zhu, Lei
Shao, Wenrui
Lu, Yanye
Han, Hongbin
Xie, Zhaoheng
TI PCNet: Prior Category Network for CT Universal Segmentation Model
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3319
EP 3330
DI 10.1109/TMI.2024.3395349
DT Article
PD SEP 2024
PY 2024
AB Accurate segmentation of anatomical structures in Computed Tomography
(CT) images is crucial for clinical diagnosis, treatment planning, and
disease monitoring. The present deep learning segmentation methods are
hindered by factors such as data scale and model size. Inspired by how
doctors identify tissues, we propose a novel approach, the Prior
Category Network (PCNet), that boosts segmentation performance by
leveraging prior knowledge between different categories of anatomical
structures. Our PCNet comprises three key components: prior category
prompt (PCP), hierarchy category system (HCS), and hierarchy category
loss (HCL). PCP utilizes Contrastive Language-Image Pretraining (CLIP),
along with attention modules, to systematically define the relationships
between anatomical categories as identified by clinicians. HCS guides
the segmentation model in distinguishing between specific organs,
anatomical structures, and functional systems through hierarchical
relationships. HCL serves as a consistency constraint, fortifying the
directional guidance provided by HCS to enhance the segmentation model's
accuracy and robustness. We conducted extensive experiments to validate
the effectiveness of our approach, and the results indicate that PCNet
can generate a high-performance, universal model for CT segmentation.
The PCNet framework also demonstrates a significant transferability on
multiple downstream tasks. The ablation experiments show that the
methodology employed in constructing the HCS is of critical importance.
OI Chen, Yixin/0000-0002-2727-6387; xie, zhaoheng/0000-0001-8003-8778; Zhu,
Lei/0000-0003-0506-4268
Z8 0
ZB 0
TC 0
ZA 0
ZS 0
ZR 0
Z9 0
C1 Peking Univ, Inst Med Technol, Beijing 100191, Peoples R China
C1 Peking Univ, Natl Biomed Imaging Ctr, Beijing 100191, Peoples R China
C1 Peking Univ Third Hosp, Dept Radiol, Beijing 100191, Peoples R China
C1 NMPA Key Lab Evaluat Med Imaging Equipment & Tech, Beijing 100191,
Peoples R China
C1 Peking Univ, Coll Future Technol, Dept Biomed Engn, Beijing 100871,
Peoples R China
C1 Peking Univ Third Hosp, Beijing Key Lab Magnet Resonance Imaging Devices
&, Beijing 100191, Peoples R China
C3 NMPA Key Lab Evaluat Med Imaging Equipment & Tech
C3 Peking Univ Third Hosp
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600014
PM 38687654
ER
PT J
AU Lou, Wei
Wan, Xiang
Li, Guanbin
Lou, Xiaoying
Li, Chenghang
Gao, Feng
Li, Haofeng
TI Structure Embedded Nucleus Classification for Histopathology Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3149
EP 3160
DI 10.1109/TMI.2024.3388328
DT Article
PD SEP 2024
PY 2024
AB Nuclei classification provides valuable information for histopathology
image analysis. However, the large variations in the appearance of
different nuclei types cause difficulties in identifying nuclei. Most
neural network based methods are affected by the local receptive field
of convolutions, and pay less attention to the spatial distribution of
nuclei or the irregular contour shape of a nucleus. In this paper, we
first propose a novel polygon-structure feature learning mechanism that
transforms a nucleus contour into a sequence of points sampled in order,
and employ a recurrent neural network that aggregates the sequential
change in distance between key points to obtain learnable shape
features. Next, we convert a histopathology image into a graph structure
with nuclei as nodes, and build a graph neural network to embed the
spatial distribution of nuclei into their representations. To capture
the correlations between the categories of nuclei and their surrounding
tissue patterns, we further introduce edge features that are defined as
the background textures between adjacent nuclei. Lastly, we integrate
both polygon and graph structure learning mechanisms into a whole
framework that can extract intra and inter-nucleus structural
characteristics for nuclei classification. Experimental results show
that the proposed framework achieves significant improvements compared
to the previous methods.
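Note: one plausible reading of the polygon-structure mechanism, sketched
in PyTorch: ordered contour points become per-point radial distances and
edge vectors, and a GRU aggregates the sequence into a shape code. The
module name and feature choices are assumptions, not the paper's code.

    import torch
    import torch.nn as nn

    class PolygonShapeEncoder(nn.Module):
        """Encode an ordered nucleus contour with a recurrent network."""
        def __init__(self, hidden: int = 64):
            super().__init__()
            self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)

        def forward(self, contour: torch.Tensor) -> torch.Tensor:
            # contour: (B, N, 2) points sampled in order around the nucleus
            center = contour.mean(dim=1, keepdim=True)
            radial = (contour - center).norm(dim=-1, keepdim=True)  # (B, N, 1)
            step = contour.roll(-1, dims=1) - contour               # (B, N, 2) edges
            feats = torch.cat([radial, step], dim=-1)               # (B, N, 3)
            _, h = self.rnn(feats)
            return h.squeeze(0)                                     # (B, hidden)

    contour = torch.randn(4, 32, 2)          # 4 nuclei, 32 ordered contour points
    shape_code = PolygonShapeEncoder()(contour)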
RI Walter, Jochen/B-3677-2014; Li, Haofeng/AEJ-4106-2022; Lou, Wei/IXW-5136-2023; Li, Haofeng/; LOU, WEI/
OI Li, Haofeng/0000-0001-9120-9843; LOU, WEI/0000-0002-2071-4081
Z8 0
ZA 0
ZR 0
ZS 0
TC 1
ZB 1
Z9 1
C1 Chinese Univ Hong Kong Shenzhen, Shenzhen Res Inst Big Data, Guangdong
Prov Key Lab Big Data Comp, Shenzhen 518172, Peoples R China
C1 Pazhou Lab, Guangzhou 510330, Peoples R China
C1 Sun Yat sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
C1 Sun Yat sen Univ, Affiliated Hosp 6, Dept Pathol, Guangdong Prov Key Lab
Colorectal & Pelv Floor Dis, Guangzhou 510655, Peoples R China
C1 Sun Yat sen Univ, Affiliated Hosp 6, Dept Colorectal Surg, Dept Gen
Surg,Guangdong Prov Key Lab Colorectal &, Guangzhou 510655, Peoples R
China
C1 Shanghai Artificial Intelligence Lab, Shanghai 200240, Peoples R China
C3 Shanghai Artificial Intelligence Lab
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600016
PM 38607704
ER
PT J
AU Zheng, Yi
Conrad, Regan D.
Green, Emily J.
Burks, Eric J.
Betke, Margrit
Beane, Jennifer E.
Kolachalama, Vijaya B.
TI Graph Attention-Based Fusion of Pathology Images and Gene Expression for
Prediction of Cancer Survival
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3085
EP 3097
DI 10.1109/TMI.2024.3386108
DT Article
PD SEP 2024
PY 2024
AB Multimodal machine learning models are being developed to analyze
pathology images and other modalities, such as gene expression, to gain
clinical and biological insights. However, most frameworks for
multimodal data fusion do not fully account for the interactions between
different modalities. Here, we present an attention-based fusion
architecture that integrates a graph representation of pathology images
with gene expression data and concomitantly learns from the fused
information to predict patient-specific survival. In our approach,
pathology images are represented as undirected graphs, and their
embeddings are combined with embeddings of gene expression signatures
using an attention mechanism to stratify tumors by patient survival. We
show that our framework improves the survival prediction of human
non-small cell lung cancers, outperforming existing state-of-the-art
approaches that leverage multimodal data. Our framework can facilitate
spatial molecular profiling to identify tumor heterogeneity using
pathology images and gene expression data, complementing results
obtained from more expensive spatial transcriptomic and proteomic
technologies.
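Note: a minimal PyTorch sketch in the spirit of the fusion described
above: a gene-expression embedding cross-attends to pathology-graph node
embeddings, and the fused vector scores survival risk. All names and
dimensions are invented; this is not the published architecture.

    import torch
    import torch.nn as nn

    class GraphGeneFusion(nn.Module):
        """Cross-attention fusion of graph nodes and a gene embedding."""
        def __init__(self, dim: int = 128, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.risk = nn.Linear(2 * dim, 1)

        def forward(self, nodes: torch.Tensor, gene: torch.Tensor) -> torch.Tensor:
            # nodes: (B, N, dim) patch-node embeddings; gene: (B, dim)
            q = gene.unsqueeze(1)                 # the gene profile queries the graph
            ctx, _ = self.attn(q, nodes, nodes)   # (B, 1, dim) attended context
            fused = torch.cat([ctx.squeeze(1), gene], dim=-1)
            return self.risk(fused)               # higher score = higher risk

    nodes = torch.randn(2, 500, 128)   # 500 patch nodes per slide
    gene = torch.randn(2, 128)         # embedded expression signature
    risk = GraphGeneFusion()(nodes, gene)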
RI Kolachalama, Vijaya/AAH-3528-2020; Zheng, Yi/; Conrad, Regan/; Betke, Margrit/; Kolachalama, Vijaya/; Beane-Ebel, Jennifer/
OI Zheng, Yi/0009-0002-7792-6218; Conrad, Regan/0000-0002-5855-4415; Betke,
Margrit/0000-0002-4491-6868; Kolachalama, Vijaya/0000-0002-5312-8644;
Beane-Ebel, Jennifer/0000-0002-6699-2132
ZA 0
ZB 0
ZS 0
Z8 0
ZR 0
TC 0
Z9 0
C1 Boston Univ, Dept Comp Sci, Boston, MA 02215 USA
C1 Boston Univ, Chobanian & Avedisian Sch Med, Dept Med, Boston, MA 02218
USA
C1 Boston Univ, Chobanian Avedisian Sch Med, Pathol & Lab Med, Boston, MA
02218 USA
C1 Boston Univ, Fac Comp & Data Sci, Boston, MA 02215 USA
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600003
PM 38587959
ER
PT J
AU Meng, Xiangxi
Sun, Kaicong
Xu, Jun
He, Xuming
Shen, Dinggang
TI Multi-Modal Modality-Masked Diffusion Network for Brain MRI Synthesis
With Random Modality Missing
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 7
BP 2587
EP 2598
DI 10.1109/TMI.2024.3368664
DT Article
PD JUL 2024
PY 2024
AB Synthesis of unavailable imaging modalities from available ones can
generate modality-specific complementary information and enable
multi-modality based medical images diagnosis or treatment. Existing
generative methods for medical image synthesis are usually based on
cross-modal translation between acquired and missing modalities. These
methods are usually dedicated to a specific missing modality and perform
synthesis in one shot, so they can neither flexibly handle varying
numbers of missing modalities nor effectively construct the mapping
across modalities. To address the above issues, in this paper, we propose a
unified Multi-modal Modality-masked Diffusion Network (M2DN), tackling
multi-modal synthesis from the perspective of "progressive
whole-modality inpainting", instead of "cross-modal translation".
Specifically, our M2DN considers the missing modalities as random noise
and takes all the modalities as a unity in each reverse diffusion step.
The proposed joint synthesis scheme performs synthesis for the missing
modalities and self-reconstruction for the available ones, which not
only enables synthesis for arbitrary missing scenarios, but also
facilitates the construction of common latent space and enhances the
model representation ability. Besides, we introduce a modality-mask
scheme to encode availability status of each incoming modality
explicitly in a binary mask, which is adopted as condition for the
diffusion model to further enhance the synthesis performance of our M2DN
for arbitrary missing scenarios. We carry out experiments on two public
brain MRI datasets for synthesis and downstream segmentation tasks.
Experimental results demonstrate that our M2DN outperforms the
state-of-the-art models significantly and shows great generalizability
for arbitrary missing modalities.
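Note: a small sketch of the modality-mask scheme as described:
availability is encoded as a binary mask, and missing modalities enter
each reverse-diffusion step as pure noise. Shapes and the helper name
are assumptions, not the authors' code.

    import torch

    def mask_modalities(x: torch.Tensor, available: torch.Tensor):
        """x: (B, M, H, W) stacked modalities; available: (B, M) bool."""
        mask = available.float()[:, :, None, None]   # (B, M, 1, 1) condition mask
        noise = torch.randn_like(x)
        x_in = x * mask + noise * (1 - mask)         # missing channels start as noise
        return x_in, mask

    x = torch.randn(2, 4, 64, 64)                    # e.g., four MRI contrasts
    avail = torch.tensor([[True, True, False, True],
                          [True, False, False, True]])
    x_in, cond_mask = mask_modalities(x, avail)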
OI sun, kaicong/0000-0002-9999-2542
ZR 0
ZA 0
Z8 0
ZS 0
ZB 0
TC 0
Z9 0
C1 ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
C1 Nankai Univ, Sch Stat & Data Sci, Tianjin 300071, Peoples R China
C1 Chinese Univ Hong Kong Shenzhen, Guangdong Prov Key Lab Big Data Comp,
Shenzhen 518172, Peoples R China
C1 ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples
R China
C1 ShanghaiTech Univ, State Key Lab Adv Med Mat & Devices, Shanghai 201210,
Peoples R China
C1 Shanghai United Imaging Intelligence Co Ltd, Shanghai 201807, Peoples R
China
C1 Shanghai Clin Res & Trial Ctr, Shanghai 200231, Peoples R China
C3 Shanghai United Imaging Intelligence Co Ltd
C3 Shanghai Clin Res & Trial Ctr
SN 0278-0062
EI 1558-254X
DA 2024-07-22
UT WOS:001263692100022
PM 38393846
ER
PT J
AU Cai, De
Chen, Jie
Zhao, Junhan
Xue, Yuan
Yang, Sen
Yuan, Wei
Feng, Min
Weng, Haiyan
Liu, Shuguang
Peng, Yulong
Zhu, Junyou
Wang, Kanran
Jackson, Christopher
Tang, Hongping
Huang, Junzhou
Wang, Xiyue
TI HiCervix: An Extensive Hierarchical Dataset and Benchmark for Cervical
Cytology Classification.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3419697
DT Journal Article
PD 2024-Jun-26
PY 2024
AB Cervical cytology is a critical screening strategy for early detection
of pre-cancerous and cancerous cervical lesions. The challenge lies in
accurately classifying various cervical cytology cell types. Existing
automated cervical cytology methods are primarily trained on databases
covering a narrow range of coarse-grained cell types, which fail to
provide a comprehensive and detailed performance analysis that
accurately represents real-world cytopathology conditions. To overcome
these limitations, we introduce HiCervix, the most extensive,
multi-center cervical cytology dataset currently available to the
public. HiCervix includes 40,229 cervical cells from 4,496 whole slide
images, categorized into 29 annotated classes. These classes are
organized within a three-level hierarchical tree to capture fine-grained
subtype information. To exploit the semantic correlation inherent in
this hierarchical tree, we propose HierSwin, a hierarchical vision
transformer-based classification network. HierSwin serves as a benchmark
for detailed feature learning in both coarse-level and fine-level
cervical cancer classification tasks. In our comprehensive experiments,
HierSwin demonstrated remarkable performance, achieving 92.08% accuracy
for coarse-level classification and 82.93% accuracy averaged across all
three levels. When compared to board-certified cytopathologists,
HierSwin achieved high classification performance (0.8293 versus 0.7359
averaged accuracy), highlighting its potential for clinical
applications. This newly released HiCervix dataset, along with our
benchmark HierSwin method, is poised to make a substantial impact on the
advancement of deep learning algorithms for rapid cervical cancer
screening and greatly improve cancer prevention and patient outcomes in
real-world clinical settings.
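Note: a tiny sketch of one simple way to exploit a label tree at
inference, rolling fine-class probabilities up to their parent classes;
this is illustrative only, not the HierSwin mechanism, and the toy
parent map is invented.

    import torch

    def rollup_probs(fine_logits: torch.Tensor, parent: torch.Tensor) -> torch.Tensor:
        """Sum each fine class's probability into its parent (coarse) class."""
        probs = fine_logits.softmax(dim=1)                      # (B, C_fine)
        n_coarse = int(parent.max()) + 1
        coarse = torch.zeros(fine_logits.shape[0], n_coarse)
        coarse.index_add_(1, parent, probs)                     # sum over children
        return coarse

    parent = torch.tensor([0, 0, 1, 1, 1, 2])   # 6 fine classes -> 3 coarse classes
    fine = torch.randn(4, 6)
    coarse_probs = rollup_probs(fine, parent)   # rows still sum to 1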
RI Yang, Sen/; Wang, Xiyue/GXM-5641-2022; Zhao, Junhan/
OI Yang, Sen/0000-0002-0639-4122; Wang, Xiyue/0000-0002-3597-9090; Zhao,
Junhan/0000-0002-0316-8365
ZS 0
ZA 0
Z8 0
ZB 0
ZR 0
TC 0
Z9 0
EI 1558-254X
DA 2024-06-29
UT MEDLINE:38923481
PM 38923481
ER
PT J
AU Xu, Yanwu
Sun, Li
Peng, Wei
Jia, Shuyue
Morrison, Katelyn
Perer, Adam
Zandifar, Afrooz
Visweswaran, Shyam
Eslami, Motahhare
Batmanghelich, Kayhan
TI MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT
Images.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3415032
DT Journal Article
PD 2024-Jun-20
PY 2024
AB This paper introduces an innovative methodology for producing
high-quality 3D lung CT images guided by textual information. While
diffusion-based generative models are increasingly used in medical
imaging, current state-of-the-art approaches are limited to
low-resolution outputs and underutilize radiology reports' abundant
information. The radiology reports can enhance the generation process by
providing additional guidance and offering fine-grained control over the
synthesis of images. Nevertheless, expanding text-guided generation to
high-resolution 3D images poses significant memory and anatomical
detail-preserving challenges. Addressing the memory issue, we introduce
a hierarchical scheme that uses a modified UNet architecture. We start
by synthesizing low-resolution images conditioned on the text, serving
as a foundation for subsequent generators for complete volumetric data.
To ensure the anatomical plausibility of the generated samples, we
provide further guidance by generating vascular, airway, and lobular
segmentation masks in conjunction with the CT images. The model
demonstrates the capability to use textual input and segmentation tasks
to generate synthesized images. Algorithmic comparative assessments and
blind evaluations conducted by 10 board-certified radiologists indicate
that our approach exhibits superior performance compared to the most
advanced models based on GAN and diffusion techniques, especially in
accurately retaining crucial anatomical features such as fissure lines
and airways. This innovation introduces novel possibilities. This study
focuses on two main objectives: (1) the development of a method for
creating images based on textual prompts and anatomical components, and
(2) the capability to generate new images conditioning on anatomical
elements. The advancements in image generation can be applied to enhance
numerous downstream tasks.
OI Sun, Li/0000-0002-5809-1318; Peng, Wei/0000-0002-2892-5764
Z8 0
ZA 0
ZS 0
TC 0
ZB 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-06-22
UT MEDLINE:38900619
PM 38900619
ER
PT J
AU Xu, Ziang
Rittscher, Jens
Ali, Sharib
TI SSL-CPCD: Self-supervised learning with composite pretext-class
discrimination for improved generalisability in endoscopic image
analysis.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3411933
DT Journal Article
PD 2024-Jun-10
PY 2024
AB Data-driven methods have shown tremendous progress in medical image
analysis. In this context, deep learning-based supervised methods are
widely popular. However, they require a large amount of training data
and face issues in generalisability to unseen datasets that hinder
clinical translation. Endoscopic imaging data is characterised by large
inter- and intra-patient variability that makes these models more
challenging to learn representative features for downstream tasks. Thus,
despite the publicly available datasets and datasets that can be
generated within hospitals, most supervised models still underperform.
While self-supervised learning has addressed this problem to some extent
in natural scene data, there is a considerable performance gap in the
medical image domain. In this paper, we propose to explore patch-level
instance-group discrimination and penalisation of inter-class variation
using additive angular margin within the cosine similarity metrics. Our
novel approach enables models to learn to cluster similar
representations, thereby improving their ability to provide better
separation between different classes. Our results demonstrate
significant improvement on all metrics over the state-of-the-art (SOTA)
methods on the test set from the same and diverse datasets. We evaluated
our approach for classification, detection, and segmentation. SSL-CPCD
attains a notable Top-1 accuracy of 79.77% in ulcerative colitis
classification, an 88.62% mean average precision (mAP) for detection,
and an 82.32% dice similarity coefficient for segmentation tasks. These
represent improvements of over 4%, 2%, and 3%, respectively, compared to
the baseline architectures. We demonstrate that our method generalises
better than all SOTA methods to unseen datasets, reporting over 7%
improvement.
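Note: a minimal PyTorch sketch of an additive angular margin on
cosine-similarity logits (ArcFace-style), the kind of penalisation the
abstract names; the prototype count, scale s, and margin m below are
invented values.

    import torch
    import torch.nn.functional as F

    def angular_margin_logits(emb, prototypes, labels, m=0.3, s=16.0):
        """Cosine logits with margin m added to the target-class angle."""
        e = F.normalize(emb, dim=1)
        w = F.normalize(prototypes, dim=1)
        cos = e @ w.t()                                   # (B, C) cosines
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, w.shape[0]).bool()
        return s * torch.where(target, torch.cos(theta + m), cos)

    emb = torch.randn(8, 128, requires_grad=True)   # instance/group embeddings
    prototypes = torch.randn(32, 128)               # 32 pretext "class" prototypes
    labels = torch.randint(0, 32, (8,))
    loss = F.cross_entropy(angular_margin_logits(emb, prototypes, labels), labels)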
OI Rittscher, Jens/0000-0002-8528-8298; Ali, Sharib/0000-0003-1313-3542;
Xu, Ziang/0000-0002-3883-3716
TC 0
ZR 0
ZS 0
ZA 0
ZB 0
Z8 0
Z9 0
EI 1558-254X
DA 2024-06-12
UT MEDLINE:38857149
PM 38857149
ER
PT J
AU Chen, Yuanyuan
Guo, Xiaoqing
Xia, Yong
Yuan, Yixuan
TI Disentangle Then Calibrate With Gradient Guidance: A Unified Framework
for Common and Rare Disease Diagnosis
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1816
EP 1827
DI 10.1109/TMI.2023.3349284
DT Article
PD MAY 2024
PY 2024
AB The computer-aided diagnosis (CAD) for rare diseases using medical
imaging poses a significant challenge due to the requirement of large
volumes of labeled training data, which is particularly difficult to
collect for rare diseases. Although Few-shot learning (FSL) methods have
been developed for this task, these methods focus solely on rare disease
diagnosis, failing to preserve the performance in common disease
diagnosis. To address this issue, we propose the Disentangle then
Calibrate with Gradient Guidance (DCGG) framework under the setting of
generalized few-shot learning, i.e., using one model to diagnose both
common and rare diseases. The DCGG framework consists of a network
backbone, a gradient-guided network disentanglement (GND) module, and a
gradient-induced feature calibration (GFC) module. The GND module
disentangles the network into a disease-shared component and a
disease-specific component based on gradient guidance, and devises
independent optimization strategies for both components, respectively,
when learning from rare diseases. The GFC module transfers only the
disease-shared channels of common-disease features to rare diseases, and
incorporates the optimal transport theory to identify the best transport
scheme based on the semantic relationship among different diseases.
Based on the best transport scheme, the GFC module calibrates the
distribution of rare-disease features at the disease-shared channels,
deriving more informative rare-disease features for better diagnosis.
The proposed DCGG framework has been evaluated on three public medical
image classification datasets. Our results suggest that the DCGG
framework achieves state-of-the-art performance in diagnosing both
common and rare diseases.
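Note: a generic entropic optimal-transport sketch (Sinkhorn iterations
with uniform marginals) over a semantic cost between disease prototypes,
illustrating how a transport scheme could be selected; it is not the
paper's exact formulation, and all sizes are invented.

    import torch
    import torch.nn.functional as F

    def sinkhorn(cost: torch.Tensor, n_iters: int = 50, eps: float = 0.1):
        """Entropic OT plan between uniform marginals for an (n, m) cost."""
        K = torch.exp(-cost / eps)                    # Gibbs kernel
        u = torch.full((cost.shape[0],), 1.0 / cost.shape[0])
        v = torch.full((cost.shape[1],), 1.0 / cost.shape[1])
        a, b = torch.ones_like(u), torch.ones_like(v)
        for _ in range(n_iters):                      # alternating scaling
            a = u / (K @ b)
            b = v / (K.t() @ a)
        return a[:, None] * K * b[None, :]            # transport plan

    common = F.normalize(torch.randn(5, 64), dim=1)   # common-disease prototypes
    rare = F.normalize(torch.randn(2, 64), dim=1)     # rare-disease prototypes
    plan = sinkhorn(1 - common @ rare.t())            # (5, 2) semantic transport plan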
RI guo, xiaoqing/HTO-0629-2023; hu, chen/JHT-2836-2023; Chen, Yuanyuan/; Yuan, Yixuan/KSL-8440-2024
OI Chen, Yuanyuan/0000-0002-2009-226X; Yuan, Yixuan/0000-0002-0853-6948
TC 1
ZB 1
Z8 0
ZS 0
ZR 0
ZA 0
Z9 1
C1 Northwestern Polytech Univ, Sch Comp Sci & Engn, Natl Engn Lab
Integrated Aerosp Ground Ocean Big, Xian 710072, Peoples R China
C1 Univ Oxford, Dept Engn Sci, Oxford OX1 2JD, England
C1 Northwestern Polytech Univ, Ningbo Inst, Ningbo 315048, Peoples R China
C1 Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800004
PM 38165794
ER
PT J
AU Ma, Jiabo
Chen, Hao
TI Efficient Supervised Pretraining of Swin-Transformer for Virtual
Staining of Microscopy Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1388
EP 1399
DI 10.1109/TMI.2023.3337253
DT Article
PD APR 2024
PY 2024
AB Fluorescence staining is an important technique in life science for
labeling cellular constituents. However, it is time-consuming and makes
simultaneous labeling difficult, among other drawbacks. Thus, virtual
staining, which does not rely on chemical labeling, has been
introduced. Recently, deep learning models such as transformers have
been applied to virtual staining tasks. However, their performance
relies on large-scale pretraining, hindering their development in the
field. To reduce the reliance on large amounts of computation and data,
we construct a Swin-transformer model and propose an efficient
supervised pretraining method based on the masked autoencoder (MAE).
Specifically, we adopt downsampling and grid sampling to mask 75% of
pixels and reduce the number of tokens. The pretraining time of our
method is only 1/16 compared with the original MAE. We also design a
supervised proxy task to predict stained images with multiple styles
instead of masked pixels. Additionally, most virtual staining approaches
are based on private datasets and evaluated by different metrics, making
a fair comparison difficult. Therefore, we develop a standard benchmark
based on three public datasets and build a baseline for the convenience
of future researchers. We conduct extensive experiments on three
benchmark datasets, and the experimental results show the proposed
method achieves the best performance both quantitatively and
qualitatively. In addition, ablation studies are conducted, and
experimental results illustrate the effectiveness of the proposed
pretraining method. The benchmark and code are available at
https://github.com/birkhoffkiki/CAS-Transformer.
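Note: a sketch of how downsampling plus grid sampling can mask 75% of
pixels while the visible 25% remain a regular, 4x-smaller image (one
reading of the masking described above); the helper below is an
assumption, not the released code.

    import torch

    def grid_subsample(img: torch.Tensor, offset=(0, 0)) -> torch.Tensor:
        """Keep one pixel from every 2x2 block, so 75% of pixels are masked."""
        dy, dx = offset
        return img[..., dy::2, dx::2]

    img = torch.randn(1, 3, 224, 224)
    visible = grid_subsample(img)            # (1, 3, 112, 112): the visible grid
    # the three held-out grids can serve as reconstruction targets
    targets = [grid_subsample(img, o) for o in [(0, 1), (1, 0), (1, 1)]]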
RI Chen, Hao/V-4299-2019; Ma, Jiabo/
OI Chen, Hao/0000-0002-8400-3780; Ma, Jiabo/0000-0001-8532-4466
Z8 0
TC 1
ZA 0
ZB 0
ZR 0
ZS 0
Z9 1
C1 Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R
China
C1 Hong Kong Univ Sci & Technol, Dept Chem & Biol Engn, Hong Kong, Peoples
R China
C1 HKUST Shenzhen Hong Kong Collaborat Innovat Res In, Shenzhen, Peoples R
China
C3 HKUST Shenzhen Hong Kong Collaborat Innovat Res In
SN 0278-0062
EI 1558-254X
DA 2024-07-08
UT WOS:001196733400013
PM 38010933
ER
PT J
AU Meng, Qingjie
Bai, Wenjia
O'Regan, Declan P.
Rueckert, Daniel
TI DeepMesh: Mesh-Based Cardiac Motion Tracking Using Deep Learning
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 4
BP 1489
EP 1500
DI 10.1109/TMI.2023.3340118
DT Article
PD APR 2024
PY 2024
AB 3D motion estimation from cine cardiac magnetic resonance (CMR) images
is important for the assessment of cardiac function and the diagnosis of
cardiovascular diseases. Current state-of-the art methods focus on
estimating dense pixel-/voxel-wise motion fields in image space, which
ignores the fact that motion estimation is only relevant and useful
within the anatomical objects of interest, e.g., the heart. In this
work, we model the heart as a 3D mesh consisting of epi- and endocardial
surfaces. We propose a novel learning framework, DeepMesh, which
propagates a template heart mesh to a subject space and estimates the 3D
motion of the heart mesh from CMR images for individual subjects. In
DeepMesh, the heart mesh of the end-diastolic frame of an individual
subject is first reconstructed from the template mesh. Mesh-based 3D
motion fields with respect to the end-diastolic frame are then estimated
from 2D short- and long-axis CMR images. By developing a differentiable
mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape
information from multiple anatomical views for 3D mesh reconstruction
and mesh motion estimation. The proposed method estimates vertex-wise
displacement and thus maintains vertex correspondences between time
frames, which is important for the quantitative assessment of cardiac
function across different subjects and populations. We evaluate DeepMesh
on CMR images acquired from the UK Biobank. We focus on 3D motion
estimation of the left ventricle in this work. Experimental results show
that the proposed method quantitatively and qualitatively outperforms
other image-based and mesh-based cardiac motion tracking methods.
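Note: a small PyTorch sketch of the mesh-based idea: a dense 3D
displacement field is sampled at vertex locations and added to the
vertices, so vertex correspondence across time frames is preserved by
construction. Shapes and names are assumptions, not the DeepMesh code.

    import torch
    import torch.nn.functional as F

    def move_vertices(verts: torch.Tensor, motion_field: torch.Tensor) -> torch.Tensor:
        """verts: (1, V, 3) in [-1, 1]^3; motion_field: (1, 3, D, H, W)."""
        grid = verts[:, :, None, None, :]                     # (1, V, 1, 1, 3)
        disp = F.grid_sample(motion_field, grid, align_corners=True)
        disp = disp.squeeze(-1).squeeze(-1).permute(0, 2, 1)  # (1, V, 3)
        return verts + disp                                   # same vertex ordering

    verts = torch.rand(1, 1000, 3) * 2 - 1
    field = torch.zeros(1, 3, 16, 16, 16)    # zero motion keeps the mesh fixed
    moved = move_vertices(verts, field)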
RI Bai, Wenjia/ABH-6023-2020; Rueckert, Daniel/C-4393-2008; Bai, Wenjia/B-3377-2017; Meng, Qingjie/; O'Regan, Declan/; Rueckert, Daniel/
OI Bai, Wenjia/0000-0003-2943-7698; Meng, Qingjie/0000-0001-8728-4007;
O'Regan, Declan/0000-0002-0691-0270; Rueckert,
Daniel/0000-0002-5683-5889
ZB 0
Z8 0
ZA 0
ZR 0
ZS 0
TC 1
Z9 1
C1 Imperial Coll London, Dept Comp, Biomed Image Anal Grp, London SW7 2AZ,
England
C1 Univ Birmingham, Sch Comp Sci, Birmingham B15 2SQ, England
C1 Imperial Coll London, Dept Brain Sci, London SW7 2AZ, England
C1 Imperial Coll London, Data Sci Inst, London SW7 2AZ, England
C1 Imperial Coll London, MRC London Inst Med Sci, London W12 0HS, England
C1 Tech Univ Munich, Klinikum rechts Isar, D-80333 Munich, Germany
SN 0278-0062
EI 1558-254X
DA 2024-07-02
UT WOS:001196733400004
PM 38064325
ER
PT J
AU Challoob, Mohsin
Gao, Yongsheng
Busch, Andrew
TI Distinctive Phase Interdependency Model for Retinal Vasculature
Delineation in OCT-Angiography Images
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1018
EP 1032
DI 10.1109/TMI.2023.3326742
DT Article
PD MAR 2024
PY 2024
AB Automatic detection of retinal vasculature in optical coherence
tomography angiography (OCTA) images faces several challenges such as
the closely located capillaries, vessel discontinuity and high noise
level. This paper introduces a new distinctive phase interdependency
model to address these problems for delineating centerline patterns of
the vascular network. We capture the inherent property of vascular
centerlines by obtaining the inter-scale dependency information that
exists between neighboring symmetrical wavelets in complex Poisson
domain. In particular, the proposed phase interdependency model
identifies vascular centerlines as the distinctive features that have
high magnitudes over adjacent symmetrical coefficients whereas the
coefficients caused by background noises are decayed rapidly along
adjacent wavelet scales. The potential relationships between the
neighboring Poisson coefficients are established based on the coherency
of distinctive symmetrical wavelets. The proposed phase model is
assessed on the OCTA-500 database (300 OCTA images + 200 OCT images),
ROSE-1-SVC dataset (9 OCTA images), ROSE-1 (SVC+DVC) dataset (9 OCTA
images), and ROSE-2 dataset (22 OCTA images). The experiments on the
clinically relevant OCTA images validate the effectiveness of the
proposed method in achieving high-quality results. Our method produces
average F-score of 0.822, 0.782, and 0.779 on ROSE-1-SVC, ROSE-1
(SVC+DVC), and ROSE-2 datasets, respectively, and the F-score of 0.910 and
0.862 on OCTA_6mm and OCT_3mm datasets (OCTA-500 database),
respectively, demonstrating its superior performance over the
state-of-the-art benchmark methods.
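A minimal NumPy sketch of the inter-scale dependency idea, with difference-of-Gaussian responses standing in for the paper's complex Poisson-domain wavelets (an assumed simplification): features that respond coherently at neighboring scales survive the across-scale product, while noise, whose response decays quickly over scales, is suppressed.

import numpy as np
from scipy.ndimage import gaussian_filter

def interscale_response(image, scales=(1.0, 2.0, 4.0)):
    responses = []
    for s in scales:
        # Band-pass-like magnitude response at scale s.
        responses.append(np.abs(gaussian_filter(image, s) -
                                gaussian_filter(image, 1.6 * s)))
    # Multiply neighboring-scale magnitudes: only structures that
    # persist across adjacent scales keep a large product.
    out = np.ones_like(image)
    for a, b in zip(responses[:-1], responses[1:]):
        out *= a * b
    return out

octa_slice = np.random.rand(128, 128)  # stand-in OCTA image
centerline_evidence = interscale_response(octa_slice)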
RI Challoob, Mohsin/HII-5374-2022; Gao, Yongsheng/P-4704-2016; Challoob, Mohsin/; Busch, Andrew/
OI Challoob, Mohsin/0000-0001-5732-0273; Busch, Andrew/0000-0002-9461-9722
Z8 0
ZR 0
ZB 0
TC 0
ZA 0
AB 在光学相干断层扫描血管造影 (OCTA) 图像中自动检测视网膜脉管系统面临着多项挑战,例如毛细血管位置紧密、血管不连续性和高噪声水平。本文介绍了一种新的独特的相位相互依赖性模型来解决这些问题,以描绘血管网络的中心线模式。我们通过获取复杂泊松域中相邻对称小波之间存在的尺度间依赖性信息来捕获血管中心线的固有属性。特别是,所提出的相位相互依赖性模型将血管中心线识别为在相邻对称系数上具有高幅度的独特特征,而由背景噪声引起的系数沿相邻小波尺度快速衰减。相邻泊松系数之间的潜在关系是基于独特对称小波的相干性建立的。所提出的相位模型在 OCTA-500 数据库(300 个 OCTA 图像 + 200 个 OCT 图像)、ROSE-1-SVC 数据集(9 个 OCTA 图像)、ROSE-1 (SVC+ DVC) 数据集(9 个 OCTA 图像)和 ROSE 上进行评估-2 数据集(22 个 OCTA 图像)。对临床相关 OCTA 图像的实验验证了所提出的方法在获得高质量结果方面的有效性。我们的方法在 ROSE-1-SVC、ROSE-1 (SVC+ DVC) 和 ROSE-2 数据集上产生的平均 F 分数分别为 0.822、0.782 和 0.779,在 OCTA_6mm 和 OCT_3mm 上产生的 F 分数分别为 0.910 和 0.862数据集(OCTA-500 数据库),分别证明了其优于最先进的基准方法的性能。RI Challoob,莫辛/HII-5374-2022;高永生/P-4704-2016;莫辛·查洛布/;安德鲁·布施/莫辛·OI Challoob/0000-0001-5732-0273;安德鲁·布施/0000-0002-9461-9722 Z8 0 ZR 0 ZB 0 TC 0 ZA 0
ZS 0
Z9 0
C1 Griffith Univ, Sch Engn & Built Environm, Brisbane, Qld 4111, Australia
C1 Univ Technol Baghdad, Control & Syst Engn Dept, Baghdad 10066, Iraq
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600027
PM 37871100
ER
ZS 0 Z9 0 C1 格里菲斯大学,工程与建筑环境学院,布里斯班,Qld 4111,澳大利亚 C1 巴格达科技大学,控制与系统工程部,巴格达 10066,伊拉克 SN 0278-0062 EI 1558-254X DA 2024-04-12 UT WOS:001180906600027 下午 37871100 ER
PT J
AU Kurtz, Samuel
Wattrisse, Bertrand
Van Houten, Elijah E. W.
PT J AU Kurtz、Samuel Wattrisse、Bertrand Van Houten、Elijah EW
TI Minimizing Measurement-Induced Errors in Viscoelastic MR Elastography
TI 最大限度地减少粘弹性 MR 弹性成像中测量引起的误差
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 3
BP 1138
EP 1148
DI 10.1109/TMI.2023.3329293
DT Article
PD MAR 2024
PY 2024
SO IEEE 医学成像交易 VL 43 IS 3 BP 1138 EP 1148 DI 10.1109/TMI.2023.3329293 DT 文章 PD 2024 年 3 月 PY 2024
AB The inverse problem that underlies Magnetic Resonance Elastography (MRE)
is sensitive to the measurement data and the quality of the results of
this tissue elasticity imaging process can be influenced both directly
and indirectly by measurement noise. In this work, we apply a coupled
adjoint field formulation of the viscoelastic constitutive parameter
identification problem, where the indirect influence of noise through
applied boundary conditions is avoided. A well-posed formulation of the
coupled field problem is obtained through conditions applied to the
adjoint field, relieving the computed displacement field from kinematic
errors on the boundary. The theoretical framework for this formulation
via a nearly incompressible, parallel subdomain-decomposition approach
is presented, along with verification and a detailed exploration of the
performance of the methods via a numerical simulation study. In
addition, the advantages of this novel approach are demonstrated in-vivo
in the human brain, showing the ability of the method to obtain viable
tissue property maps in difficult configurations, enhancing the accuracy
of the method.
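In generic form (an assumed simplification, not the paper's exact viscoelastic system), adjoint-based parameter identification minimizes a data misfit subject to the forward model, with the adjoint field $w$ entering as a Lagrange multiplier; the contribution described above lies in the conditions placed on $w$ so that noisy measured boundary data does not enter the reconstruction directly:

\min_{\theta}\; \tfrac{1}{2}\,\| u(\theta) - u_m \|^2
\quad\text{s.t.}\quad \mathcal{A}(\theta)\, u = f,
\qquad
\mathcal{L}(u, w, \theta) \;=\; \tfrac{1}{2}\,\| u - u_m \|^2
  \;+\; \big\langle w,\; \mathcal{A}(\theta)\, u - f \big\rangle .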
OI Kurtz, Samuel/0000-0003-3453-664X
ZA 0
ZB 1
Z8 0
TC 1
ZR 0
ZS 0
Z9 1
C1 Univ Montpellier, Lab Mecan & Genie Civil, CNRS, F-34090 Montpellier,
France
C1 Univ Sherbrooke, Dept Mech Engn, Sherbrooke, PQ J1K 2R1, Canada
SN 0278-0062
EI 1558-254X
DA 2024-04-12
UT WOS:001180906600004
PM 37910409
ER
PT J
AU Dan, Tingting
Kim, Minjeong
Kim, Won Hwa
Wu, Guorong
TI Developing Explainable Deep Model for Discovering Novel Control
Mechanism of Neuro-Dynamics
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 427
EP 438
DI 10.1109/TMI.2023.3309821
DT Article
PD JAN 2024
PY 2024
AB Human brain is a complex system composed of many components that
interact with each other. A well-designed computational model, usually
in the format of partial differential equations (PDEs), is vital to
understand the working mechanisms that can explain dynamic and
self-organized behaviors. However, the model formulation and parameters
are often tuned empirically based on the predefined domain-specific
knowledge, which lags behind the emerging paradigm of discovering novel
mechanisms from the unprecedented amount of spatiotemporal data. To
address this limitation, we sought to link the power of deep neural
networks and physics principles of complex systems, which allows us to
design explainable deep models for uncovering the mechanistic role of
how human brain (the most sophisticated complex system) maintains
controllable functions while interacting with external stimulations. In
the spirit of optimal control, we present a unified framework to design
an explainable deep model that describes the dynamic behaviors of
underlying neurobiological processes, allowing us to understand the
latent control mechanism at a system level. We have uncovered the
pathophysiological mechanism of Alzheimer's disease to the extent of
controllability of disease progression, where the dissected system-level
understanding enables higher prediction accuracy for disease progression
and better explainability for disease etiology than conventional (black
box) deep models.
OI Kim, Minjeong/0009-0004-7712-1684; Wu, Guorong/0000-0002-0550-6145; Dan,
Tingting/0000-0001-6936-2649; Kim, Won Hwa/0000-0001-5393-0883
ZS 0
ZA 0
TC 0
ZR 0
ZB 0
Z8 0
Z9 0
C1 Univ N Carolina, Dept Psychiat, Chapel Hill, NC 27599 USA
C1 Univ N Carolina, Dept Comp Sci, Greensboro, NC 27402 USA
C1 POSTECH, Grad Sch AI, Dept Comp Sci & Engn, Pohang 37673, South Korea
C1 Univ Texas Arlington, Dept Comp Sci & Engn, Arlington, TX 76010 USA
C1 Univ N Carolina, Dept Stat & Operat Res STOR, Chapel Hill, NC 27599 USA
C1 Univ N Carolina, Carolina Inst Dev Disabil, Chapel Hill, NC 27599 USA
C1 Univ N Carolina, UNC Neurosci Ctr, Chapel Hill, NC 27599 USA
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600020
PM 37643099
ER
PT J
AU Li, Jingxiong
Zheng, Sunyi
Shui, Zhongyi
Zhang, Shichuan
Yang, Linyi
Sun, Yuxuan
Zhang, Yunlong
Li, Honglin
Ye, Yuanxin
van Ooijen, Peter M. A.
Li, Kang
Yang, Lin
TI Masked Conditional Variational Autoencoders for Chromosome Straightening
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 1
BP 216
EP 228
DI 10.1109/TMI.2023.3293854
DT Article
PD JAN 2024
PY 2024
AB Karyotyping is of importance for detecting chromosomal aberrations in
human disease. However, chromosomes easily appear curved in microscopic
images, which prevents cytogeneticists from analyzing chromosome types.
To address this issue, we propose a framework for chromosome
straightening, which comprises a preliminary processing algorithm and a
generative model called masked conditional variational autoencoders
(MC-VAE). The processing method utilizes patch rearrangement to address
the difficulty in erasing low degrees of curvature, providing reasonable
preliminary results for the MC-VAE. The MC-VAE further straightens the
results by leveraging chromosome patches conditioned on their curvatures
to learn the mapping between banding patterns and conditions. During
model training, we apply a masking strategy with a high masking ratio to
train the MC-VAE with eliminated redundancy. This yields a non-trivial
reconstruction task, allowing the model to effectively preserve
chromosome banding patterns and structure details in the reconstructed
results. Extensive experiments on three public datasets with two stain
styles show that our framework surpasses the performance of
state-of-the-art methods in retaining banding patterns and structure
details. Compared to using real-world bent chromosomes, the use of
high-quality straightened chromosomes generated by our proposed method
can improve the performance of various deep learning models for
chromosome classification by a large margin. Such a straightening
approach has the potential to be combined with other karyotyping systems
to assist cytogeneticists in chromosome analysis.
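A minimal PyTorch sketch of the high-masking-ratio strategy described above, assuming 16x16 patches and a 25% keep ratio for illustration: most patch tokens are hidden, so reconstruction cannot succeed by local pixel copying and must rely on banding-pattern structure.

import torch

def mask_patches(images, patch=16, keep_ratio=0.25):
    # images: (B, C, H, W) -> kept tokens plus the shuffle indices.
    B, C, H, W = images.shape
    tokens = images.unfold(2, patch, patch).unfold(3, patch, patch)
    tokens = tokens.reshape(B, C, -1, patch * patch).transpose(1, 2)
    tokens = tokens.reshape(B, tokens.shape[1], -1)   # (B, N, C*p*p)
    N = tokens.shape[1]
    keep = max(1, int(N * keep_ratio))
    ids = torch.rand(B, N).argsort(dim=1)             # random permutation
    kept = torch.gather(
        tokens, 1,
        ids[:, :keep, None].expand(-1, -1, tokens.shape[2]))
    return kept, ids

x = torch.randn(2, 1, 64, 64)
kept, ids = mask_patches(x)   # kept: (2, 4, 256); 12 of 16 patches hidden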
RI Zheng, Sunyi/IVH-9231-2023; Yunlong, Zhang/AAV-9041-2021; van Ooijen, Peter/B-9150-2008; Yang, Linfeng/AAZ-8792-2020; Yang, Linyi/; van Ooijen, Peter/G-1146-2012; Zheng, Sunyi/; Li, Jingxiong/
OI Yang, Linyi/0000-0003-0667-7349; van Ooijen, Peter/0000-0002-8995-1210;
Zheng, Sunyi/0000-0002-9005-4875; Li, Jingxiong/0000-0002-6519-5043
ZR 0
Z8 0
ZB 0
ZA 0
ZS 0
TC 0
Z9 0
C1 Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
C1 Westlake Univ, Sch Engn, Hangzhou 310030, Peoples R China
C1 Westlake Inst Adv Study, Inst Adv Technol, Hangzhou 310030, Peoples R
China
C1 Sichuan Univ, West China Hosp, Dept Lab Med, Chengdu 610000, Peoples R
China
C1 Univ Groningen, Univ Med Ctr Groningen, NL-9713 GZ Groningen,
Netherlands
SN 0278-0062
EI 1558-254X
DA 2024-03-13
UT WOS:001158081600027
PM 37428657
ER
PT J
AU Kou, Zhengchang
Lowerison, Matthew R.
You, Qi
Wang, Yike
Song, Pengfei
Oelze, Michael L.
TI High-Resolution Power Doppler Using Null Subtraction Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 9
BP 3060
EP 3071
DI 10.1109/TMI.2024.3383768
DT Article
PD SEP 2024
PY 2024
AB To improve the spatial resolution of power Doppler (PD) imaging, we
explored null subtraction imaging (NSI) as an alternative beamforming
technique to delay-and-sum (DAS). NSI is a nonlinear beamforming
approach that uses three different apodizations on receive and
incoherently sums the beamformed envelopes. NSI uses a null in the beam
pattern to improve the lateral resolution, which we apply here for
improving PD spatial resolution both with and without contrast
microbubbles. In this study, we used NSI with three types of singular
value decomposition (SVD)-based clutter filters and noise equalization
to generate high-resolution PD images. An element sensitivity correction
scheme was also proposed as a crucial component of NSI-based PD imaging.
First, a microbubble trace experiment was performed to evaluate the
resolution improvement of NSI-based PD over traditional DAS-based PD.
Then, both contrast-enhanced and contrast free ultrasound PD images were
generated from the scan of a rat brain. The cross-sectional profile of
the microbubble traces and microvessels were plotted. FWHM was also
estimated to provide a quantitative metric. Furthermore, iso-frequency
curves were calculated to provide a resolution evaluation metric over
the global field of view. Up to six-fold resolution improvement was
demonstrated by the FWHM estimate and four-fold resolution improvement
was demonstrated by the iso-frequency curve from the NSI-based PD
microvessel images compared to microvessel images generated by
traditional DAS-based beamforming. A resolvability of 39 µm was
measured from the NSI-based PD microvessel image. The computational cost
of NSI-based PD was only increased by 40 percent over the DAS-based PD.
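A minimal NumPy sketch of the null subtraction idea on pre-delayed channel data. The receive delays are assumed already applied, and the two apodizations below (uniform, and an anti-symmetric one that places a null on axis) are illustrative rather than the exact weights from the paper; envelopes are combined incoherently, which is what narrows the lateral response.

import numpy as np
from scipy.signal import hilbert

def beamform(channels, weights):
    # channels: (n_elements, n_samples) of pre-delayed RF data.
    rf = (weights[:, None] * channels).sum(axis=0)
    return np.abs(hilbert(rf))  # envelope of the beamformed line

n_el, n_samp = 64, 1024
channels = np.random.randn(n_el, n_samp)      # stand-in RF data
w_uniform = np.ones(n_el)                     # mainlobe plus sidelobes
w_null = np.concatenate([np.ones(n_el // 2),  # null on axis
                         -np.ones(n_el // 2)])

env_sum = beamform(channels, w_uniform)
env_null = beamform(channels, w_null)
nsi_line = np.clip(env_sum - env_null, 0.0, None)  # incoherent subtraction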
RI Song, Pengfei/G-9326-2015; Wang, Yike/JEP-6766-2023; Wang, Yike/; Kou, Zhengchang/GWC-2787-2022; Song, Pengfei/; You, Qi/HKE-8198-2023
OI Wang, Yike/0009-0003-0113-6014; Kou, Zhengchang/0000-0001-5322-0267;
Song, Pengfei/0000-0002-9103-6345; You, Qi/0000-0002-2706-1394
ZB 0
Z8 0
ZS 0
TC 2
ZR 0
ZA 0
Z9 2
C1 Univ Illinois, Beckman Inst Adv Sci & Technol, Urbana, IL 61820 USA
C1 Carle Illinois Coll Med, Dept Elect & Comp Engn, Urbana, IL 61801 USA
C1 Carle Illinois Coll Med, Dept Bioengn, Urbana, IL 61801 USA
C3 Carle Illinois Coll Med
C3 Carle Illinois Coll Med
SN 0278-0062
EI 1558-254X
DA 2024-09-18
UT WOS:001307429600026
PM 38557625
ER
PT J
AU Guo, Shouchang
Fessler, Jeffrey A.
Noll, Douglas C.
TI Manifold Regularizer for High-Resolution fMRI Joint Reconstruction and
Dynamic Quantification
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 2937
EP 2948
DI 10.1109/TMI.2024.3381197
DT Article
PD AUG 2024
PY 2024
AB Oscillating Steady-State Imaging (OSSI) is a recently developed fMRI
acquisition method that can provide 2 to 3 times higher SNR than
standard fMRI approaches. However, because the OSSI signal exhibits a
nonlinear oscillation pattern, one must acquire and combine n_c (e.g.,
10) OSSI images to get an image that is free of oscillation for fMRI,
and fully sampled acquisitions would compromise temporal resolution. To
improve temporal resolution and accurately model the nonlinearity of
OSSI signals, instead of using subspace models that are not well suited
for the data, we build the MR physics for OSSI signal generation as a
regularizer for the undersampled reconstruction. Our proposed
physics-based manifold model turns the disadvantages of OSSI acquisition
into advantages and enables joint reconstruction and quantification.
OSSI manifold model (OSSIMM) outperforms subspace models and
reconstructs high-resolution fMRI images with a factor of 12
acceleration and without spatial or temporal smoothing. Furthermore,
OSSIMM can dynamically quantify important physics parameters, including
R2* maps, with a temporal resolution of 150 ms.
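A generic form of a physics-based manifold regularizer for undersampled reconstruction (an assumption about the structure, not the paper's exact objective): data consistency on the acquired k-space samples plus a penalty pulling each voxel's n_c-point OSSI signal $x_v$ onto the manifold $M(\theta_v)$ generated by the MR-physics model with voxel-wise parameters $\theta_v$,

\hat{x},\,\hat{\theta} \;=\; \arg\min_{x,\,\theta}\;
  \|\, \mathcal{F}_{\Omega}\, x - y \,\|_2^2
  \;+\; \lambda \sum_{v} \|\, x_v - M(\theta_v) \,\|_2^2 ,

so that minimizing over $\theta$ during reconstruction is what yields the dynamic quantitative maps.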
OI Fessler, Jeffrey/0000-0001-9998-3315
ZB 0
TC 0
Z8 0
ZA 0
ZR 0
ZS 0
Z9 0
C1 Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
C1 Univ Michigan, Funct MRI Lab, Ann Arbor, MI 48109 USA
C1 Univ Michigan, Dept Biomed Engn, Ann Arbor, MI 48109 USA
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200010
PM 38526890
ER
PT J
AU Huang, Xingru
Huang, Jian
Zhao, Kai
Zhang, Tianyun
Li, Zhi
Yue, Changpeng
Chen, Wenhao
Wang, Ruihao
Chen, Xuanbin
Zhang, Qianni
Fu, Ying
Wang, Yangyundou
Guo, Yihao
TI SASAN: Spectrum-Axial Spatial Approach Networks for Medical Image
Segmentation
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 8
BP 3044
EP 3056
DI 10.1109/TMI.2024.3383466
DT Article
PD AUG 2024
PY 2024
AB Ophthalmic diseases such as central serous chorioretinopathy (CSC)
significantly impair the vision of millions of people globally. Precise
segmentation of choroid and macular edema is critical for diagnosing and
treating these conditions. However, existing 3D medical image
segmentation methods often fall short due to the heterogeneous nature
and blurry features of these conditions, compounded by medical image
clarity issues and noise interference arising from equipment and
environmental limitations. To address these challenges, we propose the
Spectrum Analysis Synergy Axial-Spatial Network (SASAN), an approach
that innovatively integrates spectrum features using the Fast Fourier
Transform (FFT). SASAN incorporates two key modules: the Frequency
Integrated Neural Enhancer (FINE), which mitigates noise interference,
and the Axial-Spatial Elementum Multiplier (ASEM), which enhances
feature extraction. Additionally, we introduce the Self-Adaptive
Multi-Aspect Loss ($\mathcal{L}_{\textit{SM}}$), which balances
image regions, distribution, and boundaries, adaptively updating weights
during training. We compiled and meticulously annotated the Choroid and
Macular Edema OCT Mega Dataset (CMED-18k), currently the world's largest
dataset of its kind. Comparative analysis against 13 baselines shows our
method surpasses these benchmarks, achieving the highest Dice scores and
lowest HD95 in the CMED and OIMHS datasets.
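A minimal PyTorch sketch of an FFT-based feature enhancer in the spirit of FINE; the layout (one learnable gain per frequency bin of an rfft2 spectrum) is an illustrative assumption rather than the paper's architecture.

import torch
import torch.nn as nn

class FrequencyEnhancer(nn.Module):
    def __init__(self, channels, h, w):
        super().__init__()
        # One learnable gain per (channel, frequency bin); training can
        # drive gains of noisy bands toward zero.
        self.gain = nn.Parameter(torch.ones(channels, h, w // 2 + 1))

    def forward(self, x):                       # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = spec * self.gain                 # broadcast over batch
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

feats = torch.randn(2, 8, 32, 32)
enhanced = FrequencyEnhancer(8, 32, 32)(feats)  # same shape as input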
RI Huang, Xingru/CAI-6773-2022; Chen, Wen-Hao/K-8829-2013; Wang, Ruihao/AAN-2850-2021; Zhang, TianYun/KEH-4663-2024; Huang, Xingru/; Zhang, Tianyun/; , wang rui hao/; Zhang, Qianni/; Huang, Jian/; LI, Zhi/
OI Huang, Xingru/0000-0003-3971-8434; Zhang, Tianyun/0009-0004-3215-9603; ,
wang rui hao/0009-0009-8894-2249; Zhang, Qianni/0000-0001-7685-2187;
Huang, Jian/0009-0001-1679-3047; LI, Zhi/0000-0002-8789-0151
ZB 0
TC 2
ZR 0
Z8 0
ZS 0
ZA 0
Z9 2
C1 Hangzhou Dianzi Univ, Hangzhou 310018, Peoples R China
C1 Chinese Peoples Liberat Army Gen Hosp, Med Ctr 1, Dept Neurosurg,
Beijing 100853, Peoples R China
C1 Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E3 4BL,
England
C1 Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R
China
C1 Beijing Inst Technol, MIIT Key Lab Complex Field Intelligent Sensing,
Beijing 100081, Peoples R China
C1 Xidian Univ, Hangzhou Inst Technol, Hangzhou 311231, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-08-18
UT WOS:001285367200001
PM 38557622
ER
PT J
AU Zhou, Houliang
He, Lifang
Chen, Brian Y
Shen, Li
Zhang, Yu
TI Multi-Modal Diagnosis of Alzheimer's Disease using Interpretable Graph
Convolutional Networks.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3432531
DT Journal Article
PD 2024-Jul-23
PY 2024
AB The interconnection between brain regions in neurological disease
encodes vital information for the advancement of biomarkers and
diagnostics. Although graph convolutional networks are widely applied
for discovering brain connection patterns that point to disease
conditions, the potential of connection patterns that arise from
multiple imaging modalities has yet to be fully realized. In this paper,
we propose a multi-modal sparse interpretable GCN framework (SGCN) for
the detection of Alzheimer's disease (AD) and its prodromal stage, known
as mild cognitive impairment (MCI). In our experimentation, SGCN learned
the sparse regional importance probability to find signature regions of
interest (ROIs), and the connective importance probability to reveal
disease-specific brain network connections. We evaluated SGCN on the
Alzheimer's Disease Neuroimaging Initiative database with multi-modal
brain images and demonstrated that the ROI features learned by SGCN were
effective for enhancing AD status identification. The identified
abnormalities were significantly correlated with AD-related clinical
symptoms. We further interpreted the identified brain dysfunctions at
the level of large-scale neural systems and sex-related connectivity
abnormalities in AD/MCI. The salient ROIs and the prominent brain
connectivity abnormalities interpreted by SGCN are considerably
important for developing novel biomarkers. These findings contribute to
a better understanding of the network-based disorder via multi-modal
diagnosis and offer the potential for precision diagnostics. The source
code is available at https://github.com/Houliang-Zhou/SGCN.
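A minimal PyTorch sketch of the two learned importance probabilities, written as sigmoid gates with an L1 sparsity penalty over plain dense matrices; the single-layer GCN and all shapes are illustrative assumptions, not the SGCN architecture itself.

import torch
import torch.nn as nn

class SparseGCN(nn.Module):
    def __init__(self, n_rois, in_dim, out_dim):
        super().__init__()
        self.node_logit = nn.Parameter(torch.zeros(n_rois))          # ROI gate
        self.edge_logit = nn.Parameter(torch.zeros(n_rois, n_rois))  # edge gate
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):        # adj: (R, R), x: (R, F)
        node_p = torch.sigmoid(self.node_logit)   # regional importance
        edge_p = torch.sigmoid(self.edge_logit)   # connective importance
        h = (adj * edge_p) @ (x * node_p.unsqueeze(1))
        return self.lin(h), node_p, edge_p

    def sparsity_loss(self):
        # L1 pressure drives most gates toward zero, leaving signature
        # ROIs and disease-specific connections with high probability.
        return (torch.sigmoid(self.node_logit).sum()
                + torch.sigmoid(self.edge_logit).sum())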
OI Zhou, Houliang/0000-0002-5793-3042; Shen, Li/0000-0002-5443-0503; He,
Lifang/0000-0001-7810-9071; Zhang, Yu/0000-0003-4087-6544
ZA 0
ZR 0
TC 0
ZS 0
Z8 0
ZB 0
Z9 0
EI 1558-254X
DA 2024-07-26
UT MEDLINE:39042528
PM 39042528
ER
PT J
AU He, Yufang
Liu, Zeyu
Qi, Mingxin
Ding, Shengwei
Zhang, Peng
Song, Fan
Ma, Chenbin
Wu, Huijie
Cai, Ruxin
Feng, Youdan
Zhang, Haonan
Zhang, Tianyi
Zhang, Guanglei
TI PST-Diff: Achieving High-consistency Stain Transfer by Diffusion Models
with Pathological and Structural Constraints.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3430825
DT Journal Article
PD 2024-Jul-18
PY 2024
AB Histopathological examinations heavily rely on hematoxylin and eosin
(HE) and immunohistochemistry (IHC) staining. IHC staining can offer
more accurate diagnostic details but it brings significant financial and
time costs. Furthermore, either re-staining HE-stained slides or using
adjacent slides for IHC may compromise the accuracy of pathological
diagnosis due to information loss. To address these challenges, we
develop PST-Diff, a method for generating virtual IHC images from HE
images based on diffusion models, which allows pathologists to
simultaneously view multiple staining results from the same tissue
slide. To maintain the pathological consistency of the stain transfer,
we propose the asymmetric attention mechanism (AAM) and latent transfer
(LT) module in PST-Diff. Specifically, the AAM can retain more local
pathological information of the source domain images through the design
of asymmetric attention mechanisms, while ensuring the model's
flexibility in generating virtual stained images that highly confirm to
the target domain. Subsequently, the LT module transfers the implicit
representations across different domains, effectively alleviating the
bias introduced by direct connection and further enhancing the
pathological consistency of PST-Diff. Furthermore, to maintain the
structural consistency of the stain transfer, the conditional frequency
guidance (CFG) module is proposed to precisely control image generation
and preserve structural details according to the frequency recovery
process. To conclude, the pathological and structural consistency
constraints provide PST-Diff with effectiveness and superior
generalization in generating stable and functionally pathological IHC
images with the best evaluation score. In general, PST-Diff offers
prospective application in clinical virtual staining and pathological
image analysis.
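A minimal PyTorch sketch of frequency-domain structural guidance in the spirit of the CFG module, with a simple centered low-pass disk and the diffusion sampling loop left abstract (both illustrative assumptions): re-imposing the source image's low-frequency band on the current estimate keeps anatomy fixed while the staining appearance is free to change.

import torch

def impose_low_frequencies(estimate, source, cutoff=0.1):
    # estimate, source: (B, C, H, W); cutoff is the normalized radius of
    # the centered band taken from the source instead of the estimate.
    B, C, H, W = estimate.shape
    fe = torch.fft.fftshift(torch.fft.fft2(estimate), dim=(-2, -1))
    fs = torch.fft.fftshift(torch.fft.fft2(source), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    r = (((yy - H / 2) / H) ** 2 + ((xx - W / 2) / W) ** 2).sqrt()
    low = (r < cutoff).to(estimate.dtype)          # centered disk mask
    mixed = fs * low + fe * (1 - low)
    return torch.fft.ifft2(torch.fft.ifftshift(mixed, dim=(-2, -1))).real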
OI Liu, Zeyu/0000-0002-9342-1247; Zhang, Peng/0000-0002-3879-5860
TC 0
Z8 0
ZA 0
ZB 0
ZS 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-07-20
UT MEDLINE:39024079
PM 39024079
ER
PT J
AU Wang, Hongqiu
Yang, Guang
Zhang, Shichen
Qin, Jing
Guo, Yike
Xu, Bo
Jin, Yueming
Zhu, Lei
TI Video-Instrument Synergistic Network for Referring Video Instrument
Segmentation in Robotic Surgery.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3426953
DT Journal Article
PD 2024-Jul-11
PY 2024
AB Surgical instrument segmentation is fundamentally important for
facilitating cognitive intelligence in robot-assisted surgery. Although
existing methods have achieved accurate instrument segmentation results,
they simultaneously generate segmentation masks of all instruments,
which lack the capability to specify a target object and allow an
interactive experience. This paper focuses on a novel and essential task
in robotic surgery, i.e., Referring Surgical Video Instrument
Segmentation (RSVIS), which aims to automatically identify and segment
the target surgical instruments from each video frame, referred by a
given language expression. This interactive feature offers enhanced user
engagement and customized experiences, greatly benefiting the
development of the next generation of surgical education systems. To
achieve this, this paper constructs two surgery video datasets to
promote the RSVIS research. Then, we devise a novel Video-Instrument
Synergistic Network (VIS-Net) to learn both video-level and
instrument-level knowledge to boost performance, while previous work
only utilized video-level information. Meanwhile, we design a
Graph-based Relation-aware Module (GRM) to model the correlation between
multi-modal information (i.e., textual description and video frame) to
facilitate the extraction of instrument-level information. Extensive
experimental results on two RSVIS datasets exhibit that the VIS-Net can
significantly outperform existing state-of-the-art referring
segmentation methods. We will release our code and dataset for future
research.
RI Jin, Yueming/; Yang, Guang/S-5032-2016; Zhu, Lei/
OI Jin, Yueming/0000-0003-3775-3877; Yang, Guang/0000-0001-7344-7733; Zhu,
Lei/0000-0003-3871-663X
ZA 0
Z8 0
TC 0
ZB 0
ZS 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-07-14
UT MEDLINE:38990752
PM 38990752
ER
PT J
AU Kreitner, Linus
Paetzold, Johannes C.
Rauch, Nikolaus
Chen, Chen
Hagag, Ahmed M.
Fayed, Alaa E.
Sivaprasad, Sobha
Rausch, Sebastian
Weichsel, Julian
Menze, Bjoern H.
Harders, Matthias
Knier, Benjamin
Rueckert, Daniel
Menten, Martin J.
TI Synthetic Optical Coherence Tomography Angiographs for Detailed Retinal
Vessel Segmentation Without Human Annotations
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2061
EP 2073
DI 10.1109/TMI.2024.3354408
DT Article
PD JUN 2024
PY 2024
AB Optical coherence tomography angiography (OCTA) is a non-invasive
imaging modality that can acquire high-resolution volumes of the retinal
vasculature and aid the diagnosis of ocular, neurological and cardiac
diseases. Segmenting the visible blood vessels is a common first step
when extracting quantitative biomarkers from these images. Classical
segmentation algorithms based on thresholding are strongly affected by
image artifacts and limited signal-to-noise ratio. The use of modern,
deep learning-based segmentation methods has been inhibited by a lack of
large datasets with detailed annotations of the blood vessels. To
address this issue, recent work has employed transfer learning, where a
segmentation network is trained on synthetic OCTA images and is then
applied to real data. However, the previously proposed simulations fail
to faithfully model the retinal vasculature and do not provide effective
domain adaptation. Because of this, current methods are unable to fully
segment the retinal vasculature, in particular the smallest capillaries.
In this work, we present a lightweight simulation of the retinal
vascular network based on space colonization for faster and more
realistic OCTA synthesis. We then introduce three contrast adaptation
pipelines to decrease the domain gap between real and artificial images.
We demonstrate the superior segmentation performance of our approach in
extensive quantitative and qualitative experiments on three public
datasets that compare our method to traditional computer vision
algorithms and supervised training using human annotations. Finally, we
make our entire pipeline publicly available, including the source code,
pretrained models, and a large dataset of synthetic OCTA images.
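A minimal 2D space-colonization growth step in the spirit of the vascular simulation above; the 2D setting, step size, and kill radius are illustrative assumptions. Each attraction point pulls its nearest vessel node, every pulled node grows toward the mean pull direction, and attraction points that have been reached are removed.

import numpy as np

def grow(nodes, attractors, step=0.02, kill_radius=0.03):
    nodes = list(nodes)
    d = np.linalg.norm(attractors[:, None] - np.array(nodes)[None], axis=2)
    nearest = d.argmin(axis=1)        # each attractor's nearest node
    for i in set(nearest):
        pull = attractors[nearest == i] - nodes[i]
        direction = pull.mean(axis=0)
        direction /= np.linalg.norm(direction) + 1e-9
        nodes.append(nodes[i] + step * direction)   # new vessel segment
    d_new = np.linalg.norm(attractors[:, None] - np.array(nodes)[None], axis=2)
    keep = d_new.min(axis=1) > kill_radius          # drop consumed points
    return np.array(nodes), attractors[keep]

nodes = np.array([[0.5, 0.5]])                      # seed of the tree
attractors = np.random.rand(300, 2)                 # target perfusion area
for _ in range(50):
    nodes, attractors = grow(nodes, attractors)
    if len(attractors) == 0:
        break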
RI Paetzold, Johannes/KBD-4403-2024; Chen, Chen/AAM-1881-2021; Rueckert, Daniel/C-4393-2008; Menze, Bjoern/JOK-2720-2023; Sivaprasad, S./D-6876-2015; Sivaprasad, Sobha/; Fayed, Alaa E./; Paetzold, Johannes C./; Rauch, Nikolaus/; Chen, Chen/; Kreitner, Linus/
OI Sivaprasad, Sobha/0000-0001-8952-0659; Fayed, Alaa
E./0000-0003-2634-3880; Paetzold, Johannes C./0000-0002-4844-6955;
Rauch, Nikolaus/0000-0003-1326-709X; Chen, Chen/0000-0002-3525-9755;
Kreitner, Linus/0009-0004-5727-6697
Z8 0
ZR 0
TC 6
ZB 5
ZA 0
ZS 0
Z9 6
C1 Tech Univ Munich TUM, Klinikum Rechts Isar MRI, Inst AI Healthcare &
Med, D-81675 Munich, Germany
C1 Imperial Coll London, BioMedIA, London SW72BX, England
C1 Helmholtz Zentrum Munchen, ITERM Inst, D-81377 Munich, Germany
C1 Univ Innsbruck, Interact Graph & Simulat Grp, A-6020 Innsbruck, Austria
C1 Univ Oxford, Oxford BioMedIA Grp, Oxford OX1 2JD, England
C1 Univ Sheffield, Dept Comp Sci, Sheffield S10 2TN, England
C1 Moorfields Eye Hosp NHS Fdn Trust, NIHR Moorfields Biomed Res Ctr,
London EC1V 2PD, England
C1 Boehringer Ingelheim Ltd, Bracknell RG12 8YS, England
C1 Cairo Univ, Kasr Al Ainy Sch Med, Dept Ophthalmol, Giza 12613, Egypt
C1 Watany Eye Hosp, Cairo 12561, Egypt
C1 Heidelberg Engn GmbH, D-69115 Heidelberg, Germany
C1 Univ Zurich, Dept Quant Biomed, CH-8006 Zurich, Switzerland
C1 Tech Univ Munich TUM, Klinikum Rechts Isar MRI, Dept Neurol, D-81675
Munich, Germany
C3 Watany Eye Hosp
C3 Heidelberg Engn GmbH
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100027
PM 38224512
ER
PT J
AU Leynes, Andrew P.
Deveshwar, Nikhil
Nagarajan, Srikantan S.
Larson, Peder E. Z.
TI Scan-Specific Self-Supervised Bayesian Deep Non-Linear Inversion for
Undersampled MRI Reconstruction
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2358
EP 2369
DI 10.1109/TMI.2024.3364911
DT Article
PD JUN 2024
PY 2024
AB Magnetic resonance imaging is subject to slow acquisition times due to
the inherent limitations in data sampling. Recently, supervised deep
learning has emerged as a promising technique for reconstructing
sub-sampled MRI. However, supervised deep learning requires a large
dataset of fully-sampled data. Although unsupervised or self-supervised
deep learning methods have emerged to address the limitations of
supervised deep learning approaches, they still require a database of
images. In contrast, scan-specific deep learning methods learn and
reconstruct using only the sub-sampled data from a single scan. Here, we
introduce Scan-Specific Self-Supervised Bayesian Deep Non-Linear
Inversion (DNLINV) that does not require an auto calibration scan
region. DNLINV utilizes a Deep Image Prior-type generative modeling
approach and relies on approximate Bayesian inference to regularize the
deep convolutional neural network. We demonstrate our approach on
several anatomies, contrasts, and sampling patterns and show improved
performance over existing approaches in scan-specific calibrationless
parallel imaging and compressed sensing.
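A minimal scan-specific, Deep-Image-Prior-style reconstruction sketch, without the approximate Bayesian inference the paper adds on top (an explicit simplification; the network, shapes, and sampling mask are illustrative): a small CNN maps a fixed random code to an image and is fitted only against the undersampled k-space of the single scan at hand.

import torch
import torch.nn as nn

H = W = 64
mask = torch.rand(H, W) < 0.3                   # k-space sampling pattern
kspace = torch.randn(H, W, dtype=torch.complex64) * mask  # stand-in data

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),             # real + imaginary channels
)
z = torch.randn(1, 8, H, W)                     # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    out = net(z)
    img = torch.complex(out[:, 0], out[:, 1])[0]
    pred_k = torch.fft.fft2(img, norm="ortho")
    # Data consistency only on the acquired samples; the network
    # architecture itself acts as the implicit image prior.
    loss = (torch.abs(pred_k - kspace) ** 2 * mask).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()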
OI Nagarajan, Srikantan/0000-0001-7209-3857; Deveshwar,
Nikhil/0000-0002-5516-3769
ZB 0
Z8 0
TC 1
ZA 0
ZS 0
ZR 0
Z9 1
C1 Univ Calif San Francisco, Dept Radiol & Biomed Imaging, San Francisco,
CA 94158 USA
C1 UC Berkeley UC San Francisco Joint Grad Program Bi, Berkeley, CA 94720
USA
C3 UC Berkeley UC San Francisco Joint Grad Program Bi
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100013
PM 38335079
ER
PT J
AU Wang, Haiqiao
Ni, Dong
Wang, Yi
TI Recursive Deformable Pyramid Network for Unsupervised Medical Image
Registration
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 6
BP 2229
EP 2240
DI 10.1109/TMI.2024.3362968
DT Article
PD JUN 2024
PY 2024
AB Complicated deformation problems are frequently encountered in medical
image registration tasks. Although various advanced registration models
have been proposed, accurate and efficient deformable registration
remains challenging, especially for handling the large volumetric
deformations. To this end, we propose a novel recursive deformable
pyramid (RDP) network for unsupervised non-rigid registration. Our
network is a pure convolutional pyramid, which fully utilizes the
advantages of the pyramid structure itself, but does not rely on any
high-weight attentions or transformers. In particular, our network
leverages a step-by-step recursion strategy with the integration of
high-level semantics to predict the deformation field from coarse to
fine, while ensuring the rationality of the deformation field.
Meanwhile, due to the recursive pyramid strategy, our network can
effectively attain deformable registration without separate affine
pre-alignment. We compare the RDP network with several existing
registration methods on three public brain magnetic resonance imaging
(MRI) datasets, including LPBA, Mindboggle and IXI. Experimental results
demonstrate our network consistently outcompetes state of the art with
respect to the metrics of Dice score, average symmetric surface
distance, Hausdorff distance, and Jacobian. Even for the data without
the affine pre-alignment, our network maintains satisfactory performance
on compensating for the large deformation. The code is publicly
available at https://github.com/ZAX130/RDP.
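A minimal coarse-to-fine recursion step in the spirit of the pyramid above, with the residual-flow predictor left abstract; the two-level setup and the pixel-unit flow convention are illustrative assumptions. The coarse flow is upsampled (and doubled, since displacements are measured in pixels), used to warp the moving image, and then refined by a residual flow predicted at the finer level.

import torch
import torch.nn.functional as F

def warp(img, flow):
    # img: (B, 1, H, W); flow: (B, 2, H, W) displacements in pixels.
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    gx = (xs + flow[:, 0]) / (W - 1) * 2 - 1    # normalize to [-1, 1]
    gy = (ys + flow[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)        # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def refine(flow_coarse, moving, predict_residual):
    flow = 2.0 * F.interpolate(flow_coarse, scale_factor=2,
                               mode="bilinear", align_corners=True)
    warped = warp(moving, flow)
    return flow + predict_residual(warped)      # finer-level update

moving = torch.rand(1, 1, 64, 64)
flow_c = torch.zeros(1, 2, 32, 32)
flow_f = refine(flow_c, moving, lambda w: torch.zeros(1, 2, 64, 64))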
RI Wang, Haiqiao/IQS-6586-2023; Wang, Haiqiao/
OI Wang, Haiqiao/0009-0008-5086-3628
Z8 0
ZA 0
ZS 0
ZB 1
TC 7
ZR 0
Z9 7
C1 Shenzhen Univ, Natl Reg Key Technol Engn Lab Med Ultrasound, Sch Biomed
Engn, Shenzhen 518060, Peoples R China
C1 Shenzhen Univ, Sch Biomed Engn, Guangdong Key Lab Biomed Measurements &
Ultrasoun, Shenzhen 518060, Peoples R China
C1 Shenzhen Univ, Med Sch, Shenzhen 518060, Peoples R China
C1 Smart Med Imaging Learning & Engn SMILE Lab, Shenzhen 518060, Peoples R
China
C1 Med UltraSound Image Comp MUSIC Lab, Shenzhen 518060, Peoples R China
C3 Smart Med Imaging Learning & Engn SMILE Lab
C3 Med UltraSound Image Comp MUSIC Lab
SN 0278-0062
EI 1558-254X
DA 2024-06-23
UT WOS:001243728100023
PM 38319758
ER
PT J
AU Bayasi, Nourhan
Hamarneh, Ghassan
Garbi, Rafeef
TI GC2: Generalizable Continual Classification of Medical Images.
SO IEEE transactions on medical imaging
VL PP
DI 10.1109/TMI.2024.3398533
DT Journal Article
PD 2024-May-08
PY 2024
AB Deep learning models have achieved remarkable success in medical image
classification. These models are typically trained once on the available
annotated images and thus lack the ability of continually learning new
tasks (i.e., new classes or data distributions) due to the problem of
catastrophic forgetting. Recently, there has been more interest in
designing continual learning methods to learn different tasks presented
sequentially over time while preserving previously acquired knowledge.
However, these methods focus mainly on preventing catastrophic
forgetting and are tested under a closed-world assumption; i.e.,
assuming the test data is drawn from the same distribution as the
training data. In this work, we advance the state-of-the-art in
continual learning by proposing GC2 for medical image classification,
which learns a sequence of tasks while simultaneously enhancing its
out-of-distribution robustness. To alleviate forgetting, GC2 employs a
gradual culpability-based network pruning to identify an optimal
subnetwork for each task. To improve generalization, GC2 incorporates
adversarial image augmentation and knowledge distillation approaches for
learning generalized and robust representations for each subnetwork. Our
extensive experiments on multiple benchmarks in a task-agnostic
inference demonstrate that GC2 significantly outperforms baselines and
other continual learning methods in reducing forgetting and enhancing
generalization. Our code is publicly available at the following link:
https://github.com/nourhanb/TMI2024-GC2.
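A minimal per-task subnetwork-mask sketch, with plain magnitude pruning substituted for the gradual culpability-based criterion described above (a deliberate stand-in): each task freezes its own binary mask over the weights, so training a later task cannot overwrite the weights an earlier task relies on.

import torch
import torch.nn as nn

@torch.no_grad()
def task_mask(layer, keep_ratio=0.3, taken=None):
    score = layer.weight.abs().clone()
    if taken is not None:
        score[taken] = -1.0                 # owned by earlier tasks
    k = int(keep_ratio * score.numel())
    thresh = score.flatten().topk(k).values.min()
    return score >= thresh                  # boolean mask for this task

layer = nn.Linear(128, 64)
mask_t1 = task_mask(layer)                  # subnetwork for task 1
mask_t2 = task_mask(layer, taken=mask_t1)   # disjoint mask for task 2
# During task-2 training, gradients on task-1 weights are zeroed out:
layer.weight.grad = torch.ones_like(layer.weight)
layer.weight.grad[mask_t1] = 0.0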
OI Bayasi, Nourhan/0000-0003-4653-6081
Z8 0
ZB 0
ZS 0
ZA 0
TC 0
ZR 0
Z9 0
EI 1558-254X
DA 2024-05-10
UT MEDLINE:38717881
PM 38717881
ER
PT J
AU Zhong, Yutian
Zhang, Shuangyang
Liu, Zhenyang
Zhang, Xiaoming
Mo, Zongxin
Zhang, Yizhe
Hu, Haoyu
Chen, Wufan
Qi, Li
TI Unsupervised Fusion of Misaligned PAT and MRI Images via Mutually
Reinforcing Cross-Modality Image Generation and Registration
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 5
BP 1702
EP 1714
DI 10.1109/TMI.2023.3347511
DT Article
PD MAY 2024
PY 2024
AB Photoacoustic tomography (PAT) and magnetic resonance imaging (MRI) are
two advanced imaging techniques widely used in pre-clinical research.
PAT has high optical contrast and deep imaging range but poor soft
tissue contrast, whereas MRI provides excellent soft tissue information
but poor temporal resolution. Despite recent advances in medical image
fusion with pre-aligned multimodal data, PAT-MRI image fusion remains
challenging due to misaligned images and spatial distortion. To address
these issues, we propose an unsupervised multi-stage deep learning
framework called PAMRFuse for misaligned PAT and MRI image fusion.
PAMRFuse comprises a multimodal to unimodal registration network to
accurately align the input PAT-MRI image pairs and a self-attentive
fusion network that selects information-rich features for fusion. We
employ an end-to-end mutually reinforcing mode in our registration
network, which enables joint optimization of cross-modality image
generation and registration. To the best of our knowledge, this is the
first attempt at information fusion for misaligned PAT and MRI.
Qualitative and quantitative experimental results show the excellent
performance of our method in fusing PAT-MRI images of small animals
captured from commercial imaging systems.
RI Zhang, Shuangyang/AHE-4902-2022; zhong, YUTIAN/LDE-9042-2024; Chen, Wufan/; Zhang, Shuangyang/; Zhong, Yutian/JMQ-6334-2023
OI Chen, Wufan/0000-0002-4125-7537; Zhang, Shuangyang/0000-0001-8025-4737;
Zhong, Yutian/0009-0006-9654-9090
Z8 0
ZB 0
ZA 0
TC 4
ZR 0
ZS 0
Z9 4
C1 Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Guangdong, Peoples
R China
C1 Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou
510515, Guangdong, Peoples R China
C1 Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec,
Guangzhou 510515, Guangdong, Peoples R China
C1 Southern Med Univ, Zhujiang Hosp, Dept Hepatobiliary Surg, Guangzhou
510515, Guangdong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-23
UT WOS:001214547800017
PM 38147426
ER
PT J
AU Liu, Bingxue
Wang, Yongchao
Fomin-Thunemann, Natalie
Thunemann, Martin
Kilic, Kivilcim
Devor, Anna
Cheng, Xiaojun
Tan, Jiyong
Jiang, John
Boas, David A.
Tang, Jianbo
TI Time-Lagged Functional Ultrasound for Multi-Parametric Cerebral
Hemodynamic Imaging
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 638
EP 648
DI 10.1109/TMI.2023.3314734
DT Article
PD FEB 2024
PY 2024
AB We introduce an ultrasound speckle decorrelation-based time-lagged
functional ultrasound technique (tl-fUS) for the quantification of the
relative changes in cerebral blood flow speed (rCBF$_{\text{speed}}$),
cerebral blood volume (rCBV) and cerebral blood flow (rCBF) during
functional stimulations. Numerical simulations, phantom validations, and
in vivo mouse brain experiments were performed to test the capability of
tl-fUS to parse out and quantify the ratio change of these hemodynamic
parameters. The blood volume change was found to be more prominent in
arterioles compared to venules and the peak blood flow changes were
around 2.5 times the peak blood volume change during brain activation,
agreeing with previous observations in the literature. The tl-fUS shows
the ability of distinguishing the relative changes of rCBFspeed, rCBV,
and rCBF, which can inform specific physiological interpretations of the
fUS measurements.
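A minimal NumPy sketch of time-lagged speckle decorrelation using a generic normalized autocorrelation estimator (an assumed simplification of the paper's model): each pixel's complex signal is correlated with itself at increasing frame lags, and faster flow decorrelates at shorter lags.

import numpy as np

def lag_correlation(frames, max_lag=5):
    # frames: (T, H, W) complex beamformed frames. Returns, per lag,
    # the normalized correlation magnitude at every pixel.
    power = np.mean(np.abs(frames) ** 2, axis=0) + 1e-12
    out = []
    for lag in range(1, max_lag + 1):
        num = np.mean(frames[:-lag] * np.conj(frames[lag:]), axis=0)
        out.append(np.abs(num) / power)
    return np.stack(out)                    # (max_lag, H, W)

frames = np.random.randn(40, 32, 32) + 1j * np.random.randn(40, 32, 32)
corr = lag_correlation(frames)              # decay over lag encodes speed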
RI Wang, YongChao/ABG-5837-2021; Cheng, Xiaojun/A-8947-2019; Tang, Jianbo/GWQ-7568-2022; Boas, David/; Tan, Jiyong/JAC-7779-2023; , John/; wang, yongchao/; Liu, Bingxue/
OI Boas, David/0000-0002-6709-7711; Tan, Jiyong/0000-0001-6356-1743; ,
John/0000-0002-4636-0701; wang, yongchao/0000-0002-6637-0157; Liu,
Bingxue/0000-0002-7335-4710
ZR 0
Z8 0
TC 0
ZB 0
ZA 0
ZS 0
Z9 0
C1 Boston Univ, Dept Biomed Engn, Boston, MA 02215 USA
C1 Southern Univ Sci & Technol, Dept Biomed Engn, Shenzhen 518055,
Guangdong, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400018
PM 37703138
ER
PT J
AU Tan, Yubo
Shen, Wen-Da
Wu, Ming-Yuan
Liu, Gui-Na
Zhao, Shi-Xuan
Chen, Yang
Yang, Kai-Fu
Li, Yong-Jie
TI Retinal Layer Segmentation in OCT Images With Boundary Regression and
Feature Polarization
SO IEEE TRANSACTIONS ON MEDICAL IMAGING
VL 43
IS 2
BP 686
EP 700
DI 10.1109/TMI.2023.3317072
DT Article
PD FEB 2024
PY 2024
AB The geometry of retinal layers is an important imaging feature for the
diagnosis of some ophthalmic diseases. In recent years, retinal layer
segmentation methods for optical coherence tomography (OCT) images have
emerged one after another, and huge progress has been achieved. However,
challenges due to interference factors such as noise, blurring, fundus
effusion, and tissue artifacts remain in existing methods, primarily
manifesting as intra-layer false positives and inter-layer boundary
deviation. To solve these problems, we propose a method called Tightly
combined Cross-Convolution and Transformer with Boundary regression and
feature Polarization (TCCT-BP). This method uses a hybrid architecture
of CNN and lightweight Transformer to improve the perception of retinal
layers. In addition, a feature grouping and sampling method and the
corresponding polarization loss function are designed to maximize the
differentiation of the feature vectors of different retinal layers, and
a boundary regression loss function is devised to constrain the retinal
boundary distribution for a better fit to the ground truth. Extensive
experiments on four benchmark datasets demonstrate that the proposed
method achieves state-of-the-art performance in dealing with problems of
false positives and boundary distortion. The proposed method ranked
first in the OCT Layer Segmentation task of GOALS challenge held by
MICCAI 2022. The source code is available at
https://www.github.com/tyb311/TCCT.
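A minimal PyTorch sketch of boundary regression via column-wise soft-argmax; the per-column formulation and the L1 penalty are illustrative assumptions about how such a loss can pull the predicted boundary distribution toward the ground truth.

import torch
import torch.nn.functional as F

def boundary_regression_loss(logits, gt_rows):
    # logits: (B, H, W) per-column boundary evidence for one layer;
    # gt_rows: (B, W) ground-truth boundary row index per A-scan column.
    B, H, W = logits.shape
    prob = F.softmax(logits, dim=1)              # distribution over rows
    rows = torch.arange(H, dtype=torch.float32, device=logits.device)
    expected = (prob * rows[None, :, None]).sum(dim=1)   # soft-argmax
    return F.l1_loss(expected, gt_rows.float())

logits = torch.randn(2, 128, 64, requires_grad=True)
gt = torch.randint(0, 128, (2, 64))
boundary_regression_loss(logits, gt).backward()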
RI chen, yang/LBH-3305-2024; liu, guina/; Tan, Yubo/; Wu, Mingyuan/; Chen, Yang/; Zhao, Shixuan/; Li, Yongjie/
OI liu, guina/0000-0002-6733-4262; Tan, Yubo/0000-0002-7495-5446; Wu,
Mingyuan/0009-0008-6803-7797; Chen, Yang/0000-0002-3667-4862; Zhao,
Shixuan/0000-0001-7387-2799; Li, Yongjie/0000-0002-7395-3131
ZA 0
ZB 0
ZS 0
Z8 1
ZR 0
TC 6
Z9 7
C1 Univ Elect Sci & Technol China, MOE Key Lab Neuroinformat, Radiat Oncol
Key Lab Sichuan Prov, Chengdu 610054, Peoples R China
C1 Changchun Univ Sci & Technol, Sch Optoelect Engn, Changchun 130022,
Peoples R China
C1 Nanjing Univ Sci & Technol, Taizhou Inst Sci & Technol, Taizhou 225306,
Peoples R China
C1 Sichuan Univ, West China Hosp, Chengdu 610044, Peoples R China
SN 0278-0062
EI 1558-254X
DA 2024-05-25
UT WOS:001203303400022
PM 37725718
ER
EF