This repository contains the code for the paper **ECHOPulse: ECG Controlled Echocardiograms Video Generation**, which aims to generate echocardiogram (ECHO) videos conditioned on ECG signals. Model weights have been released.
Echocardiography (ECHO) is essential for cardiac assessment, but its video quality and interpretation rely heavily on manual expertise, leading to inconsistent results across clinical and portable devices. ECHO video generation offers a solution by improving automated monitoring through synthetic data and generating high-quality videos from routine health data. However, existing models often face high computational costs, slow inference, and rely on complex conditional prompts that require expert annotations. To address these challenges, we propose ECHOPulse, an ECG-conditioned ECHO video generation model. ECHOPulse introduces two key advancements: (1) it accelerates ECHO video generation by leveraging VQ-VAE tokenization and masked visual token modeling for fast decoding, and (2) it conditions on readily accessible ECG signals, which are highly coherent with ECHO videos, bypassing complex conditional prompts. To the best of our knowledge, this is the first work to use time-series prompts like ECG signals for ECHO video generation. ECHOPulse not only enables controllable synthetic ECHO data generation but also provides updated cardiac function information for disease monitoring and prediction beyond ECG alone. Evaluations on three public and private datasets demonstrate state-of-the-art performance in ECHO video generation across both qualitative and quantitative measures. Additionally, ECHOPulse can be easily generalized to other modality generation tasks, such as cardiac MRI, fMRI, and 3D CT generation.
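As a rough illustration of the masked visual token modeling behind the fast decoding, here is a minimal, self-contained sketch of MaskGIT-style iterative parallel decoding. This is **not** the repository's code: the codebook size, step count, schedule, and the dummy predictor (which in the real model would be a transformer conditioned on the ECG embedding) are all illustrative assumptions.

```python
import numpy as np

MASK = -1      # sentinel id marking a still-masked token slot (illustrative)
VOCAB = 1024   # hypothetical codebook size of the VQ-VAE tokenizer
STEPS = 8      # number of parallel-decoding iterations (assumed)

def dummy_token_predictor(tokens, rng):
    """Stand-in for the transformer: returns per-slot logits over the codebook.
    The real model would condition these predictions on the ECG signal."""
    return rng.standard_normal((tokens.shape[0], VOCAB))

def masked_decode(num_tokens, steps=STEPS, seed=0):
    """Start from an all-masked sequence and, at each step, commit the
    most confident predictions while re-predicting the rest (MaskGIT-style)."""
    rng = np.random.default_rng(seed)
    tokens = np.full(num_tokens, MASK, dtype=np.int64)
    for step in range(steps):
        logits = dummy_token_predictor(tokens, rng)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)
        conf[tokens != MASK] = np.inf  # committed slots are never re-masked
        # cosine schedule: how many slots remain masked after this step
        keep_masked = int(num_tokens * np.cos((step + 1) / steps * np.pi / 2))
        order = np.argsort(conf)        # ascending confidence
        commit = order[keep_masked:]    # everything but the least confident
        newly = commit[tokens[commit] == MASK]
        tokens[newly] = pred[newly]
    return tokens

video_tokens = masked_decode(num_tokens=64)  # would go to the VQ-VAE decoder
```

Because many tokens are committed per step, the whole token sequence is produced in a handful of forward passes rather than one pass per token, which is the source of the speedup over autoregressive or diffusion decoders.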
We visualize each set of ECG-conditioned echocardiograms and compare them: the first row shows predictions, the second row the ground truth.
Type | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Set 6 |
---|---|---|---|---|---|---|
Prediction | | | | | | |
Ground Truth | | | | | | |
✨ We also apply ECHOPulse to ECG signals from general IoT devices (an ⌚ Apple Watch in this case) and obtain the following visualization.
ECG Signal from Apple Watch v9 | Generated Echocardiogram |
---|---|
Methods | Dataset | MSE | MAE |
---|---|---|---|
EchoNet-Synthetic | Echo-Dynamic | | |
ECHOPulse (Only natural videos) | Echo-Dynamic | | |
🏆 ECHOPulse (Domain transfer) | Echo-Dynamic | | |
ECHOPulse (Only natural videos) | Private data | | |
🏆 ECHOPulse (Domain transfer) | Private data | | |
Methods | Condition | Sampling time ⬇️ | R² ⬆️ | MAE ⬇️ | RMSE ⬇️ | Parameters |
---|---|---|---|---|---|---|
EchoDiffusion | Image+EF | Gen. 146s | 0.89 | 4.81 | 6.69 | 381M |
ECHOPulse | Image+ECG | Gen. 6.4s | 0.85 | 2.51 | 2.86 | 279M |
Methods | Dataset | Condition | A2C FID↓ | A2C FVD↓ | A2C SSIM↑ | A4C FID↓ | A4C FVD↓ | A4C SSIM↑ |
---|---|---|---|---|---|---|---|---|
MoonShot | CAMUS | Text | 48.44 | 202.41 | 0.63 | 61.57 | 290.08 | 0.62 |
VideoComposer | CAMUS | Text | 37.68 | 164.96 | 0.60 | 35.04 | 180.32 | 0.61 |
HeartBeat | CAMUS | Text | 107.66 | 305.12 | 0.53 | 76.46 | 381.28 | 0.53 |
HeartBeat | CAMUS | Text&Sketch&Mask... | 25.23 | 97.28 | 0.66 | 31.99 | 159.36 | 0.65 |
ECHOPulse (Only natural videos) | CAMUS | Text | 12.71 | 273.15 | 0.61 | 15.38 | 336.04 | 0.58
ECHOPulse (Domain transfer) | CAMUS | Text | 5.65 | 211.85 | 0.79 | 8.17 | 283.32 | 0.75
EchoDiffusion-4SCM | Echo-Dynamic | Text&LVEF | - | - | - | 24.00 | 228.00 | 0.48 |
EchoNet-Synthetic (Video Editing) | Echo-Dynamic | LVEF | - | - | - | 16.90 | 87.40 | - |
ECHOPulse (Only natural videos) | Echo-Dynamic | Text | 36.10 | 319.25 | 0.39 | 44.21 | 334.95 | 0.35
ECHOPulse (Domain transfer) | Echo-Dynamic | Text | 27.50 | 249.46 | 0.46 | 29.83 | 312.31 | 0.41
EchoDiffusion-4SCM | Private data | Text&LVEF | 20.71 | 379.43 | 0.55 | 23.20 | 390.17 | 0.53 |
EchoNet-Synthetic (Video Editing) | Private data | LVEF | 18.39 | 91.29 | 0.56 | 26.13 | 120.91 | 0.55 |
ECHOPulse (Only natural videos) | Private data | Text | 27.49 | 291.67 | 0.53 | 34.13 | 374.92 | 0.51
ECHOPulse (Domain transfer) | Private data | Text | 25.44 | 224.90 | 0.54 | 31.21 | 334.09 | 0.54
🥈 ECHOPulse (Only natural videos) | Private data | ECG | 18.73 | 200.45 | 0.56 | 27.37 | 302.89 | 0.55
🥇 ECHOPulse (Domain transfer) | Private data | ECG | 15.50 | 82.44 | 0.67 | 20.82 | 107.40 | 0.66
```shell
# create and activate the environment, then install dependencies
conda create -n ECHOPulse python=3.8
conda activate ECHOPulse
pip install -r requirements.txt
```

```shell
# two-stage training
python step1_train.py
python step2_train.py
```
Model weights should be downloaded and placed in the `Model_weights` folder. The ECG foundation model used in this repo is ST-MEM.
Run inference with the `echo_inference.ipynb` notebook.
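The model conditions on an ECG signal, and consumer devices record at varying sampling rates, so some resampling and normalization is typically needed before inference. The sketch below is illustrative only: the target rate, fixed input length, and the `prepare_ecg` helper are assumptions, not this repository's actual preprocessing (check the notebook and the ST-MEM preprocessing for the real values).

```python
import numpy as np

def prepare_ecg(signal, src_hz, target_hz=500, target_len=5000):
    """Resample a 1-D ECG trace to target_hz via linear interpolation,
    z-score normalize it, and pad/crop to a fixed length.
    target_hz/target_len are assumed values, not the repo's constants."""
    signal = np.asarray(signal, dtype=np.float64)
    duration = len(signal) / src_hz
    n_out = int(round(duration * target_hz))
    t_src = np.linspace(0.0, duration, num=len(signal), endpoint=False)
    t_dst = np.linspace(0.0, duration, num=n_out, endpoint=False)
    resampled = np.interp(t_dst, t_src, signal)
    # z-score normalization (guard against a flat trace)
    std = resampled.std()
    normed = (resampled - resampled.mean()) / (std if std > 0 else 1.0)
    # zero-pad or crop to the fixed model input length
    if len(normed) < target_len:
        normed = np.pad(normed, (0, target_len - len(normed)))
    return normed[:target_len]

# e.g. a 10 s trace recorded at 300 Hz
ecg_input = prepare_ecg(np.sin(np.linspace(0, 30, 3000)), src_hz=300)
```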
If you use the code, please cite the following paper:
```bibtex
@article{li2024echopulse,
  title={ECHOPulse: ECG controlled echocardiograms video generation},
  author={Li, Yiwei and Kim, Sekeun and Wu, Zihao and Jiang, Hanqi and Pan, Yi and Jin, Pengfei and Song, Sifan and Shi, Yucheng and Yang, Tianze and Liu, Tianming and others},
  journal={arXiv preprint arXiv:2410.03143},
  year={2024}
}
```